Design High-Performing Architectures Section Exam


A company has developed public APIs hosted in Amazon EC2 instances behind an Elastic Load Balancer. The APIs will be used by various clients from their respective on-premises data centers. A Solutions Architect received a report that the web service clients can only access trusted IP addresses whitelisted on their firewalls. What should you do to accomplish the above requirement? Associate an Elastic IP address to a Network Load Balancer.

A Network Load Balancer (NLB) operates at the fourth layer of the OSI model and can handle millions of requests per second. After receiving a connection request, it selects a target from the target group for the default rule and attempts to open a TCP connection to it on the port specified in the listener configuration. To satisfy the requirement that clients connect only to whitelisted IP addresses, you can associate trusted Elastic IP addresses (EIPs) with a Network Load Balancer; with the Bring Your Own IP (BYOIP) feature, you can even use IP addresses the clients have already whitelisted, so their firewall rules never need to change. Hence, the correct answer is: Associate an Elastic IP address to a Network Load Balancer.

Associating an Elastic IP address to an Application Load Balancer is not viable because an Elastic IP address cannot be assigned to an Application Load Balancer; you could, however, place a Network Load Balancer with an Elastic IP address in front of the Application Load Balancer. Creating a CloudFront distribution with origins pointing to the private IPs of the web servers doesn't meet the requirement because CloudFront serves traffic from a large, changing pool of edge IP addresses that clients cannot reliably whitelist. Creating an Alias record in Route 53 that maps to the DNS name of the load balancer isn't adequate either, because it does not give the load balancer fixed IP addresses, so the firewall-whitelisting restriction remains unresolved.
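As a sketch of how this looks in practice, the snippet below allocates an Elastic IP address and creates an internet-facing Network Load Balancer that exposes it, using boto3. The load balancer name and subnet ID are hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2")
elbv2 = boto3.client("elbv2")

# Allocate an Elastic IP address (with BYOIP, this could come from your own pool).
allocation = ec2.allocate_address(Domain="vpc")

# Create an internet-facing NLB that presents the fixed EIP in one subnet.
elbv2.create_load_balancer(
    Name="public-api-nlb",                       # hypothetical name
    Type="network",
    Scheme="internet-facing",
    SubnetMappings=[{
        "SubnetId": "subnet-0123456789abcdef0",  # hypothetical public subnet
        "AllocationId": allocation["AllocationId"],
    }],
)
```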

A technology company is building a new cryptocurrency trading platform that allows the buying and selling of Bitcoin, Ethereum, Ripple, Tether, and many others. You were hired as a Cloud Engineer to build the required infrastructure for this new trading platform. In your first week at work, you started creating CloudFormation YAML templates that define all of the AWS resources the application needs. Your manager was shocked that you haven't created the EC2 instances, S3 buckets, and other AWS resources straight away. He does not understand the text-based templates you have written and has asked you to explain them. In this scenario, what are the benefits of using the AWS CloudFormation service that you should tell your manager to clarify his concerns? (Select TWO.) Enables modeling, provisioning, and version-controlling of your entire AWS infrastructure Allows you to model your entire infrastructure in a text file

AWS CloudFormation provides a common language to describe and provision all the infrastructure resources in your AWS environment. It allows you to model and provision resources in an automated and secure way using a simple text file, which acts as the single source of truth for your cloud environment. CloudFormation itself is available at no additional charge; you pay only for the AWS resources required to run your applications.

Hence, the correct answers are:
- Enables modeling, provisioning, and version-controlling of your entire AWS infrastructure.
- Allows you to model your entire infrastructure in a text file.

Incorrect options:
- The option mentioning highly durable and scalable data storage is incorrect because CloudFormation is not a data storage service.
- The option stating that it is a storage location for application code is incorrect because CloudFormation does not store application code; that is what AWS CodeCommit is for.
- The option asserting that CloudFormation is entirely free, including the AWS resources created, is incorrect: the service itself is free, but you must pay for the AWS resources you create with it.
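For illustration, here is a minimal sketch of the "infrastructure in a text file" idea: a tiny YAML template held in a Python string and deployed with boto3. The stack name and bucket resource are hypothetical.

```python
import boto3

# The template is plain text, so it can be code-reviewed and version-controlled.
TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  AppBucket:
    Type: AWS::S3::Bucket
"""

cfn = boto3.client("cloudformation")
cfn.create_stack(StackName="trading-platform", TemplateBody=TEMPLATE)  # hypothetical stack name
cfn.get_waiter("stack_create_complete").wait(StackName="trading-platform")
```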

A new online banking platform has been re-designed to have a microservices architecture in which complex applications are decomposed into smaller, independent services. The new platform uses Kubernetes, and the application containers are optimally configured for running small, decoupled services. The new solution should remove the need to provision and manage servers, let you specify and pay for resources per application as well as improve security through application isolation by design. Which of the following is the MOST suitable solution to implement to launch this new platform to AWS? Use AWS Fargate on Amazon EKS with Service Auto Scaling to run the containerized banking platform

AWS Fargate is a serverless compute engine for containers that works with both Amazon ECS and Amazon EKS, letting developers focus on building applications instead of provisioning and managing servers. It lets you specify and pay for resources per application, and it improves security through application isolation by design: each task or pod runs in its own isolated compute environment. With Fargate, you pay only for the resources your containers actually use, avoiding over-provisioning and extra server costs. Given these features, the correct answer is: Use AWS Fargate on Amazon EKS with Service Auto Scaling to run the containerized banking platform.

The option using Amazon ECS to run Kubernetes clusters on AWS Fargate is incorrect because ECS runs Docker containers, not Kubernetes; a Kubernetes workload requires Amazon EKS. Hosting the application in Amazon EMR Serverless with EBS storage featuring fast snapshot restore is also unsuitable: despite removing server management, it would require extensive additional configuration to run a Kubernetes cluster, making Amazon EKS with AWS Fargate the more streamlined solution, and the fast snapshot restore feature is unwarranted because the scenario has no data replication or aggressive RTO/RPO requirement. Deploying an Amazon EKS cluster on AWS Outposts is incorrect because maintaining a physical server rack contradicts the serverless requirement; moreover, Amazon AppFlow is a service for integrating third-party SaaS applications and is not a tool for managing Kubernetes pods.
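A minimal sketch of how pods land on Fargate in EKS: a Fargate profile tells the cluster which pods (selected by namespace and labels) should run on Fargate instead of EC2 nodes. All names, ARNs, and IDs below are hypothetical.

```python
import boto3

eks = boto3.client("eks")

# Pods in the selected namespace are scheduled onto Fargate, not EC2 nodes.
eks.create_fargate_profile(
    fargateProfileName="banking-services",
    clusterName="banking-platform",
    podExecutionRoleArn="arn:aws:iam::111122223333:role/eks-fargate-pod-role",
    subnets=["subnet-0123456789abcdef0"],   # private subnets for the pods
    selectors=[{"namespace": "payments"}],
)
```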

A company has hundreds of VPCs with multiple VPN connections to their data centers spanning 5 AWS Regions. As the number of its workloads grows, the company must be able to scale its networks across multiple accounts and VPCs to keep up. A Solutions Architect is tasked to interconnect all of the company's on-premises networks, VPNs, and VPCs into a single gateway, which includes support for inter-region peering across multiple AWS regions. Which of the following is the BEST solution that the architect should set up to support the required interconnectivity? Set up an AWS Transit Gateway in each region to interconnect all networks within it. Then, route traffic between the transit gateways through a peering connection.

AWS Transit Gateway is a service that allows customers to connect their Amazon VPCs and on-premises networks to a single gateway, streamlining management and reducing operational costs. It provides a central hub that controls how traffic is routed among all connected networks, which operate as spokes.

Key Points:
- Central connectivity hub: connects multiple Amazon VPCs, on-premises data centers, or remote offices, simplifying management and routing traffic effectively among connected networks.
- Scalability: makes it practical to scale networks across multiple accounts and Amazon VPCs.
- Operational efficiency: removes the burden of managing point-to-point connectivity and reduces operational costs by serving as a central connection point.
- Layer 3 routing: operates at layer 3, routing packets to specific next-hop attachments based on their destination IP addresses.
- Elastic scaling: scales elastically with the volume of network traffic.
- Attachments: VPCs, VPN connections, AWS Direct Connect gateways, and transit gateway peering connections can all be attached to a transit gateway.
- Inter-region connections: setting up an AWS Transit Gateway in each region interconnects all networks within it, and traffic is routed between the transit gateways through peering connections.

Summary: AWS Transit Gateway is the optimal way to consolidate numerous VPCs, VPNs, and on-premises networks, providing streamlined management, scalability, and operational efficiency by routing traffic through a central hub, particularly for large and complex network architectures. Alternatives such as AWS Direct Connect Gateway, inter-region VPC peering, and AWS VPN CloudHub are less efficient and do not fulfill the requirement of interconnecting all networks into a single gateway with support for inter-region peering.
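The boto3 sketch below shows the hub-and-spoke wiring for one region: create the regional transit gateway, attach a VPC as a spoke, then peer with the transit gateway of another region. All IDs and the account number are hypothetical placeholders.

```python
import boto3

use1 = boto3.client("ec2", region_name="us-east-1")

# Create the regional hub.
tgw = use1.create_transit_gateway(Description="us-east-1 network hub")
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# Attach a VPC in the same region as a spoke.
use1.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_id,
    VpcId="vpc-0123456789abcdef0",                 # hypothetical VPC
    SubnetIds=["subnet-0123456789abcdef0"],
)

# Peer this hub with the transit gateway serving another region.
use1.create_transit_gateway_peering_attachment(
    TransitGatewayId=tgw_id,
    PeerTransitGatewayId="tgw-0fedcba9876543210",  # hypothetical remote hub
    PeerAccountId="111122223333",
    PeerRegion="ap-northeast-1",
)
```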

A popular social media website uses a CloudFront web distribution to serve static content to millions of users around the globe. The company has recently received a number of complaints that users take a long time to log in to the website, and there are also occasions when users get HTTP 504 errors. You are instructed by your manager to significantly reduce the users' login time to further optimize the system. Which of the following options should you use together to set up a cost-effective solution that can improve your application's performance? (Select TWO.) Set up an origin failover by creating an origin group with two origins. Specify one as the primary origin and the other as the second origin which CloudFront automatically switches to when the primary origin returns specific HTTP status code failure responses. Customize the content that the CloudFront web distribution delivers to your users using Lambda@Edge, which allows your Lambda functions to execute the authentication process in AWS locations closer to the users.

Lambda@Edge, integrated with Amazon CloudFront, lets you customize the content delivered through a CloudFront distribution by executing Lambda functions at AWS locations closer to end users, reducing latency. This is particularly effective for running authentication logic and other content customizations at the edge. In addition, establishing an origin failover in CloudFront, by creating an origin group with a designated primary and secondary origin, offers resilience against HTTP 504 errors: CloudFront automatically switches to the secondary origin when the primary one returns specific HTTP status code failure responses, improving the reliability of content delivery.

Conversely, strategies such as configuring multiple VPCs across regions connected via a transit VPC, extending the max-age in the Cache-Control directive to improve the cache hit ratio, or deploying the application in multiple regions with a Route 53 latency routing policy are not cost-effective or optimal answers to the issues in this scenario, which center on the slow authentication process and the occasional HTTP 504 errors. Therefore, implementing Lambda@Edge for authentication and content customization, coupled with origin failover in CloudFront, is the most feasible and effective approach to address the performance and reliability issues at minimal cost, without overhauling the application's deployment architecture.
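A minimal Lambda@Edge viewer-request sketch follows: CloudFront invokes this function at the edge location nearest the user, so the authentication check never has to travel to a distant origin. The `is_valid` helper is a hypothetical stand-in for real token validation.

```python
def is_valid(token):
    # Hypothetical stand-in for real validation (e.g., verifying a JWT).
    return token == "expected-token"

def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    auth = request["headers"].get("authorization", [{}])[0].get("value")
    if not is_valid(auth):
        # Short-circuit at the edge with a generated 401 response.
        return {
            "status": "401",
            "statusDescription": "Unauthorized",
            "headers": {"www-authenticate": [{"key": "WWW-Authenticate", "value": "Bearer"}]},
        }
    return request  # pass the request through to CloudFront and the origin
```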

A startup needs to use a shared file system for its .NET web application running on an Amazon EC2 Windows instance. The file system must provide a high level of throughput and IOPS that can also be integrated with Microsoft Active Directory. Which is the MOST suitable service that you should use to achieve this requirement? Amazon FSx for Windows File Server

Amazon FSx for Windows File Server provides fully managed, reliable, and scalable file storage that is accessible over the standard SMB protocol. It is built on Windows Server, offering administrative features such as user quotas, end-user file restore, and Microsoft Active Directory (AD) integration. Amazon FSx supports Microsoft's Distributed File System (DFS) Namespaces, which allows scaling out performance across multiple file systems in the same namespace up to tens of Gbps and millions of IOPS, meeting the requirement for high levels of throughput and IOPS and making it suitable for scenarios needing a file system with Active Directory integration. Given the scenario's emphasis on a shared file system, Active Directory integration, and high throughput and IOPS, the correct solution is: Amazon FSx for Windows File Server. Amazon EBS Provisioned IOPS SSD volumes are incorrect because they are block storage volumes, not a shared file system. Amazon Elastic File System (EFS) is incorrect for this scenario because it supports Linux workloads, not Windows. AWS Storage Gateway - File Gateway is also not optimal: despite its ability to integrate with Microsoft Active Directory and serve as a shared file system, Amazon FSx surpasses it in throughput and IOPS.
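As a sketch of the AD-integrated setup, the boto3 call below creates an SMB file system joined to a managed Microsoft AD directory. The directory ID, subnet, and sizing values are hypothetical.

```python
import boto3

fsx = boto3.client("fsx")

# Create an SMB file share joined to a managed Microsoft AD directory.
fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageType="SSD",
    StorageCapacity=300,                      # GiB, hypothetical sizing
    SubnetIds=["subnet-0123456789abcdef0"],
    WindowsConfiguration={
        "ActiveDirectoryId": "d-1234567890",  # hypothetical AWS Managed Microsoft AD
        "DeploymentType": "SINGLE_AZ_2",
        "ThroughputCapacity": 32,             # MB/s
    },
)
```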

A Solutions Architect is migrating several Windows-based applications to AWS that require a scalable file system storage for high-performance computing (HPC). The storage service must have full support for the SMB protocol and Windows NTFS, Active Directory (AD) integration, and Distributed File System (DFS). Which of the following is the MOST suitable storage service that the Architect should use to fulfill this scenario? Amazon FSx for Windows File Server

Amazon FSx offers fully managed third-party file systems with native compatibility and feature sets optimized for various workloads such as Windows-based storage, high-performance computing (HPC), machine learning, and electronic design automation (EDA). It automates administrative tasks such as hardware provisioning, software configuration, patching, and backups, and integrates these file systems with cloud-native AWS services.

Amazon FSx presents two file systems:
- Amazon FSx for Windows File Server: optimized for Windows-based applications and "lift-and-shift" business-critical application workloads, accessible via the SMB protocol.
- Amazon FSx for Lustre: suitable for compute-intensive workloads requiring optimized performance, used for high-performance computing, and linked with Amazon S3 for input and output storage.

For Windows-based applications, the correct choice is: Amazon FSx for Windows File Server.

Incorrect options:
- Amazon S3 Glacier Deep Archive: used for data archiving and long-term backup, not suitable for the described scenario.
- AWS DataSync: mainly used for moving large amounts of data online between on-premises storage and Amazon S3 or Amazon EFS.
- Amazon FSx for Lustre: does not support Windows-based applications and Windows servers.

A data analytics company is setting up an innovative checkout-free grocery store. Their Solutions Architect developed a real-time monitoring application that uses smart sensors to collect the items that customers take from the grocery's refrigerators and shelves and then automatically deduct them from the customers' accounts. The company wants to analyze the items that are frequently being bought and store the results in S3 for durable storage to determine the purchase behavior of its customers. What service must be used to easily capture, transform, and load streaming data into Amazon S3, Amazon Elasticsearch Service, and Splunk? Amazon Kinesis Data Firehose

Amazon Kinesis Data Firehose is the easiest way to load streaming data into data stores and analytics tools. It can capture, transform, and load streaming data into Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, and Splunk, enabling near real-time analytics with the business intelligence tools and dashboards you already use. It is a fully managed service that automatically scales to match your data throughput and requires no ongoing administration. It can also batch, compress, and encrypt data before loading it, minimizing storage used at the destination and increasing security. For real-time processing of streaming big data, Kinesis Data Firehose can be used alongside Amazon Kinesis Data Streams, which provides ordering of records and the ability to read and/or replay records in the same order to multiple Amazon Kinesis applications.

Hence, the correct answer is: Amazon Kinesis Data Firehose. The Amazon Kinesis family includes Kinesis Data Firehose, Kinesis Data Streams, Kinesis Video Streams, and Amazon Kinesis Data Analytics, but for this scenario Kinesis Data Firehose is the apt choice. Amazon Redshift is incorrect because it is a data warehousing service: a destination for analytics, not a tool for capturing and loading streaming data. Amazon SQS is incorrect because it is a hosted message queue for moving data between distributed application components and lacks the capability to capture, transform, and load streaming data into Amazon S3, Amazon Elasticsearch Service, and Splunk.
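A producer-side sketch of the pattern: each sensor event is appended to a Firehose delivery stream, which batches and delivers the records to S3 on its own. The stream name and event shape are hypothetical.

```python
import json
import boto3

firehose = boto3.client("firehose")

# Firehose batches, optionally transforms, and delivers these records to S3.
event = {"customer_id": "c-001", "item": "almond milk", "action": "taken"}
firehose.put_record(
    DeliveryStreamName="grocery-events",  # hypothetical stream backed by S3
    Record={"Data": (json.dumps(event) + "\n").encode("utf-8")},
)
```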

A company has a cryptocurrency exchange portal that is hosted in an Auto Scaling group of EC2 instances behind an Application Load Balancer and is deployed across multiple AWS regions. The users can be found all around the globe, but the majority are from Japan and Sweden. Because of the compliance requirements in these two locations, you want the Japanese users to connect to the servers in the ap-northeast-1 Asia Pacific (Tokyo) region, while the Swedish users should be connected to the servers in the eu-west-1 EU (Ireland) region. Which of the following services would allow you to easily fulfill this requirement? Use Route 53 Geolocation Routing policy.

Amazon Route 53 Geolocation Routing lets you route traffic based on the geographic location of your users, making it an excellent solution for serving localized content or, as in this scenario, for routing users' requests to endpoints in a specific AWS Region to satisfy compliance requirements. Application Load Balancers distribute incoming application traffic across multiple targets, such as EC2 instances, in multiple Availability Zones, but they do not route traffic across different AWS Regions; for global traffic distribution across Regions, a service such as Amazon Route 53 is needed. Amazon CloudFront with geo-restriction enabled is used to restrict access to content based on the geographic location of the viewer, not to route users' requests to different regional endpoints. The Route 53 Weighted Routing policy splits traffic based on weights assigned to multiple resources, but it does not consider the geographic location of the users. Thus, to serve users based on their geographic locations, Amazon Route 53's Geolocation Routing policy is the optimal choice.
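The boto3 sketch below creates the geolocation record for Japanese users; a similar record with CountryCode "SE" would point Swedish users at the Ireland ALB, and a record with CountryCode "*" should catch everyone else. The hosted zone ID, ALB zone ID, and DNS names are hypothetical.

```python
import boto3

r53 = boto3.client("route53")

# Route queries from Japan to the Tokyo ALB via an alias record.
r53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",             # hypothetical hosted zone
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "trade.example.com",
            "Type": "A",
            "SetIdentifier": "japan-users",
            "GeoLocation": {"CountryCode": "JP"},
            "AliasTarget": {
                "HostedZoneId": "Z14GRHDCWA56QT",  # hypothetical ALB zone ID
                "DNSName": "tokyo-alb-123456789.ap-northeast-1.elb.amazonaws.com",
                "EvaluateTargetHealth": True,
            },
        },
    }]},
)
```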

A leading e-commerce company is in need of a storage solution that can be simultaneously accessed by 1000 Linux servers in multiple availability zones. The servers are hosted in EC2 instances that use a hierarchical directory structure via the NFSv4 protocol. The service should be able to handle the rapidly changing data at scale while still maintaining high performance. It should also be highly durable and highly available whenever the servers will pull data from it, with little need for management. As the Solutions Architect, which of the following services is the most cost-effective choice that you should use to meet the above requirement? Amazon EFS

Amazon Web Services (AWS) provides several cloud storage services, such as Amazon EFS, Amazon S3, and Amazon EBS, each suited to different storage workloads. Understanding which one fits is crucial for a scenario with rapidly changing data shared by 1000 Linux servers.

Amazon EFS is designed for use with Amazon EC2, providing a standard file system interface, strong consistency, and file locking, with storage that can be accessed concurrently by up to thousands of Amazon EC2 instances. It is especially apt for workloads that need a POSIX-compatible, hierarchical file system over NFSv4 and that involve rapidly changing data. Amazon EBS provides block-level storage and is optimal for workloads that require the lowest-latency access to data from a single EC2 instance, but an EBS volume cannot be shared by many instances. Amazon S3 is a highly available object storage service, but it does not expose a hierarchical NFS file system interface or file locking, which makes it unsuitable for this workload.

In this scenario, Amazon EFS is the best choice because of its performance, durability, high availability, and concurrently accessible storage for a large fleet of Linux servers.

Incorrect options:
- Amazon S3: an object store that lacks the NFSv4 file system interface and file locking the servers require.
- Amazon EBS: not suitable because a volume cannot be shared by this many instances.
- Amazon FSx for Windows File Server: incompatible because it is designed for Windows workloads, and the scenario specifies Linux servers using NFSv4.
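A setup sketch: one EFS file system reachable from every Availability Zone through per-AZ mount targets, which all 1000 instances then mount over NFSv4. The subnet ID is a hypothetical placeholder.

```python
import boto3

efs = boto3.client("efs")

# One file system, shared across AZs through per-AZ mount targets.
fs = efs.create_file_system(PerformanceMode="generalPurpose", ThroughputMode="elastic")
efs.create_mount_target(
    FileSystemId=fs["FileSystemId"],
    SubnetId="subnet-0123456789abcdef0",  # repeat once per Availability Zone
)
# Each instance then mounts the same directory tree over NFSv4, e.g.:
#   sudo mount -t nfs4 <file-system-id>.efs.<region>.amazonaws.com:/ /mnt/shared
```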

The company that you are working for has a highly available architecture consisting of an elastic load balancer and several EC2 instances configured with auto-scaling in three Availability Zones. You want to monitor your EC2 instances based on a particular metric, which is not readily available in CloudWatch. Which of the following is a custom metric in CloudWatch that you have to set up manually? Memory Utilization of an EC2 instance

CloudWatch provides ready-made Amazon EC2 metrics that you can use for monitoring. CPU Utilization identifies the processing power required to run an application on a selected instance. Network Utilization identifies the volume of incoming and outgoing network traffic for a single instance. The Disk Reads metric measures the volume of data the application reads from the instance's hard disk, which can be used to gauge the application's speed. However, certain metrics, such as memory utilization and disk space utilization, are not readily available in CloudWatch and must be collected as custom metrics. You can prepare a custom metric using the Perl-based CloudWatch Monitoring Scripts, or install the CloudWatch agent to collect these system-level metrics from your Amazon EC2 instances.

Custom metrics you can set up this way include:
- Memory utilization
- Disk swap utilization
- Disk space utilization
- Page file utilization
- Log collection
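To make the mechanism concrete, here is a sketch that publishes one memory-utilization reading as a custom metric; in practice the CloudWatch agent automates this collection on a schedule. The namespace and instance ID are hypothetical.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Publish a single memory-utilization data point as a custom metric.
cloudwatch.put_metric_data(
    Namespace="Custom/EC2",  # hypothetical namespace
    MetricData=[{
        "MetricName": "MemoryUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        "Value": 72.5,
        "Unit": "Percent",
    }],
)
```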

An AI-powered Forex trading application consumes thousands of data sets to train its machine learning model. The application's workload requires a high-performance, parallel hot storage to process the training datasets concurrently. It also needs cost-effective cold storage to archive those datasets that yield low profit. Which of the following Amazon storage services should the developer use? Use Amazon FSx For Lustre and Amazon S3 for hot and cold storage respectively.

Hot storage holds frequently accessed data, warm storage keeps less frequently accessed data, and cold storage maintains rarely accessed data. Pricing-wise, colder storage is cheaper to store in but more expensive to access.

Key Points:
- Amazon FSx For Lustre: a high-performance parallel file system optimized for processing speed, ideal for data-intensive workloads.
- Amazon S3: object storage with multiple storage tiers suited to different access frequencies, including cold storage options such as S3 Glacier and S3 Glacier Deep Archive.
- Requirements: the workload needs high-performance, parallel hot storage to process the training datasets concurrently and cost-effective cold storage for the rarely accessed archives.

Summary: for high-performance, parallel hot storage, Amazon FSx For Lustre is the recommended choice, and for cost-effective, rarely accessed cold storage, Amazon S3 with its Glacier tiers is suitable. Using Amazon FSx For Lustre together with Amazon S3 therefore covers both the hot and cold storage needs. Other combinations, such as Amazon EBS Provisioned IOPS SSD (io1) for cold storage, Amazon Elastic File System for hot storage, or Amazon FSx For Windows File Server for hot storage, either do not meet the required specifications or are not as cost-effective.
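A sketch of the hot/cold pairing: a Lustre file system linked to the cold S3 bucket, so training jobs read and write at parallel-file-system speed while S3 holds the archive. The bucket name, subnet, and sizing are hypothetical.

```python
import boto3

fsx = boto3.client("fsx")

# Hot tier: a Lustre file system linked to the cold S3 bucket.
fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,                    # GiB, hypothetical sizing
    SubnetIds=["subnet-0123456789abcdef0"],
    LustreConfiguration={
        "DeploymentType": "SCRATCH_2",
        "ImportPath": "s3://forex-training-data",           # hypothetical bucket
        "ExportPath": "s3://forex-training-data/results",
    },
)
```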

A company requires corporate IT governance and cost oversight of all of its AWS resources across its divisions around the world. Their corporate divisions want to maintain administrative control of the discrete AWS resources they consume and ensure that those resources are separate from other divisions. Which of the following options will support the autonomy of each corporate division while enabling the corporate IT to maintain governance and cost oversight? (Select TWO.) Use AWS Consolidated Billing by creating AWS Organizations to link the divisions' accounts to a parent corporate account. Enable IAM cross-account access for all corporate IT administrators in each child account.

IAM roles can be used to delegate access to resources across different AWS accounts you own, eliminating the need to create individual IAM users in each account and allowing users to access resources without switching accounts. AWS Organizations, with its consolidated billing feature, provides a unified view of the charges incurred by all your accounts, along with detailed cost reports for each, at no additional charge, which covers the governance and cost-oversight requirement across the corporate divisions.

Hence, the correct answers are:
- Enable IAM cross-account access for all corporate IT administrators in each child account.
- Use AWS Consolidated Billing by creating AWS Organizations to link the divisions' accounts to a parent corporate account.

Incorrect approaches:
- AWS Trusted Advisor and AWS Resource Groups Tag Editor: these tools offer best-practice checks and resource tagging but do not help maintain governance over multiple AWS accounts.
- Creating separate VPCs and launching an AWS Transit Gateway: this does not separate the divisions in terms of billing and administrative control.
- Creating separate Availability Zones and using AWS Global Accelerator: Availability Zones are defined by AWS, not by customers, so this does not support divisional autonomy, and AWS Global Accelerator optimizes network paths to your applications rather than providing communication between Availability Zones.
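A sketch of the cross-account access half of the answer: a corporate IT administrator assumes a role defined in a division's (child) account instead of having a separate IAM user there. The role ARN and account number are hypothetical.

```python
import boto3

sts = boto3.client("sts")

# Assume an admin role that the child account has defined and trusted.
creds = sts.assume_role(
    RoleArn="arn:aws:iam::222233334444:role/CorporateITAdmin",  # hypothetical role
    RoleSessionName="corporate-it-audit",
)["Credentials"]

# Use the temporary credentials to operate inside the child account.
child_account_ec2 = boto3.client(
    "ec2",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(child_account_ec2.describe_instances()["Reservations"])
```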

A startup plans to develop a multiplayer game that uses UDP as the protocol for communication between clients and game servers. The data of the users will be stored in a key-value store. As the Solutions Architect, you need to implement a solution that will distribute the traffic across a number of servers. Which of the following could help you achieve this requirement? Distribute the traffic using Network Load Balancer and store the data in Amazon DynamoDB

Key Points:
- A Network Load Balancer operates at the fourth layer of the OSI model and can handle millions of requests per second, making it suitable for UDP, which is Layer 4 traffic.
- Amazon DynamoDB is appropriate for the user data because it supports both key-value and document data models, matching the requirement for a key-value store.
- An Application Load Balancer is inappropriate for UDP traffic because it only supports application-layer (Layer 7) traffic.
- Amazon Aurora and Amazon RDS are relational databases and are not suitable for storing data in key-value format.

Summary: for a multiplayer game that uses UDP for client-server communication, a Network Load Balancer should distribute the traffic because it handles Layer 4 traffic effectively, and the user data belongs in Amazon DynamoDB because of its key-value and document model support. An Application Load Balancer and relational database services such as Amazon Aurora or Amazon RDS would be inappropriate in this scenario.
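The two halves of the design in one sketch: a UDP listener on the NLB forwarding game traffic to the server fleet, and player state written to a key-value table. The ARNs, port, and table name are hypothetical.

```python
import boto3

elbv2 = boto3.client("elbv2")
dynamodb = boto3.resource("dynamodb")

# A UDP listener on the NLB forwards game traffic to the server fleet.
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:"
                    "loadbalancer/net/game-nlb/0123456789abcdef",      # hypothetical
    Protocol="UDP",
    Port=27015,
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
                          "targetgroup/game-servers/0123456789abcdef",  # hypothetical
    }],
)

# Player state lives in a key-value table.
dynamodb.Table("players").put_item(Item={"player_id": "p-42", "score": 1800})
```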

A solutions architect is in charge of preparing the infrastructure for a serverless application. The application is built from a Docker image pulled from an Amazon Elastic Container Registry (ECR) repository. It is compulsory that the application has access to 5 GB of ephemeral storage. Which action satisfies the requirements? Deploy the application to an Amazon ECS cluster that uses Fargate tasks.

Key Points:
- AWS Fargate is a serverless compute engine for containers, compatible with both Amazon ECS and Amazon EKS, that lets developers focus on building applications without managing servers.
- Fargate allocates the right amount of compute and improves security through application isolation.
- Fargate tasks receive a minimum of 20 GiB of ephemeral storage at no extra charge, which comfortably covers the 5 GB the application requires, and you pay only for the resources your containers use, avoiding over-provisioning.

Summary: where a serverless architecture and modest ephemeral storage are the prerequisites, deploying the application to an Amazon ECS cluster that uses Fargate tasks is the optimal choice: it is serverless, its default 20 GiB of ephemeral storage satisfies the 5 GB requirement, and it offers isolation and cost-effectiveness without server management or resource over-provisioning. The options involving Lambda functions with container image support or Amazon ECS clusters with EC2 worker nodes do not satisfy the requirements: the EC2-backed cluster is not serverless, and the Lambda-based option does not provide the required ephemeral storage by default.
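A task-definition sketch for the scenario: since the 20 GiB Fargate default already covers the 5 GB requirement, the ephemeralStorage block is shown only to illustrate where a larger size would be configured. The family name and ECR image are hypothetical.

```python
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="serverless-app",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="512",
    memory="1024",
    containerDefinitions=[{
        "name": "app",
        "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/app:latest",  # ECR image
        "essential": True,
    }],
    # Optional: only needed above the 20 GiB default (minimum override is 21).
    ephemeralStorage={"sizeInGiB": 21},
)
```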

A leading IT consulting company has an application that processes a large stream of financial data with an Amazon ECS cluster and then stores the results in a DynamoDB table. You have to design a solution that detects new entries in the DynamoDB table and then automatically triggers a Lambda function to run some tests to verify the processed data. What solution can be easily implemented to alert the Lambda function of new entries while requiring minimal configuration change to your architecture? Enable DynamoDB Streams to capture table activity and automatically trigger the Lambda function.

Key Points:
- Amazon DynamoDB integrates with AWS Lambda so that you can create triggers that automatically respond to events in DynamoDB Streams, making applications reactive to data modifications in DynamoDB tables.
- Enabling DynamoDB Streams on a table lets you associate the stream ARN with a Lambda function. When an item in the table is modified, AWS Lambda polls the stream and synchronously invokes the associated function when it detects new stream records.
- The Lambda function can perform specific actions, such as sending notifications or initiating workflows, and can be written to respond only to particular kinds of modifications, enabling precise, event-driven behavior.
- Using DynamoDB Streams with Lambda is efficient and requires minimal configuration change, meeting the requirement with simplicity and specificity.
- The other services are unsuitable: CloudWatch alarms and SNS do not monitor changes to the data in a DynamoDB table, and Systems Manager Automation cannot detect new entries in a DynamoDB table.

Summary: for applications that need to react to data modifications in DynamoDB tables, integrating Amazon DynamoDB with AWS Lambda through DynamoDB Streams is the optimal choice. This combination enables efficient, event-driven processing with minimal configuration and responds automatically and specifically to changes in table data, unlike the other services.
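A two-part sketch: wiring the table's stream to the verification function, and how new entries surface inside the handler as INSERT events. The stream ARN, function name, and `run_tests` routine are hypothetical.

```python
import boto3

lambda_client = boto3.client("lambda")

# Wire the table's stream to the function; Lambda polls the stream and
# invokes the function with batches of new records.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:dynamodb:us-east-1:111122223333:table/"
                   "ProcessedData/stream/2024-01-01T00:00:00.000",  # hypothetical stream ARN
    FunctionName="verify-processed-data",                           # hypothetical function
    StartingPosition="LATEST",
)

def run_tests(new_image):
    print("verifying", new_image)  # hypothetical verification logic

# Inside the Lambda function, new table entries arrive as INSERT events.
def handler(event, context):
    for record in event["Records"]:
        if record["eventName"] == "INSERT":
            run_tests(record["dynamodb"]["NewImage"])
```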

A Solutions Architect needs to deploy a mobile application that collects votes for a singing competition. Millions of users from around the world will submit votes using their mobile phones. These votes must be collected and stored in a highly scalable and highly available database which will be queried for real-time ranking. The database is expected to undergo frequent schema changes throughout the voting period. Which of the following combination of services should the architect use to meet this requirement? Amazon DynamoDB and AWS AppSync

Key Points:
- Amazon DynamoDB is a fully managed, serverless, key-value NoSQL database built for high-performance applications. Because it is schemaless, it handles data whose schema changes frequently, while offering flexibility, durability, scalability, and availability.
- DynamoDB supports the real-time tabulation required here and can be integrated with AWS AppSync to build collaborative apps with real-time data updates, making it straightforward to build apps that interact with various AWS services.
- Amazon DocumentDB and Amazon AppFlow are unsuitable because they do not provide the interface or functionality this scenario needs. Likewise, Amazon Relational Database Service (RDS), Amazon MQ, Amazon Aurora, and Amazon Cognito do not fit: their inherent limitations and functionalities do not align with frequently changing schemas or the real-time update needs of the scenario.

Summary: for an application requiring high performance, scalability, and real-time updates over data with a frequently changing schema, Amazon DynamoDB coupled with AWS AppSync is the optimal choice, allowing seamless real-time data updates and interaction with various AWS services; the alternatives lack the necessary compatibility, flexibility, or functionality.

A company is receiving semi-structured and structured data from different sources every day. The Solutions Architect plans to use big data processing frameworks to analyze vast amounts of data and access it using various business intelligence tools and standard SQL queries. Which of the following provides the MOST high-performing solution that fulfills this requirement? Create an Amazon EMR cluster and store the processed data in Amazon Redshift.

Key Points:
- Amazon EMR is a managed cluster platform for running big data frameworks such as Apache Hadoop and Apache Spark to process and analyze vast amounts of data. With associated open-source projects such as Apache Hive and Apache Pig, it suits analytics and business intelligence workloads, including data transformations (ETL).
- Amazon Redshift is a widely used cloud data warehouse, optimal for analyzing data using standard SQL and existing BI tools. It supports complex analytic queries against large volumes of structured and semi-structured data thanks to query optimization, columnar storage on high-performance storage, and massively parallel query execution.

Summary: to leverage big data processing frameworks and run analytics through various BI tools and standard SQL queries, create an Amazon EMR cluster for processing and store the processed data in Amazon Redshift for analytic and business intelligence applications. Alternatives, such as AWS Glue with data stored in Amazon S3, Amazon Kinesis Data Analytics with Amazon DynamoDB, or an Amazon EC2 instance with Amazon EBS, are not optimal due to limitations in utilizing big data frameworks effectively, lack of support for standard SQL and BI tools, limited computing capability, and administrative overhead.

A company plans to launch an application that tracks the GPS coordinates of delivery trucks in the country. The coordinates are transmitted from each delivery truck every five seconds. You need to design an architecture that will enable real-time processing of these coordinates from multiple consumers. The aggregated data will be analyzed in a separate reporting application. Which AWS service should you use for this scenario? Amazon Kinesis

Key Points:
- Amazon Kinesis is designed to collect, process, and analyze real-time streaming data, enabling users to gain insights and respond promptly to new information.
- It processes streaming data cost-effectively at any scale and offers the flexibility to select the tools that best suit the application's needs.
- Kinesis can ingest a variety of real-time data, such as video, audio, application logs, website clickstreams, and IoT telemetry, which can be leveraged for machine learning, analytics, and more.
- It allows data to be processed and analyzed the instant it arrives, enabling immediate response, in contrast to models where processing only begins after complete data collection.

Summary: Amazon Kinesis provides real-time data processing and analytics, allowing immediate insights and reactions by handling diverse streaming data effectively and flexibly at any scale, in contrast to traditional methods where data processing is deferred until the entire dataset is available.
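A producer-side sketch for the trucks: each one pushes a coordinate every five seconds, and using the truck ID as the partition key keeps each truck's readings ordered within a shard while multiple consumers read the stream. The stream name and payload are hypothetical.

```python
import json
import boto3

kinesis = boto3.client("kinesis")

# One GPS reading; the truck ID as partition key keeps its points in order.
kinesis.put_record(
    StreamName="truck-gps",  # hypothetical stream
    Data=json.dumps({"truck_id": "TRK-7", "lat": 14.5995, "lon": 120.9842}).encode("utf-8"),
    PartitionKey="TRK-7",
)
```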

A company launched a website that accepts high-quality photos and turns them into a downloadable video montage. The website offers a free and a premium account that guarantees faster processing. All requests by both free and premium members go through a single SQS queue and then processed by a group of EC2 instances that generate the videos. The company needs to ensure that the premium users who paid for the service have higher priority than the free members. How should the company re-design its architecture to address this requirement? Create an SQS queue for free members and another one for premium members. Configure your EC2 instances to consume messages from the premium queue first and if it is empty, poll from the free members' SQS queue

Key Points:
- Amazon Simple Queue Service (SQS) is a fully managed message queuing service used to decouple and scale microservices; it sends, stores, and receives messages between software components without losing them.
- SQS eliminates the complexity associated with managing message-oriented middleware, allowing developers to focus on differentiating work.
- To handle the two member types, create two separate SQS queues, one for premium members and one for free members, and configure the EC2 instances to consume messages from the premium queue first, polling the free members' queue only when the premium queue is empty (see the sketch below).
- Setting a priority on individual items within a single SQS queue is not possible.
- Amazon Kinesis is suited to processing streaming data, and Amazon S3 is durable storage rather than a processing service, so neither fits here.

Summary: Amazon SQS is a managed message queuing service that removes the overhead and complexity of middleware management. Where differentiated handling is required, as with premium and free members, two distinct SQS queues with EC2 instances polling the premium queue first is the right design, since per-message priorities within one queue are unfeasible and alternatives like Amazon Kinesis and Amazon S3 serve different purposes.
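The consumer-side sketch referenced above: drain the premium queue first and fall back to the free queue only when the premium queue is empty. The queue URLs are hypothetical.

```python
import boto3

sqs = boto3.client("sqs")
PREMIUM_URL = "https://sqs.us-east-1.amazonaws.com/111122223333/premium-requests"  # hypothetical
FREE_URL = "https://sqs.us-east-1.amazonaws.com/111122223333/free-requests"        # hypothetical

def next_batch():
    """Return messages from the premium queue first, the free queue otherwise."""
    for url in (PREMIUM_URL, FREE_URL):
        resp = sqs.receive_message(QueueUrl=url, MaxNumberOfMessages=10, WaitTimeSeconds=2)
        messages = resp.get("Messages", [])
        if messages:
            return url, messages
    return None, []
```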

A fast food company is using AWS to host their online ordering system which uses an Auto Scaling group of EC2 instances deployed across multiple Availability Zones with an Application Load Balancer in front. To better handle the incoming traffic from various digital devices, you are planning to implement a new routing system where requests which have a URL of <server>/api/android are forwarded to one specific target group named "Android-Target-Group". Conversely, requests which have a URL of <server>/api/ios are forwarded to another separate target group named "iOS-Target-Group". How can you implement this change in AWS? Use path conditions to define rules that forward requests to different target groups based on the URL in the request.

Key Points:
- An Application Load Balancer (ALB) can route requests to different services based on the content of the request, such as the URL path, making it suitable for applications composed of several individual services.
- Path-based routing routes client requests based on the URL path of the HTTP header and fits scenarios where different services are accessed via different path patterns in the URL, as sketched below.
- Path patterns are case-sensitive and can include wildcard characters to match various paths; example patterns include /img/* and /js/*.
- Gateway Load Balancers and Network Load Balancers do not support path-based routing, and host conditions route based on the hostname, not the URL path.

Summary: for applications consisting of multiple services that must be routed by URL path, use path conditions with an Application Load Balancer to define rules that forward requests to different target groups based on the URL in the request. Other load balancer types, or host conditions, would not address the specific needs of path-based routing in such scenarios.
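A sketch of the two rules from the scenario, created with boto3 against a hypothetical listener; the target group ARNs are placeholders for Android-Target-Group and iOS-Target-Group.

```python
import boto3

elbv2 = boto3.client("elbv2")
LISTENER = ("arn:aws:elasticloadbalancing:us-east-1:111122223333:"
            "listener/app/web/0123456789abcdef/0123456789abcdef")  # hypothetical

# Forward /api/android* and /api/ios* to their respective target groups.
for priority, pattern, tg_arn in (
    (10, "/api/android*", "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
                          "targetgroup/Android-Target-Group/0123456789abcdef"),
    (20, "/api/ios*",     "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
                          "targetgroup/iOS-Target-Group/0123456789abcdef"),
):
    elbv2.create_rule(
        ListenerArn=LISTENER,
        Priority=priority,
        Conditions=[{"Field": "path-pattern", "Values": [pattern]}],
        Actions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
    )
```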

A company is running a multi-tier web application farm in a virtual private cloud (VPC) that is not connected to their corporate network. They are connecting to the VPC over the Internet to manage the fleet of Amazon EC2 instances running in both the public and private subnets. The Solutions Architect has added a bastion host with Microsoft Remote Desktop Protocol (RDP) access to the application instance security groups, but the company wants to further limit administrative access to all of the instances in the VPC. Which of the following bastion host deployment options will meet this requirement? Deploy a Windows Bastion host with an Elastic IP address in the public subnet and allow RDP access to bastion only from the corporate IP addresses.

Key Points:
- Bastion host: a special-purpose computer designed to withstand attacks, serving as a secure, accessible jump host into a network.
- Deployment location: in AWS, the bastion host is an EC2 instance deployed in a public subnet with a public or Elastic IP address.
- Access control: access to the bastion host should be tightly controlled, typically allowing RDP (for Windows) or SSH (for Linux) only from known, secure IP addresses such as those of the corporate network.
- Usage: administrators log onto the bastion host and from there manage the hosts located in the private subnets.

Summary: to securely manage the network resources, deploy a Windows bastion host with an Elastic IP address in the public subnet and allow RDP access to the bastion only from the corporate IP addresses. This leverages the security and accessibility of a bastion host while maintaining stringent access controls, unlike the incorrect options, which compromise security or functionality by misplacing the bastion host or misconfiguring access.
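The access-control piece as a sketch: a security group rule that permits RDP to the bastion only from the corporate network's public range. The security group ID is hypothetical, and 203.0.113.0/24 is a documentation address range standing in for the corporate CIDR.

```python
import boto3

ec2 = boto3.client("ec2")

# Permit RDP (TCP 3389) to the bastion only from the corporate range.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # hypothetical bastion security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3389,
        "ToPort": 3389,
        "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "corporate offices"}],
    }],
)
```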

An application hosted in EC2 consumes messages from an SQS queue and is integrated with SNS to send out an email to you once the process is complete. The Operations team received 5 orders but after a few hours, they saw 20 email notifications in their inbox. Which of the following could be the possible culprit for this issue? The web application is not deleting the messages in the SQS queue after it has processed them.

Key Points:
- Messages in an Amazon SQS queue persist even after being read by an EC2 instance; they must be explicitly deleted after processing, or they become visible again once the visibility timeout expires and are processed a second time.
- A distributed messaging system comprises the components of your system (such as the EC2 instances), the queue (distributed redundantly across Amazon SQS servers), and the messages in the queue.
- While a message is being processed, it is not returned to subsequent receive requests for the duration of the visibility timeout, but it remains in the queue.
- Deleting messages after processing is therefore what prevents them from being received and processed again, which is exactly how 5 orders produced 20 email notifications.

Summary: in distributed messaging with Amazon SQS, always delete messages after processing, because messages remain in the queue even after being read by EC2 instances. The other explanations are misconceptions in this context: long polling does not cause messages to be sent twice, short polling does not cause messages to be missed in this way, and inadequate permissions would block access to the SQS queue rather than duplicate notifications.
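A worker-loop sketch showing the fix: the explicit delete after processing is what prevents the duplicate notifications. The queue URL and `process_order` routine are hypothetical.

```python
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111122223333/orders"  # hypothetical

def process_order(body):
    print("processing", body)  # hypothetical processing logic

while True:
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        process_order(msg["Body"])
        # Without this delete, the message reappears after the visibility
        # timeout and is processed (and emailed about) again.
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```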

An application is hosted in an Auto Scaling group of EC2 instances. To improve the monitoring process, you have to configure the current capacity to increase or decrease based on a set of scaling adjustments. This should be done by specifying the scaling metrics and threshold values for the CloudWatch alarms that trigger the scaling process. Which of the following is the most suitable type of scaling policy that you should use? Step scaling

Step scaling uses CloudWatch alarms and a set of scaling adjustments, known as step adjustments, to change the capacity of a scalable target based on the size of the alarm breach. It responds to alarms even while a scaling activity is in progress, allowing a flexible response to changing demand. Step scaling is a form of dynamic scaling, which scales capacity in response to varying demand and is essential for keeping resources balanced: it adds capacity during traffic spikes without keeping excessive idle resources the rest of the time.

Amazon EC2 Auto Scaling supports several scaling policies, each serving different needs: target tracking scaling adjusts capacity to keep a specific metric at a target value, simple scaling applies a single scaling adjustment per alarm, step scaling applies adjustments that depend on the size of the alarm breach, and scheduled scaling changes capacity on a predefined schedule.

Summary: since the requirement is to increase or decrease capacity based on a set of scaling adjustments tied to CloudWatch alarm thresholds, step scaling is the most suitable policy, because its step adjustments vary the response with the size of the alarm breach. Target tracking scaling and simple scaling are useful in other scenarios but do not offer breach-size-dependent adjustments, and scheduled scaling is not a form of dynamic scaling at all, relying on predetermined schedules rather than real-time metrics.
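A step-scaling policy sketch: scale out by one instance for a small breach and by three for a large one. The group and policy names are hypothetical, and the intervals are relative to the CloudWatch alarm threshold.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Add 1 instance when the metric is 0-15 above the alarm threshold,
# and 3 instances when the breach exceeds 15.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",  # hypothetical group
    PolicyName="cpu-step-scale-out",
    PolicyType="StepScaling",
    AdjustmentType="ChangeInCapacity",
    StepAdjustments=[
        {"MetricIntervalLowerBound": 0, "MetricIntervalUpperBound": 15, "ScalingAdjustment": 1},
        {"MetricIntervalLowerBound": 15, "ScalingAdjustment": 3},
    ],
)
# The returned PolicyARN is then set as the action of a CloudWatch alarm.
```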

A Solutions Architect needs to set up the required compute resources for an application whose workloads require high, sequential read and write access to very large data sets on local storage. Which of the following instance types is the most suitable one to use in this scenario? Storage Optimized Instances

Storage optimized instances are designed for workloads that require high, sequential read and write access to very large data sets on local storage. They are optimized to deliver tens of thousands of low-latency, random I/O operations per second (IOPS) to applications. Hence, the correct answer is: Storage Optimized Instances. Memory Optimized Instances is incorrect because these are designed to deliver fast performance for workloads that process large data sets in memory, which is quite different from high read and write access on local storage. Compute Optimized Instances is incorrect because these are ideal for compute-bound applications that benefit from high-performance processors, such as batch processing workloads and media transcoding. General Purpose Instances is incorrect because these provide only a balance of compute, memory, and networking resources for a variety of workloads. Since the workload requires high read and write capacity on local storage, storage optimized instances should be selected instead.

An Auto Scaling group (ASG) of Linux EC2 instances has an Amazon FSx for OpenZFS file system with basic monitoring enabled in CloudWatch. The Solutions Architect noticed that the legacy web application hosted in the ASG takes a long time to load. After checking the instances, the Architect noticed that the ASG is not launching more instances as it should be, even though the servers already have high memory usage. Which of the following options should the Architect implement to solve this issue? Install the CloudWatch unified agent to the EC2 instances. Set up a custom parameter in AWS Systems Manager Parameter Store with the CloudWatch agent configuration to create an aggregated metric on memory usage percentage. Scale the Auto Scaling group based on the aggregated metric.

The Amazon CloudWatch agent collects system metrics and log files from Amazon EC2 instances and on-premises servers, and it supports both Windows Server and Linux. It matters here because, by default, CloudWatch monitors metrics such as CPU utilization, network utilization, and disk performance, but not memory usage. To collect and monitor custom metrics such as memory usage, the CloudWatch agent must be installed on the EC2 instances; the custom metric can then serve as the trigger for the Auto Scaling group's scaling activities. Thus, the correct solution is to: Install the CloudWatch unified agent to the EC2 instances and set up a custom parameter in AWS Systems Manager Parameter Store with the CloudWatch agent configuration to create an aggregated metric on memory usage percentage. Then, scale the Auto Scaling group based on the aggregated metric. The option involving Amazon Comprehend and Amazon SageMaker is incorrect because Comprehend is a natural language processing service that does not track real-time memory usage, and SageMaker is unnecessary because no machine learning is involved. The option involving detailed monitoring and Amazon Forecast is also incorrect: detailed monitoring does not provide memory usage metrics, and Amazon Forecast is suited to business metrics analysis rather than to scaling Auto Scaling groups. Finally, the option using Amazon Rekognition and the AWS Well-Architected Tool is incorrect: Rekognition is an image recognition service that cannot track high memory usage of EC2 instances, and the AWS Well-Architected Tool reviews the state of applications against architectural best practices; it does not trigger scaling activities.
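A configuration sketch under stated assumptions: an agent configuration that collects memory usage and aggregates it per Auto Scaling group is stored in Parameter Store, where each instance's agent can fetch it. The parameter name is hypothetical; the JSON follows the CloudWatch agent's configuration schema.

```python
import json
import boto3

# Agent configuration: collect memory usage, aggregated per Auto Scaling group.
agent_config = {
    "metrics": {
        "append_dimensions": {"AutoScalingGroupName": "${aws:AutoScalingGroupName}"},
        "aggregation_dimensions": [["AutoScalingGroupName"]],
        "metrics_collected": {"mem": {"measurement": ["mem_used_percent"]}},
    }
}

boto3.client("ssm").put_parameter(
    Name="AmazonCloudWatch-linux",  # hypothetical parameter name
    Type="String",
    Value=json.dumps(agent_config),
    Overwrite=True,
)
# On each instance, the agent then loads the stored configuration with:
#   sudo amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -c ssm:AmazonCloudWatch-linux -s
```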

A large financial firm in the country has an AWS environment that contains several Reserved EC2 instances hosting a web application that was decommissioned last week. To save costs, you need to stop incurring charges for the Reserved instances as soon as possible. What cost-effective steps will you take in this circumstance? (Select TWO.) Terminate the Reserved instances as soon as possible to avoid getting billed at the on-demand price when the reservation term expires. Go to the AWS Reserved Instance Marketplace and sell the Reserved instances.

The Reserved Instance Marketplace is a platform that supports the sale of third-party and AWS customers' unused Standard Reserved Instances, which vary in term lengths and pricing options. You might want to sell Reserved Instances after moving instances to a new AWS Region, changing to a new instance type, ending a project before the term expires, when your business needs change, or when you have unneeded capacity. Hence, the correct answers are:

- Go to the AWS Reserved Instance Marketplace and sell the Reserved instances.
- Terminate the Reserved instances as soon as possible to avoid getting billed at the on-demand price when the reservation term expires.

To deal with unused Reserved Instances, use the appropriate channel, the AWS Reserved Instance Marketplace, for selling them, and keep in mind that merely stopping instances does not remove the costs of their associated resources, and that instances left running past the reservation's expiration revert to on-demand pricing. Cancelling the AWS subscription entirely is not a requisite step in this process.

A company plans to launch an Amazon EC2 instance in a private subnet for its internal corporate web portal. For security purposes, the EC2 instance must send data to Amazon DynamoDB and Amazon S3 via private endpoints that don't pass through the public Internet. Which of the following can meet the above requirements? Use VPC endpoints to route all access to S3 and DynamoDB via private endpoints

A VPC endpoint allows private connections between your VPC and supported AWS services without requiring an Internet gateway, NAT device, VPN connection, or AWS Direct Connect; instances in your VPC can communicate with these services without public IP addresses, and the traffic never leaves the Amazon network.

Key Points:
- Private connection: connects your VPC to supported AWS services privately, allowing communication without exposure to the public Internet.
- Security and privacy: instances in your VPC do not require public IP addresses, and the traffic between your VPC and the services does not leave the Amazon network.
- Application: for sending data to Amazon DynamoDB and Amazon S3 over private endpoints, a VPC endpoint is the most suitable option; a gateway endpoint exists for each service, as sketched below.

Summary: VPC endpoints are the optimal way to connect securely and privately to services such as DynamoDB and S3 within the Amazon network, preventing exposure to the public Internet and eliminating the need for public IP addresses on instances in your VPC. AWS Transit Gateway, AWS Direct Connect, and AWS VPN CloudHub do not create such private endpoints for accessing AWS services; they are designed for different connectivity and network-integration scenarios.
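The sketch referenced above: gateway endpoints for S3 and DynamoDB that add routes to the private subnet's route table so traffic to both services stays on the Amazon network. The VPC, route table, and region in the service names are hypothetical.

```python
import boto3

ec2 = boto3.client("ec2")

# One gateway endpoint per service; each adds routes to the route table.
for service in ("com.amazonaws.us-east-1.s3", "com.amazonaws.us-east-1.dynamodb"):
    ec2.create_vpc_endpoint(
        VpcId="vpc-0123456789abcdef0",            # hypothetical VPC
        ServiceName=service,
        VpcEndpointType="Gateway",
        RouteTableIds=["rtb-0123456789abcdef0"],  # private subnet's route table
    )
```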

A company plans to use Route 53 instead of an ELB to load balance the incoming request to the web application. The system is deployed to two EC2 instances to which the traffic needs to be distributed. You want to set a specific percentage of traffic to go to each instance. Which routing policy would you use? Weighted

Weighted routing in Route 53 allows the association of multiple resources with a single domain or subdomain name and controls the distribution of traffic among those resources based on user-defined weights. This is valuable for load balancing and software version testing, as it enables the specification of exact proportions of traffic to be allocated to each resource.

Key Example: if you assign weights of 1 and 255 to two resources, the first resource receives 1/256th of the traffic and the second one gets 255/256ths. Adjusting these weights gradually changes the traffic balance.

In Summary: the Weighted routing policy is optimal when a specific percentage of traffic needs to be routed to different resources. Other routing policies like Latency, Failover, and Geolocation do not offer the ability to set traffic percentages; they serve purposes like routing based on best latency, setting up active-passive failover configurations, and routing based on user locations, respectively.
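A sketch of the two weighted records for the scenario: weights 64 and 192 send 64/256 (25%) of queries to the first instance and 192/256 (75%) to the second. The hosted zone ID, domain, and IP addresses are hypothetical.

```python
import boto3

r53 = boto3.client("route53")

# Two records for the same name; the weights split the traffic 25% / 75%.
changes = [
    {"Action": "UPSERT", "ResourceRecordSet": {
        "Name": "app.example.com", "Type": "A", "TTL": 60,
        "SetIdentifier": ident, "Weight": weight,
        "ResourceRecords": [{"Value": ip}],
    }}
    for ident, weight, ip in (("server-a", 64, "203.0.113.10"),
                              ("server-b", 192, "203.0.113.20"))
]
r53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",  # hypothetical hosted zone
    ChangeBatch={"Changes": changes},
)
```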

A game development company operates several virtual reality (VR) and augmented reality (AR) games which use various RESTful web APIs hosted in their on-premises data center. Due to the unprecedented growth of the company, they decided to migrate their system to AWS Cloud to scale out their resources as well as to minimize costs. Which of the following should you recommend as the most cost-effective and scalable solution to meet the above requirement? Use AWS Lambda and Amazon API Gateway.

With AWS Lambda, you pay only for what you use: you are charged based on the number of requests to your functions and the duration your code runs. Lambda counts each execution started by an event notification or invoke call as a request. Duration is measured from the time your code begins executing until it returns or terminates, and pricing depends on the amount of memory allocated to the function. The Lambda free tier includes 1M free requests per month and 400,000 GB-seconds of compute time per month. A combination of AWS Lambda and Amazon API Gateway is therefore both scalable and cost-effective, charging you only when your functions are actually invoked.

Incorrect options:
- ECS, ECR, and Fargate: primarily used for hosting Docker containers and not as cost-effective here, since you pay for containers while they run rather than per request.
- S3 with CloudFront: unsuitable because S3 has no compute capability; it can only host static websites, not RESTful APIs.
- Spot Fleet of EC2 instances with EFA and an Application Load Balancer: not scalable without Auto Scaling, subject to Spot interruptions, and more expensive to operate than Lambda.

So the most efficient solution is to: Use AWS Lambda and Amazon API Gateway.
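A minimal handler sketch for the Lambda plus API Gateway pattern, assuming the common proxy integration: API Gateway passes the HTTP request in as `event` and turns the returned dict into the HTTP response. The query parameter and message are hypothetical.

```python
import json

def handler(event, context):
    # event follows the API Gateway proxy-integration format.
    player = (event.get("queryStringParameters") or {}).get("player", "anonymous")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {player}"}),
    }
```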

