AWS Solutions Architect Cert Library Questions

A legacy application needs to interact with local storage using iSCSI. A team needs to design a reliable storage solution to provision all new storage on AWS. Which storage solution meets the legacy application requirements? • A. AWS Snowball storage for the legacy application until the application can be re-architected. • B. AWS Storage Gateway in cached mode for the legacy application storage to write data to Amazon S3. • C. AWS Storage Gateway in stored mode for the legacy application storage to write data to Amazon S3. • D. An Amazon S3 volume mounted on the legacy application server locally using the File Gateway service.

*B. AWS Storage Gateway in cached mode for the legacy application storage to write data to Amazon S3.* The key word here is "interact." If it were backup, the answer would change to one of the other options, but if you need to interact with the data, you need to cache it. Page 270: *Gateway-Cached* volumes allow you to expand your local storage capacity into Amazon S3. All data stored on a Gateway-Cached volume is moved to Amazon S3, while recently read data is retained in local storage to provide low-latency access. *USE CASES* *Gateway-Cached* volumes enable you to expand local storage hardware to Amazon S3, allowing you to store much more data without drastically increasing your storage hardware or changing your storage processes. *Gateway-Stored* volumes provide seamless, asynchronous, and secure backup of your on-premises storage without new processes or hardware. *Gateway-VTLs* enable you to keep your current tape backup software and processes while storing your data more cost-effectively and simply on the cloud.

Legacy applications currently send messages through a single Amazon EC2 instance, which then routes the messages to the appropriate destinations. The Amazon EC2 instance is a bottleneck and single point of failure, so the company would like to address these issues. Which services could address this architectural use case? *(Choose two)* • A. Amazon SNS • B. AWS STS • C. Amazon SQS • D. Amazon Route 53 • E. AWS Glue

• A. Amazon SNS • C. Amazon SQS

A Solutions Architect needs to use AWS to implement pilot light disaster recovery for a three-tier web application hosted in an on-premises datacenter. Which solution allows rapid provisioning of a working, fully scaled production environment? • A. Continuously replicate the production database server to Amazon RDS. Use AWS CloudFormation to deploy the application and any additional servers if necessary. • B. Continuously replicate the production database server to Amazon RDS. Create one application load balancer and register on-premises servers. Configure ELB Application Load Balancer to automatically deploy Amazon EC2 instances for application and additional servers if the on-premises application is down. • C. Use a scheduled Lambda function to replicate the production database to AWS. Use Amazon Route 53 health checks to deploy the application automatically to Amazon S3 if production is unhealthy. • D. Use a scheduled Lambda function to replicate the production database to AWS. Register on-premises servers to an Auto Scaling group and deploy the application and additional servers if production is unavailable.

A. Continuously replicate the production database server to Amazon RDS. Use AWS CloudFormation to deploy the application and any additional servers if necessary. I didn't know what pilot light was, so: The term pilot light is often used to describe a DR scenario in which a minimal version of an environment is always running in the cloud. The idea of the pilot light is an analogy that comes from the gas heater: a small flame that's always on can quickly ignite the entire furnace to heat up a house. CloudFormation is used to create a "technology stack" such as a three-tier web application. You can deploy practically any AWS service or resource from templates. Some common use cases are deploying new environments for testing, replicating environments, and launching applications in new regions. Think of it as a templated (easy to configure and change) full-environment deployment tool.
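
As a rough illustration of the failover step, the sketch below uses boto3 to stand up the application tier from a pre-authored CloudFormation template around the always-on RDS replica; the stack name, template URL, and parameter names are hypothetical placeholders.

```python
import boto3

cloudformation = boto3.client("cloudformation", region_name="us-east-1")

# Launch the pre-built web/app tier around the continuously replicated database.
# "pilot-light-app", the template URL, and the DbEndpoint value are placeholders.
stack = cloudformation.create_stack(
    StackName="pilot-light-app",
    TemplateURL="https://s3.amazonaws.com/example-bucket/three-tier-app.yaml",
    Parameters=[
        {"ParameterKey": "DbEndpoint", "ParameterValue": "replica.example.us-east-1.rds.amazonaws.com"},
    ],
    Capabilities=["CAPABILITY_IAM"],
)

# Block until the web servers, app servers, and load balancer are fully created.
cloudformation.get_waiter("stack_create_complete").wait(StackName=stack["StackId"])
```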

Two Auto Scaling applications, Application A and Application B, currently run within a shared set of subnets. A Solutions Architect wants to make sure that Application A can make requests to Application B, but Application B should be denied from making requests to Application A. Which is the SIMPLEST solution to achieve this policy? • A. Using security groups that reference the security groups of the other application • B. Using security groups that reference the application server's IP addresses • C. Using Network Access Control Lists to allow/deny traffic based on application IP addresses • D. Migrating the applications to separate subnets from each other

A. Using security groups that reference the security groups of the other application Key Components 1.) Auto Scaling Applications 2.) Shared subnets 3.) SIMPLEST solution 1.) Auto Scaling means that IP addresses will change (as instances launch or terminate they are assigned new IP addresses at random). This rules out Options B and C, because you can't write rules around IP addresses you don't know in advance. 2.) Option D recommends separating the applications onto different subnets, but this doesn't control who can initiate communication, and the IP-address problem from (1) would still apply. 3.) The simplest solution is the security group reference in Answer A. Security groups are "stateful" and only define "allow" rules (they cannot define deny rules). Add an inbound rule to Application B's security group that references Application A's security group: Application A can then initiate requests to B and receive the responses (stateful), but Application B cannot initiate requests to A because A's security group has no matching inbound rule.
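
A minimal boto3 sketch of that rule, assuming hypothetical security group IDs (sg-bbbbbbbb for Application B, sg-aaaaaaaa for Application A) and an application port of 8080:

```python
import boto3

ec2 = boto3.client("ec2")

# Allow Application A (sg-aaaaaaaa) to initiate requests to Application B (sg-bbbbbbbb).
# Because security groups are stateful, B's responses flow back automatically,
# but B cannot initiate connections to A (A's group has no matching inbound rule).
ec2.authorize_security_group_ingress(
    GroupId="sg-bbbbbbbb",  # Application B's security group (placeholder ID)
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 8080,
            "ToPort": 8080,
            "UserIdGroupPairs": [{"GroupId": "sg-aaaaaaaa"}],  # reference A's group, not IPs
        }
    ],
)
```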

A company runs a legacy application with a single-tier architecture on an Amazon EC2 instance. Disk I/O is low, with occasional small spikes during business hours. The company requires the instance to be stopped from 8 PM to 8 AM daily. Which storage option is MOST appropriate for this workload? • A. Amazon EC2 instance storage • B. Amazon EBS General Purpose SSD (gp2) storage • C. Amazon S3 • D. Amazon EBS Provisioned IOPS SSD (io1) storage

B. Amazon EBS General Purpose SSD (gp2) storage The question is a little unclear, but let's pick it apart based on the requirements: 1.) Single-tier legacy application 2.) Disk I/O is low 3.) Instance stopped daily. The biggest clue is the disk I/O, which immediately points to EBS rather than S3, and stopping the instance is fine for EBS because an EBS volume persists independently of the instance. Also, legacy applications tend to be installed on either EFS or EBS, depending on how many people/services need to use the application. S3 is not used to run applications; it is object storage (think files) with key/value access. *EBS Characteristics* 1.) EBS (and EFS) are used for applications 2.) Data is retained after the EC2 instance is stopped 3.) Optimized for EC2 block-level I/O. *S3 Characteristics* 1.) Not used to run applications - used more for serving web content 2.) No block-level disk I/O, since it is object storage 3.) Unaffected by stopping the instance, but not a disk for a legacy application. Option D is out because Provisioned IOPS is for "high" I/O and the requirement here is "low." Option A is out because instance store is ephemeral: its data is lost when the instance stops, which rules it out for an instance stopped every night.

A Solutions Architect is building an application that stores object data. Compliance requirements state that the data stored is immutable. Which service meets these requirements? • A. Amazon S3 • B. Amazon Glacier • C. Amazon EFS • D. AWS Storage Gateway

B. Amazon Glacier Page 37: Unlike an Amazon S3 object key, you cannot specify a user-friendly archive name [...] and all archives are *automatically encrypted* and archives are *immutable* - after an archive is created, it cannot be modified. I found this one a little confusing at first, but the quote is the key: once an archive is created in Glacier, its contents cannot be modified (only deleted and re-uploaded), which is what makes the stored data immutable. When you think "immutable," think Glacier.

A Solutions Architect notices slower response times from an application. The CloudWatch metrics on the MySQL RDS indicate Read IOPS are high and fluctuate significantly when the database is under load. How should the database environment be re-designed to resolve the IOPS fluctuation? • A. Change the RDS instance type to get more RAM. • B. Change the storage type to Provisioned IOPS. • C. Scale the web server tier horizontally. • D. Split the DB layer into separate RDS instances.

B. Change the storage type to Provisioned IOPS. The question specifically says you need to resolve the IOPS fluctuation. The only option that does this is Provisioned IOPS SSD, which is designed for workloads that are "sensitive to storage performance and consistency in random access I/O throughput" (page 67). Provisioned IOPS SSD is for: 1.) Critical business operations requiring sustained IOPS 2.) Large database workloads
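
A sketch of making that change with boto3; the instance identifier and the IOPS figure are placeholder values.

```python
import boto3

rds = boto3.client("rds")

# Switch the MySQL instance's storage to Provisioned IOPS (io1) for consistent
# random-I/O performance under load. Identifier and IOPS are example values.
rds.modify_db_instance(
    DBInstanceIdentifier="app-mysql-prod",
    StorageType="io1",
    Iops=10000,
    ApplyImmediately=True,  # otherwise the change waits for the maintenance window
)
```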

A Solutions Architect is defining a shared Amazon S3 bucket where corporate applications will save objects. How can the Architect ensure that when an application uploads an object to the Amazon S3 bucket, the object is encrypted? • A. Set a CORS configuration. • B. Set a bucket policy to encrypt all Amazon S3 objects. • C. Enable default encryption on the bucket. • D. Set permission for users.

B. Set a bucket policy to encrypt all Amazon S3 objects. https://aws.amazon.com/blogs/security/how-to-prevent-uploads-of-unencrypted-objects-to-amazon-s3/ By using an S3 bucket policy, you can enforce the encryption requirement when users upload objects, instead of assigning a restrictive IAM policy to all users. S3 objects are not encrypted by "default"; encryption has to be enforced, for example via a bucket policy. Please note that Glacier is encrypted by "default."
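
A sketch of the kind of policy the referenced blog post describes, applied with boto3; the bucket name is a placeholder. The policy denies any PutObject request that does not include the server-side encryption header.

```python
import json
import boto3

s3 = boto3.client("s3")
bucket = "corporate-shared-bucket"  # placeholder bucket name

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnencryptedUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
            # Reject uploads that do not request SSE-S3 (AES256) encryption.
            "Condition": {"StringNotEquals": {"s3:x-amz-server-side-encryption": "AES256"}},
        }
    ],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```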

A Solutions Architect is designing an application that will encrypt all data in an Amazon Redshift cluster. Which action will encrypt the data at rest? • A. Place the Redshift cluster in a private subnet. • B. Use the AWS KMS Default Customer master key. • C. Encrypt the Amazon EBS volumes. • D. Encrypt the data using SSL/TLS.

B. Use the AWS KMS Default Customer master key. The key word is "Redshift," which eliminates option C: Redshift manages its own storage, so you don't encrypt EBS volumes directly. Option D refers to data in transit, and a private subnet (option A) does not provide any encryption.
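
For illustration, encryption at rest is typically set when the cluster is created, as in this boto3 sketch; the identifier, credentials, and KMS key ARN are all placeholder values.

```python
import boto3

redshift = boto3.client("redshift")

# Create a cluster that is encrypted at rest with a KMS customer master key.
# All identifiers, credentials, and the key ARN here are example values.
redshift.create_cluster(
    ClusterIdentifier="analytics-cluster",
    NodeType="dc2.large",
    NumberOfNodes=2,
    MasterUsername="admin",
    MasterUserPassword="ExamplePassw0rd!",
    Encrypted=True,
    KmsKeyId="arn:aws:kms:us-east-1:111122223333:key/example-key-id",
)
```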

An application is running on an Amazon EC2 instance in a private subnet. The application needs to read and write data onto Amazon Kinesis Data Streams, and corporate policy requires that this traffic should not go to the internet. How can these requirements be met? • A. Configure a NAT gateway in a public subnet and route all traffic to Amazon Kinesis through the NAT gateway. • B. Configure a gateway VPC endpoint for Kinesis and route all traffic to Kinesis through the gateway VPC endpoint. • C. Configure an interface VPC endpoint for Kinesis and route all traffic to Kinesis through the interface VPC endpoint. • D. Configure an AWS Direct Connect private virtual interface for Kinesis and route all traffic to Kinesis through the virtual interface.

C. Configure an interface VPC endpoint for Kinesis and route all traffic to Kinesis through the interface VPC endpoint. The key here is the service - Amazon Kinesis. A "*Gateway* VPC Endpoint" only supports connections to 2 services: Amazon S3 and DynamoDB. *For the test, just memorize these 2 services for Gateway VPC Endpoints. If it is not one of these 2, it is likely the Interface VPC Endpoint.* The "*Interface* VPC Endpoint" supports connections to all of the following services: Amazon API Gateway, Amazon AppStream 2.0, AWS App Mesh, Application Auto Scaling, Amazon Athena, AWS Auto Scaling, Amazon Cloud Directory, AWS CloudFormation, AWS CloudTrail, Amazon CloudWatch, Amazon CloudWatch Events, Amazon CloudWatch Logs, AWS CodeBuild, AWS CodeCommit, AWS CodePipeline, AWS Config, AWS DataSync, AWS Device Farm, Amazon EC2 API, Amazon EC2 Auto Scaling, Amazon Elastic File System, Elastic Load Balancing, Amazon Elastic Container Registry, Amazon Elastic Container Service, Amazon EMR, AWS Glue, AWS Key Management Service, Amazon Kinesis Data Firehose, Amazon Kinesis Data Streams, Amazon Managed Blockchain, Amazon Quantum Ledger Database (Amazon QLDB), Amazon Rekognition, Amazon SageMaker and Amazon SageMaker Runtime, Amazon SageMaker Notebook, AWS Secrets Manager, AWS Security Token Service, AWS Server Migration Service, AWS Service Catalog, Amazon SNS, Amazon SQS, AWS Step Functions, AWS Systems Manager, AWS Storage Gateway, AWS Transfer for SFTP, Amazon WorkSpaces, endpoint services hosted by other AWS accounts, and supported AWS Marketplace Partner services.
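
A boto3 sketch of creating the endpoint; the VPC, subnet, and security group IDs are placeholders, and the service name shown is the Kinesis Data Streams interface endpoint in us-east-1.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Interface endpoint for Kinesis Data Streams. With private DNS enabled, the
# standard Kinesis endpoint name resolves to private IPs inside the VPC, so
# traffic from the private subnet never traverses the internet.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-11111111",
    ServiceName="com.amazonaws.us-east-1.kinesis-streams",
    SubnetIds=["subnet-22222222"],
    SecurityGroupIds=["sg-33333333"],
    PrivateDnsEnabled=True,
)
```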

A Solutions Architect needs to design an architecture for a new, mission-critical batch processing billing application. The application is required to run Monday, Wednesday, and Friday from 5 AM to 11 AM. Which is the MOST cost-effective Amazon EC2 pricing model? • A. Amazon EC2 Spot Instances • B. On-Demand Amazon EC2 Instances • C. Scheduled Reserved Instances • D. Dedicated Amazon EC2 Instances

C. Scheduled Reserved Instances

A workload consists of downloading an image from an Amazon S3 bucket, processing the image, and moving it to another Amazon S3 bucket. An Amazon EC2 instance runs a scheduled task every hour to perform the operation. How should a Solutions Architect redesign the process so that it is highly available? • A. Change the Amazon EC2 instance to compute optimized. • B. Launch a second Amazon EC2 instance to monitor the health of the first. • C. Trigger a Lambda function when a new object is uploaded. • D. Initially copy the images to an attached Amazon EBS volume.

C. Trigger a Lambda function when a new object is uploaded. The key is that you are supposed to redesign "the process." The process is the processing of the image, which is not described in much detail other than that it downloads, processes, and moves the image. Page 10: "AWS Lambda runs your back-end code on its own AWS compute fleet of Amazon EC2 instances across multiple AZs in a region, which provides the *high availability*, security, performance, and scalability of the AWS infrastructure."
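
A minimal sketch of how the redesigned step might look as an S3-triggered Lambda handler; the destination bucket name and the process_image helper are hypothetical stand-ins for the actual image processing.

```python
import urllib.parse
import boto3

s3 = boto3.client("s3")
DEST_BUCKET = "processed-images-bucket"  # placeholder destination bucket


def process_image(data: bytes) -> bytes:
    """Hypothetical placeholder for the real image-processing logic."""
    return data


def handler(event, context):
    # Invoked by the S3 "ObjectCreated" notification instead of an hourly cron job.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        obj = s3.get_object(Bucket=bucket, Key=key)
        result = process_image(obj["Body"].read())

        # Move the processed image to the second bucket.
        s3.put_object(Bucket=DEST_BUCKET, Key=key, Body=result)
```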

A Solutions Architect is designing an Amazon VPC. Applications in the VPC must have private connectivity to Amazon DynamoDB in the same AWS Region. The design should route DynamoDB traffic through: • A. VPC peering connection. • B. NAT gateway • C. VPC endpoint • D. AWS Direct Connect

C. VPC endpoint https://aws.amazon.com/blogs/database/how-to-configure-a-private-network-environment-for-amazon-dynamodb-using-vpc-endpoints/ The key words here are "private connectivity" and of course "same AWS Region." It doesn't say there are 2 VPCs. Page 93 states: "An Amazon VPC endpoint enables you to *create a private connection between your Amazon VPC and another AWS service* without requiring access over the Internet or through a NAT instance, VPN connection, or AWS Direct Connect. You can create multiple endpoints for a single service, and you can use different route tables to enforce different access policies from different subnets to the same service."

A Solutions Architect is designing a highly-available website that is served by multiple web servers hosted outside of AWS. If an instance becomes unresponsive, the Architect needs to remove it from the rotation. What is the MOST efficient way to fulfill this requirement? • A. Use Amazon CloudWatch to monitor utilization. • B. Use Amazon API Gateway to monitor availability. • C. Use an Amazon Elastic Load Balancer. • D. Use Amazon Route 53 health checks

There is debate on this question, but the only service that solves all 3 problems is Answer D: *• D. Use Amazon Route 53 health checks* Problems from Question: 1.) Highly-available 2.) Hosted outside of AWS 3.) Unresponsive = Architect needs to remove from rotation *Answer D - Correct Answer* https://aws.amazon.com/route53/faqs/ *Problem 1 Addressed*: Amazon Route 53 provides highly available and scalable Domain Name System (DNS), domain name registration, and health-checking web services. *Problem 2 Addressed*: Route 53 effectively connects user requests to infrastructure running in AWS - such as Amazon EC2 instances, Elastic Load Balancing load balancers, or Amazon S3 buckets - and can also be used to route users to infrastructure outside of AWS. *Problem 3 Addressed*: You can configure DNS failover by associating health checks with resource record sets. Because the web servers have public IP addresses, you can create a standard health check against each endpoint's public IP, and Route 53 stops returning unhealthy endpoints in DNS responses, which removes them from the rotation. *Answer C does not address problem No. 2* Page 113 and Page 117 *Problem 1 Addressed*: ELB helps you achieve high availability for your applications by distributing traffic across healthy instances in multiple AZs. *Problem 2*: NOT ADDRESSED - an ELB cannot route traffic to web servers hosted *outside of AWS*; it only distributes traffic to targets registered within AWS. *Problem 3 Addressed*: Elastic Load Balancing supports health checks to ensure traffic is not routed to unhealthy or failing instances.

An application tier currently hosts two web services on the same set of instances, listening on different ports. Which AWS service should a Solutions Architect use to route traffic to the service based on the incoming request path? • A. AWS Application Load Balancer • B. Amazon CloudFront • C. Amazon Classic Load Balancer • D. Amazon Route 53

• A. AWS Application Load Balancer https://docs.aws.amazon.com/elasticloadbalancing/latest/application/tutorial-load-balancer-routing.html You can create a listener with rules to forward requests based on the URL path. This is known as path-based routing. If you are running microservices, you can route traffic to multiple back-end services using path-based routing. For example, you can route general requests to one target group and requests to render images to another target group.
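
For illustration, a boto3 sketch of such a rule on an existing ALB listener; the listener and target group ARNs and the /service-b/* path are placeholders. Requests matching the path are forwarded to the target group whose targets are registered on the second service's port.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Forward /service-b/* requests to the target group for the service listening
# on the second port; everything else falls through to the default action.
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/example/abc123/def456",
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/service-b/*"]}],
    Actions=[
        {
            "Type": "forward",
            "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/service-b/789",
        }
    ],
)
```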

A company wants to migrate a highly transactional database to AWS. Requirements state that the database has more than 6 TB of data and will grow exponentially. Which solution should a Solutions Architect recommend? • A. Amazon Aurora • B. Amazon Redshift • C. Amazon DynamoDB • D. Amazon RDS MySQL

• A. Amazon Aurora Answer A is best because of size. The only hard requirement is the 6 TB of data that will "grow exponentially." Option B might seem to make sense because of size, since Redshift is a data warehouse option, but note the word "transactional," which means this is an OLTP database (see page 160), not OLAP. Redshift is an OLAP, batch-processing data warehouse, so it doesn't fit the database type. https://aws.amazon.com/rds/aurora/faqs/ Q: What are the minimum and maximum storage limits of an Amazon Aurora database? The minimum storage is 10GB. Based on your database usage, your Amazon Aurora storage will automatically grow, up to 64 TB, in 10GB increments with no impact to database performance. There is no need to provision storage in advance. *-----------------------------------------------------* Option D is weaker because of storage limits and the need to provision storage in advance: https://aws.amazon.com/about-aws/whats-new/2017/08/amazon-rds-for-sql-server-increases-maxiumum-database-storage-size-to-16-tb/ You can now create Amazon RDS for SQL Server database instances with up to 16TB of storage, up from 4TB. The new storage limit is available when using the Provisioned IOPS and General Purpose (SSD) storage types.

A company's website receives 50,000 requests each second, and the company wants to use multiple applications to analyze the navigation patterns of the users on their website so that the experience can be personalized. What can a Solutions Architect use to collect page clicks for the website and process them sequentially for each user? • A. Amazon Kinesis Stream • B. Amazon SQS standard queue • C. Amazon SQS FIFO queue • D. AWS CloudTrail trail

• A. Amazon Kinesis Stream This is a very tricky question, especially because the last thing you read is a requirement for sequential order and you have FIFO as an option. So let's look at ALL of the requirements first. 1.) 50,000 requests each second 2.) Multiple applications 3.) Collect page clicks to personalize user experience 4.) Process clicks sequentially for each user 1.) We know that Kinesis is used for large data streams. 50,000 requests per second is very large and Kinesis would work. 2.) Kinesis can work with multiple applications simultaneously. 3.) You can detect user behavior in a website or application by analyzing the sequence of clicks a user makes, the amount of time the user spends, where they usually begin the navigation, and how it ends. By tracking this user behavior in real time, you can update recommendations, perform advanced A/B testing, push notifications based on session length, and much more. 4.) As the number of users and web and mobile assets you have increases, so does the volume of data. Amazon Kinesis provides you with the capabilities necessary to ingest this data in real time and generate useful statistics immediately so that you can take action. https://aws.amazon.com/kinesis/data-streams/faqs/ *Q: When should I use Amazon Kinesis Data Streams, and when should I use Amazon SQS?* We recommend Amazon Kinesis Data Streams for use cases with requirements that are similar to the following: Routing related records to the same record processor (as in streaming MapReduce). For example, counting and aggregation are simpler when all records for a given key are routed to the same record processor. *Ordering of records.* For example, you want to transfer log data from the application host to the processing/archival host while maintaining the order of log statements. Ability for *multiple applications* to consume the same stream concurrently. For example, you have one application that updates a real-time dashboard and another that archives data to Amazon Redshift. You want both applications to consume data from the same stream concurrently and independently. Ability to *consume records in the same order* a few hours later. For example, you have a billing application and an audit application that runs a few hours behind the billing application. Because Amazon Kinesis Data Streams stores data for up to 7 days, you can run the audit application up to 7 days behind the billing application. *We recommend Amazon SQS for use cases with requirements that are similar to the following:* Messaging semantics (such as message-level ack/fail) and visibility timeout. For example, you have a queue of work items and want to track the successful completion of each item independently. Amazon SQS tracks the ack/fail, so the application does not have to maintain a persistent checkpoint/cursor. Amazon SQS will delete acked messages and redeliver failed messages after a configured visibility timeout. Individual message delay. For example, you have a job queue and need to schedule individual jobs with a delay. With Amazon SQS, you can configure individual messages to have a delay of up to 15 minutes. Dynamically increasing concurrency/throughput at read time. For example, you have a work queue and want to add more readers until the backlog is cleared. With Amazon Kinesis Data Streams, you can scale up to a sufficient number of shards (note, however, that you'll need to provision enough shards ahead of time). Leveraging Amazon SQS's ability to scale transparently.
For example, you buffer requests and the load changes as a result of occasional load spikes or the natural growth of your business. Because each buffered request can be processed independently, Amazon SQS can scale transparently to handle the load without any provisioning instructions from you.
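
A sketch of the producer side, assuming a hypothetical stream name: using the user ID as the partition key routes every click from the same user to the same shard, which is what preserves per-user ordering while the stream still scales across many shards.

```python
import json
import boto3

kinesis = boto3.client("kinesis")


def record_click(user_id: str, page: str) -> None:
    # All clicks from one user share a partition key, so they land on the same
    # shard and are consumed in order; different users spread across shards.
    kinesis.put_record(
        StreamName="clickstream",  # placeholder stream name
        Data=json.dumps({"user": user_id, "page": page}).encode("utf-8"),
        PartitionKey=user_id,
    )
```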

An application relies on messages being sent and received in order. The volume will never exceed more than 300 transactions each second. Which service should be used? • A. Amazon SQS • B. Amazon SNS • C. Amazon ECS • D. AWS STS

• A. Amazon SQS It is important to note that SQS, by default, does not support FIFO (First-In-First-Out = sequential ordering/processing). Page 198 states "the service *does not guarantee* FIFO delivery of messages." However, there is an SQS FIFO queue type: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/FIFO-queues.html. FIFO (First-In-First-Out) queues are designed to enhance messaging between applications when the order of operations and events is critical, or where duplicates can't be tolerated, for example: Ensure that user-entered commands are executed in the right order. Display the correct product price by sending price modifications in the right order. Prevent a student from enrolling in a course before registering for an account. FIFO queues also provide exactly-once processing but have a limited number of transactions per second (TPS): By default, with batching, FIFO queues support up to 3,000 messages per second (TPS), per API action (SendMessageBatch, ReceiveMessage, or DeleteMessageBatch). To request a quota increase, submit a support request. Without batching, FIFO queues support up to 300 messages per second, per API action (SendMessage, ReceiveMessage, or DeleteMessage).
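
For illustration, a boto3 sketch of a FIFO queue (the queue name and message are placeholders); note the required .fifo suffix and the MessageGroupId, which is what enforces ordering within a group.

```python
import boto3

sqs = boto3.client("sqs")

# FIFO queue names must end in ".fifo".
queue = sqs.create_queue(
    QueueName="transactions.fifo",
    Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "true"},
)

# Messages that share a MessageGroupId are delivered strictly in order.
sqs.send_message(
    QueueUrl=queue["QueueUrl"],
    MessageBody="debit account 42 by 10.00",
    MessageGroupId="account-42",
)
```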

A website experiences unpredictable traffic. During peak traffic times, the database is unable to keep up with the write requests. Which AWS service will help decouple the web application from the database? • A. Amazon SQS • B. Amazon EFS • C. Amazon S3 • D. AWS Lambda

• A. Amazon SQS Page 199 SQS = Simple Queue Service Amazon SQS makes it simple and cost effective to *decouple the components* of a cloud application. You can use Amazon SQS to transmit any volume of data, at any level of throughput, without losing messages or requiring other services to be continuously available.

A Solutions Architect is designing a database solution that must support a high rate of random disk reads and writes. It must provide consistent performance, and requires long-term persistence. Which storage solution BEST meets these requirements? • A. An Amazon EBS Provisioned IOPS volume • B. An Amazon EBS General Purpose volume • C. An Amazon EBS Magnetic volume • D. An Amazon EC2 Instance Store

• A. An Amazon EBS Provisioned IOPS volume

A Solution Architect has a two-tier application with a single Amazon EC2 instance web server and Amazon RDS MySQL Multi-AZ DB instances. The Architect is re-architecting the application for high availability by adding instances in a second Availability Zone. Which additional services will improve the availability of the application? *(Choose two)* • A. Auto Scaling group • B. AWS CloudTrail • C. ELB Classic Load Balancer • D. Amazon DynamoDB • E. Amazon ElastiCache

• A. Auto Scaling group • C. ELB Classic Load Balancer

A company requires that the source, destination, and protocol of all IP packets be recorded when traversing a private subnet. What is the MOST secure and reliable method of accomplishing this goal? • A. Create VPC flow logs on the subnet. • B. Enable source destination check on private Amazon EC2 instances. • C. Enable AWS CloudTrail logging and specify an Amazon S3 bucket for storing log files. • D. Create an Amazon CloudWatch log to capture packet information.

• A. Create VPC flow logs on the subnet. https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html A flow log record represents a network flow in your VPC. By default, each record captures a network internet protocol (IP) traffic flow (characterized by a 5-tuple on a per network interface basis) that occurs within an aggregation interval, also referred to as a capture window. By default, the record includes values for the different components of the IP flow, including the *source, destination, and protocol.*
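
A sketch of enabling flow logs on the private subnet with boto3; the subnet ID, log group name, and IAM role ARN are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Capture source, destination, protocol (and more) for all traffic in the subnet,
# delivered to a CloudWatch Logs group via the given IAM role.
ec2.create_flow_logs(
    ResourceIds=["subnet-0abc1234"],        # placeholder private subnet ID
    ResourceType="Subnet",
    TrafficType="ALL",                      # ACCEPT, REJECT, or ALL
    LogGroupName="private-subnet-flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::111122223333:role/flow-logs-role",
)
```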

A web application stores all data in an Amazon RDS Aurora database instance. A Solutions Architect wants to provide access to the data for a detailed report for the Marketing team, but is concerned that the additional load on the database will affect the performance of the web application. How can the report be created without affecting the performance of the application? • A. Create a read replica of the database. • B. Provision a new RDS instance as a secondary master. • C. Configure the database to be in multiple regions. • D. Increase the number of provisioned storage IOPS.

• A. Create a read replica of the database.

A manufacturing company captures data from machines running at customer sites. Currently, thousands of machines send data every 5 minutes, and this is expected to grow to hundreds of thousands of machines in the near future. The data is logged with the intent to be analyzed in the future as needed. What is the SIMPLEST method to store this streaming data at scale? • A. Create an Amazon Kinesis Firehose delivery stream to store the data in Amazon S3. • B. Create an Auto Scaling group of Amazon EC2 servers behind ELBs to write the data into Amazon RDS. • C. Create an Amazon SQS queue, and have the machines write to the queue. • D. Create an Amazon EC2 server farm behind an ELB to store the data in Amazon EBS Cold HDD volumes.

• A. Create an Amazon Kinesis Firehose delivery stream to store the data in Amazon S3.

A legacy application running on premises requires a Solutions Architect to be able to open a firewall to allow access to several Amazon S3 buckets. The Architect has a VPN connection to AWS in place. How should the Architect meet this requirement? • A. Create an IAM role that allows access from the corporate network to Amazon S3. • B. Configure a proxy on Amazon EC2 and use an Amazon S3 VPC endpoint. • C. Use Amazon API Gateway to do IP whitelisting. • D. Configure IP whitelisting on the customer's gateway

• A. Create an IAM role that allows access from the corporate network to Amazon S3. There is debate on this question between Answer A and Answer C (using Amazon API Gateway to do IP whitelisting), and it is not certain which is correct. Answer A appears to be the most prevalent across various forums.

A Solutions Architect is about to deploy an API on multiple EC2 instances in an Auto Scaling group behind an ELB. The support team has the following operational requirements: 1) They get an alert when the requests per second go over 50,000; 2) They get an alert when latency goes over 5 seconds; 3) They can validate how many times a day users call the API requesting highly-sensitive data. Which combination of steps does the Architect need to take to satisfy these operational requirements? *(Select two)* • A. Ensure that CloudTrail is enabled. • B. Create a custom CloudWatch metric to monitor the API for data access. • C. Configure CloudWatch alarms for any metrics the support team requires. • D. Ensure that detailed monitoring for the EC2 instances is enabled. • E. Create an application to export and save CloudWatch metrics for longer term trending analysis.

• A. Ensure that CloudTrail is enabled. • C. Configure CloudWatch alarms for any metrics the support team requires. We need CloudTrail to perform the "auditing," which in this case is to "validate how many times a day users call the API" requesting sensitive data. We also need CloudWatch alarms for the "monitoring" requirements, i.e. the alerts on requests per second and latency. There is no reason to create a custom metric, because the built-in metrics already cover those requirements.
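
A sketch of the latency alert with boto3, assuming a Classic Load Balancer (an ALB would use the AWS/ApplicationELB namespace and the TargetResponseTime metric instead); the load balancer name and SNS topic ARN are placeholders. An equivalent alarm on RequestCount (Sum over 60 seconds with a threshold of 3,000,000, i.e. 50,000 per second) would cover the first requirement.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alert the support team when average ELB latency exceeds 5 seconds.
cloudwatch.put_metric_alarm(
    AlarmName="api-latency-over-5s",
    Namespace="AWS/ELB",
    MetricName="Latency",
    Dimensions=[{"Name": "LoadBalancerName", "Value": "api-elb"}],  # placeholder
    Statistic="Average",
    Period=60,
    EvaluationPeriods=1,
    Threshold=5.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:support-alerts"],  # placeholder topic
)
```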

A Solutions Architect is designing the architecture for a new three-tier web-based e-commerce site that must be available 24/7. Requests are expected to range from 100 to 10,000 each minute. Usage can vary depending on time of day, holidays, and promotions. The design should be able to handle these volumes, with the ability to handle higher volumes if necessary. How should the Architect design the architecture to ensure the web tier is cost-optimized and can handle the expected traffic? *(Select two)* • A. Launch Amazon EC2 instances in an Auto Scaling group behind an ELB. • B. Store all static files in a multi-AZ Amazon Aurora database. • C. Create a CloudFront distribution pointing to static content in Amazon S3. • D. Use Amazon Route 53 to route traffic to the correct region. • E. Use Amazon S3 multi-part uploads to improve upload times.

• A. Launch Amazon EC2 instances in an Auto Scaling group behind an ELB. • C. Create a CloudFront distribution pointing to static content in Amazon S3. The key factors in this question are the "handle higher volumes" requirement, the three-tier site, and the "cost-optimized" requirement. Answer A = Handling higher volumes calls for an Auto Scaling group, and placing the web tier behind an ELB (remember there are different types of ELBs) distributes the incoming traffic across the scaled-out instances. Answer C = The cost-optimized requirement leads to a CloudFront distribution pointing to static content in S3. Serving static content from S3/CloudFront means the web tier is not spending compute on static files, and you pay only for the content that is actually delivered, which reduces cost.

A Solutions Architect needs to build a resilient data warehouse using Amazon Redshift. The Architect needs to rebuild the Redshift cluster in another region. Which approach can the Architect take to address this requirement? • A. Modify the Redshift cluster and configure cross-region snapshots to the other region. • B. Modify the Redshift cluster to take snapshots of the Amazon EBS volumes each day, sharing those snapshots with the other region. • C. Modify the Redshift cluster and configure the backup and specify the Amazon S3 bucket in the other region. • D. Modify the Redshift cluster to use AWS Snowball in export mode with data delivered to the other region.

• A. Modify the Redshift cluster and configure cross-region snapshots to the other region. Redshift stores cluster snapshots in Amazon S3, but in the same Region as the cluster; you cannot simply point the backup at an S3 bucket in another region, which rules out option C. The key phrase is "another region": the automated or manual snapshots must be copied to the other region, which is exactly what the cross-region snapshot feature does. The other options are not even close.

An interactive, dynamic website runs on Amazon EC2 instances in a single subnet behind an ELB Classic Load Balancer. Which design changes will make the site more highly available? • A. Move some Amazon EC2 instances to a subnet in a different AZ. • B. Move the website to Amazon S3. • C. Change the ELB to an Application Load Balancer. • D. Move some Amazon EC2 instances to a subnet in the same Availability Zone.

• A. Move some Amazon EC2 instances to a subnet in a different AZ.

Developers are creating a new online transaction processing (OLTP) application for a small database that is very read-write intensive. A single table in the database is updated continuously throughout the day, and the developers want to ensure that the database performance is consistent. Which Amazon EBS storage option will achieve the MOST consistent performance to help maintain application performance? • A. Provisioned IOPS SSD • B. General Purpose SSD • C. Cold HDD • D. Throughput Optimized HDD

• A. Provisioned IOPS SSD There are only 5 types of EBS volumes: 1.) Magnetic volumes (HDD) = roughly 100 IOPS on average 2.) General Purpose SSD = up to 10,000 IOPS 3.) Provisioned IOPS SSD = up to 20,000 IOPS 4.) Throughput Optimized HDD = low-cost HDD designed for frequently accessed, throughput-intensive workloads such as big data, data warehouses, and log processing, with up to 500 MB/s of throughput (around 500 IOPS maximum) 5.) Cold HDD volumes = designed for less frequently accessed workloads (a few scans per day). The reference to OLTP means the database is focused on high transaction rates, i.e. lots of IOPS, and this is confirmed by "read-write intensive." The MOST consistent performance (notice there is no reference to cost) in terms of IOPS is Provisioned IOPS SSD, which (page 67) provides "predictable, high performance" for "critical business applications that require sustained IOPS performance" and "large database workloads." It is also the most expensive.

A Solutions Architect is designing a solution to store a large quantity of event data in Amazon S3. The Architect anticipates that the workload will consistently exceed 100 requests each second. What should the Architect do in Amazon S3 to optimize performance? *(Outdated)* • A. Randomize a key name prefix. • B. Store the event data in separate buckets. • C. Randomize the key name suffix. • D. Use Amazon S3 Transfer Acceleration.

• A. Randomize a key name prefix.

A company hosts a two-tier application that consists of a publicly accessible web server that communicates with a private database. Only HTTPS port 443 traffic to the web server must be allowed from the Internet. Which of the following options will achieve these requirements? *(Choose two)* • A. Security group rule that allows inbound Internet traffic for port 443. • B. Security group rule that denies all inbound Internet traffic except port 443. • C. Network ACL rule that allows port 443 inbound and all ports outbound for Internet traffic. • D. Security group rule that allows Internet traffic for port 443 in both inbound and outbound. • E. Network ACL rule that allows port 443 for both inbound and outbound for all Internet traffic.

• A. Security group rule that allows inbound Internet traffic for port 443. • C. Network ACL rule that allows port 443 inbound and all ports outbound for Internet traffic. Primer: *Security groups are stateful* (page 97). This means that responses to allowed inbound traffic are allowed to flow outbound regardless of outbound rules, and vice versa. This is an important difference between security groups and ACLs. Security groups *support allow rules only*. A network ACL (Access Control List) is another layer of security that acts as a *stateless firewall* at the subnet level (page 97), meaning return traffic must be explicitly allowed by rules. Network ACLs *support both allow rules and deny rules*. Option E doesn't work because NACLs are stateless: responses to inbound HTTPS requests go back out on ephemeral ports, not port 443, so allowing only port 443 outbound would block the return traffic. Answer C is the better option because it allows responses on all outbound ports. Option B doesn't work because security groups don't support deny rules - you can't do this in AWS. Option D doesn't make sense because security groups are already "stateful," so the response traffic flows without an explicit outbound rule.

A Solutions Architect is designing a mobile application that will capture receipt images to track expenses. The Architect wants to store the images on Amazon S3. However, uploading images through the web server will create too much traffic. What is the MOST efficient method to store images from a mobile application on Amazon S3? • A. Upload directly to S3 using a pre-signed URL. • B. Upload to a second bucket, and have a Lambda event copy the image to the primary bucket. • C. Upload to a separate Auto Scaling group of servers behind an ELB Classic Load Balancer, and have them write to the Amazon S3 bucket. • D. Expand the web server fleet with Spot Instances to provide the resources to handle the images.

• A. Upload directly to S3 using a pre-signed URL. The key phrase is "MOST efficient." While Option C would work, it is not as efficient as Answer A: pre-signed URLs can be generated automatically by the application, and the client then bypasses the web servers entirely by uploading directly to S3.
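
A sketch of generating the pre-signed upload URL server-side so the mobile client can PUT the receipt image straight to S3; the bucket and key are placeholder values.

```python
import boto3

s3 = boto3.client("s3")

# The backend returns this URL to the mobile app; the app then uploads the image
# directly to S3 with an HTTP PUT, bypassing the web servers entirely.
url = s3.generate_presigned_url(
    ClientMethod="put_object",
    Params={"Bucket": "expense-receipts", "Key": "user-123/receipt-001.jpg"},  # placeholders
    ExpiresIn=300,  # URL is valid for 5 minutes
)
print(url)
```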

A Solutions Architect is designing a VPC. Instances in a private subnet must be able to establish IPv6 traffic to the Internet. The design must scale automatically and not incur any additional cost. This can be accomplished with: • A. an egress-only internet gateway • B. a NAT gateway • C. a custom NAT instance • D. a VPC endpoint

• A. an egress-only internet gateway This is kind of a tricky question. The key to this entire question is "IPv6". NAT gateways are *NOT supported* for IPv6 traffic - use an egress-only internet gateway instead. NAT is for IPv4 - remember that IPv4 has a limited number of addresses, which is why IPv6 exists, but many networks still run IPv4 because the addressing is familiar, so NAT remains common. Eliminating NAT eliminates Options B and C; an egress-only internet gateway also scales automatically and carries no additional charge (unlike a NAT gateway), which satisfies the cost requirement. Option D - VPC endpoints do not leave the Amazon network, so they would not allow traffic out to the internet. https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html https://docs.aws.amazon.com/vpc/latest/userguide/egress-only-internet-gateway.html
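
A sketch of the IPv6 setup with boto3, using placeholder VPC and route table IDs: create the egress-only internet gateway, then route all IPv6 traffic (::/0) from the private subnet's route table through it.

```python
import boto3

ec2 = boto3.client("ec2")

# Create the egress-only internet gateway (outbound IPv6 only, no inbound).
eigw = ec2.create_egress_only_internet_gateway(VpcId="vpc-11111111")
eigw_id = eigw["EgressOnlyInternetGateway"]["EgressOnlyInternetGatewayId"]

# Route all outbound IPv6 traffic from the private subnet through it.
ec2.create_route(
    RouteTableId="rtb-22222222",             # placeholder private route table ID
    DestinationIpv6CidrBlock="::/0",
    EgressOnlyInternetGatewayId=eigw_id,
)
```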

A Solutions Architect is designing a photo application on AWS. Every time a user uploads a photo to Amazon S3, the Architect must insert a new item to a DynamoDB table. Which AWS-managed service is the BEST fit to insert the item? • A. Lambda@Edge • B. AWS Lambda • C. Amazon API Gateway • D. Amazon EC2 instances

• B. AWS Lambda

A Solutions Architect has a multi-layer application running in Amazon VPC. The application has an ELB Classic Load Balancer as the front end in a public subnet, and an Amazon EC2-based reverse proxy that performs content-based routing to two backend Amazon EC2 instances hosted in a private subnet. The Architect sees tremendous traffic growth and is concerned that the reverse proxy and current backend set up will be insufficient. Which actions should the Architect take to achieve a cost-effective solution that ensures the application automatically scales to meet traffic demand? (Select two.) • A. Replace the Amazon EC2 reverse proxy with an ELB internal Classic Load Balancer. • B. Add Auto Scaling to the Amazon EC2 backend fleet. • C. Add Auto Scaling to the Amazon EC2 reverse proxy layer. • D. Use t2 burstable instance types for the backend fleet. • E. Replace both the frontend and reverse proxy layers with an ELB Application Load Balancer.

• B. Add Auto Scaling to the Amazon EC2 backend fleet. • E. Replace both the frontend and reverse proxy layers with an ELB Application Load Balancer.

A bank is writing new software that is heavily dependent upon the database transactions for write consistency. The application will also occasionally generate reports on data in the database, and will do joins across multiple tables. The database must automatically scale as the amount of data grows. Which AWS service should be used to run the database? • A. Amazon S3 • B. Amazon Aurora • C. Amazon DynamoDB • D. Amazon Redshift

• B. Amazon Aurora Key factor is "heavily dependent upon the database transactions." This means OLTP = transaction based DB. Thus, Aurora is the answer instead of DynamoDB. Redshift is not a good solution because it is used for warehousing.

A Solutions Architect is building a new feature using a Lambda to create metadata when a user uploads a picture to Amazon S3. All metadata must be indexed. Which AWS service should the Architect use to store this metadata? • A. Amazon S3 • B. Amazon DynamoDB • C. Amazon Kinesis • D. Amazon EFC

• B. Amazon DynamoDB

A company is launching an application that it expects to be very popular. The company needs a database that can scale with the rest of the application. The schema will change frequently. The application cannot afford any downtime for database changes. Which AWS service allows the company to achieve these objectives? • A. Amazon Redshift • B. Amazon DynamoDB • C. Amazon RDS MySQL • D. Amazon Aurora

• B. Amazon DynamoDB The key words in the question are "scale," "schema will change frequently," and no "downtime for changes." Page 177: Amazon DynamoDB is a fully managed NoSQL database service that provides fast and low-latency performance that *scales* with ease. [...] In addition to providing high-performance levels, Amazon DynamoDB also provides *automatic high-availability and durability* protections by replicating data across multiple Availability Zones within an AWS Region.

A Solutions Architect is developing a solution for sharing files in an organization. The solution must allow multiple users to access the storage service at once from different virtual machines and scale automatically. It must also support file-level locking. Which storage service meets the requirements of this use case? • A. Amazon S3 • B. Amazon EFS • C. Amazon EBS • D. Cached Volumes

• B. Amazon EFS 1.) Multiple Users at once 2.) Scale automatically 3.) Support file-level locking https://aws.amazon.com/efs/faq/ 1.) Q. How many Amazon EC2 instances can connect to a file system? Amazon EFS supports one to thousands of Amazon EC2 instances connecting to a file system concurrently. 2.) Amazon EFS file systems are distributed across an unconstrained number of storage servers, enabling file systems to grow elastically to petabyte-scale and allowing massively parallel access from Amazon EC2 instances to your data. Amazon EFS's distributed design avoids the bottlenecks and constraints inherent to traditional file servers. 3.) Q. What type of locking does Amazon EFS support? Locking in Amazon EFS follows the NFSv4.1 protocol for advisory locking, and enables your applications to use both whole file and byte range locks.

A user is testing a new service that receives location updates from 3,600 rental cars every hour. Which service will collect data and automatically scale to accommodate production workload? • A. Amazon EC2 • B. Amazon Kinesis Firehose • C. Amazon EBS • D. Amazon API Gateway

• B. Amazon Kinesis Firehose

A customer has a production application that frequently overwrites and deletes data. The application requires the most up-to-date version of the data every time it is requested. Which storage should a Solutions Architect recommend to best accommodate this use case? • A. Amazon S3 • B. Amazon RDS • C. Amazon RedShift • D. AWS Storage Gateway

• B. Amazon RDS

A Solutions Architect is architecting a workload that requires a performant object-based storage system that must be shared with multiple Amazon EC2 instances. Which AWS service meets this requirement? • A. Amazon EFS • B. Amazon S3 • C. Amazon EBS • D. Amazon ElastiCache

• B. Amazon S3 Page 23: "Amazon S3 is easy-to-use *object storage* with a simple web interface that you can use to store and retrieve any amount of data from anywhere on the web." Page 25: If you need traditional block or file storage in addition to Amazon S3 storage, AWS provides options. The Amazon EBS service provides *block level storage* for Amazon Elastic Compute Cloud (EC2) instances. Amazon Elastic File System (Amazon EFS) provides *network-attached shared file storage* (NAS storage) using the NFS v4 protocol. https://aws.amazon.com/efs/faq/ *Q. When should I use Amazon EFS vs. Amazon S3 vs. Amazon Elastic Block Store (EBS)?* Amazon Web Services (AWS) offers cloud storage services to support a wide range of storage workloads. *Amazon EFS* is a file storage service for use with Amazon EC2. Amazon EFS provides a file system interface, file system access semantics (such as strong consistency and file locking), and concurrently-accessible storage for up to thousands of Amazon EC2 instances. *Amazon EBS* is a block level storage service for use with Amazon EC2. Amazon EBS can deliver performance for workloads that require the lowest-latency access to data from a single EC2 instance. *Amazon S3* is an object storage service. Amazon S3 makes data available through an Internet API that can be accessed anywhere.

A company is using an Amazon S3 bucket located in us-west-2 to serve videos to their customers. Their customers are located all around the world and the videos are requested a lot during peak hours. Customers in Europe complain about experiencing slow download speeds, and during peak hours, customers in all locations report experiencing HTTP 500 errors. What can a Solutions Architect do to address these issues? • A. Place an elastic load balancer in front of the Amazon S3 bucket to distribute the load during peak hours. • B. Cache the web content with Amazon CloudFront and use all Edge locations for content delivery. • C. Replicate the bucket in eu-west-1 and use an Amazon Route 53 failover routing policy to determine which bucket it should serve the request to. • D. Use an Amazon Route 53 weighted routing policy for the CloudFront domain name to distribute the GET request between CloudFront and the Amazon S3 bucket directly.

• B. Cache the web content with Amazon CloudFront and use all Edge locations for content delivery.

A Solutions Architect is building an application on AWS that will require 20,000 IOPS on a particular volume to support a media event. Once the event ends, the IOPS need is no longer required. The marketing team asks the Architect to build the platform to optimize storage without incurring downtime. How should the Architect design the platform to meet these requirements? • A. Change the Amazon EC2 instance types. • B. Change the EBS volume type to Provisioned IOPS. • C. Stop the Amazon EC2 instance and provision IOPS for the EBS volume. • D. Enable an API Gateway to change the endpoints for the Amazon EC2 instances.

• B. Change the EBS volume type to Provisioned IOPS.
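
Because EBS Elastic Volumes lets you change the volume type in place, the switch can be scripted without stopping the instance; a sketch with placeholder volume ID and IOPS value follows.

```python
import boto3

ec2 = boto3.client("ec2")

# Raise the volume to io1 with 20,000 provisioned IOPS for the event, with no
# downtime; call modify_volume again afterwards to drop back to gp2.
ec2.modify_volume(
    VolumeId="vol-0abc1234567890def",  # placeholder volume ID
    VolumeType="io1",
    Iops=20000,
)
```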

A Solution Architect is designing a three-tier web application. The Architect wants to restrict access to the database tier to accept traffic from the application servers only. However, these application servers are in an Auto Scaling group and may vary in quantity. How should the Architect configure the database servers to meet the requirements? • A. Configure the database security group to allow database traffic from the application server IP addresses. • B. Configure the database security group to allow database traffic from the application server security group. • C. Configure the database subnet network ACL to deny all inbound non-database traffic from the application-tier subnet. • D. Configure the database subnet network ACL to allow inbound database traffic from the application-tier subnet

• B. Configure the database security group to allow database traffic from the application server security group. The tricky part of this question is actually not that tricky. The word "however" and the varying quantity are meant to trip you up, but they don't really change best practice, because flexibility should always be assumed. The varying quantity negates option A: each application server can get a new IP address as the group scales up or down (new instance = new IP). Configuring subnet network ACLs as in Options C and D is far more work and isn't what ACLs or subnets are for in this scenario; ACLs are coarse-grained, subnet-level, IP/port-based controls. The answer is B because security groups can grant permission based on another security group rather than on IP addresses, which automatically covers application servers as they come and go. Security groups are by far the best option and provide the most flexibility here.

A Solutions Architect needs to allow developers to have SSH connectivity to web servers. The requirements are as follows: ✑ Limit access to users originating from the corporate network. ✑ Web servers cannot have SSH access directly from the Internet. ✑ Web servers reside in a private subnet. Which combination of steps must the Architect complete to meet these requirements? *(Choose two)* • A. Create a bastion host that authenticates users against the corporate directory. • B. Create a bastion host with security group rules that only allow traffic from the corporate network. • C. Attach an IAM role to the bastion host with relevant permissions. • D. Configure the web servers' security group to allow SSH traffic from a bastion host. • E. Deny all SSH traffic from the corporate network in the inbound network ACL.

• B. Create a bastion host with security group rules that only allow traffic from the corporate network. • D. Configure the web servers' security group to allow SSH traffic from a bastion host.

An application stack includes an Elastic Load Balancer in a public subnet, a fleet of Amazon EC2 instances in an Auto Scaling group, and an Amazon RDS MySQL cluster. Users connect to the application from the Internet. The application servers and database must be secure. How should a Solutions Architect perform this task? • A. Create a private subnet for the Amazon EC2 instances and a public subnet for the Amazon RDS cluster. • B. Create a private subnet for the Amazon EC2 instances and a private subnet for the Amazon RDS cluster. • C. Create a public subnet for the Amazon EC2 instances and a private subnet for the Amazon RDS cluster. • D. Create a public subnet for the Amazon EC2 instances and a public subnet for the Amazon RDS cluster.

• B. Create a private subnet for the Amazon EC2 instances and a private subnet for the Amazon RDS cluster.

A web application experiences high compute costs due to serving a high amount of static web content. How should the web server architecture be designed to be the MOST cost-efficient? • A. Create an Auto Scaling group to scale out based on average CPU usage. • B. Create an Amazon CloudFront distribution to pull static content from an Amazon S3 bucket. • C. Leverage Reserved Instances to add additional capacity at a significantly lower price. • D. Create a multi-region deployment using an Amazon Route 53 geolocation routing policy.

• B. Create an Amazon CloudFront distribution to pull static content from an Amazon S3 bucket.

A call center application consists of a three-tier application using Auto Scaling groups to automatically scale resources as needed. Users report that every morning at 9:00 AM the system becomes very slow for about 15 minutes. A Solution Architect determines that a large percentage of the call center staff starts work at 9:00 AM, so Auto Scaling does not have enough time to scale out to meet demand. How can the Architect fix the problem? • A. Change the Auto Scaling group's scale out event to scale based on network utilization. • B. Create an Auto Scaling scheduled action to scale out the necessary resources at 8:30 AM every morning. • C. Use Reserved Instances to ensure the system has reserved the right amount of capacity for the scale-up events. • D. Permanently keep a steady state of instances that is needed at 9:00 AM to guarantee available resources, but leverage Spot Instances.

• B. Create an Auto Scaling scheduled action to scale out the necessary resources at 8:30 AM every morning. The key is that the problem only happens at 9:00 AM and that Auto Scaling does not have enough time to scale out to meet the demand. Answer B = By creating a scheduled action to scale out before the 9 AM rush, the resources are already provisioned and ready when the rush hits. Option A = This doesn't make sense. The problem is not what triggers the scale-out, but that scaling doesn't happen early enough; changing the scaling metric doesn't solve the timing issue. Option C = Viable but not the best option. Reserved Instances are an always-on billing commitment and defeat the purpose of the Auto Scaling group's flexibility. Option D = Also viable but not the best option. Keeping a permanently large steady state is expensive when the load problem only occurs around 9:00 AM and (implied) lasts about 15 minutes, so why pay for capacity that sits idle the rest of the day? Spot Instances don't make sense either, because they are not guaranteed: the Spot capacity might not be available at 9 AM and the problem would remain.
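
A sketch of the scheduled action in Answer B with boto3; the group name and capacity numbers are placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Scale the call-center fleet out at 8:30 AM on weekdays, ahead of the 9:00 AM rush.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="callcenter-web-asg",   # placeholder group name
    ScheduledActionName="pre-9am-scale-out",
    Recurrence="30 8 * * 1-5",                   # cron expression: 08:30, Mon-Fri
    MinSize=4,
    DesiredCapacity=10,
)
```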

A Solutions Architect is designing a Lambda function that calls an API to list all running Amazon RDS instances. How should the request be authorized? • A. Create an IAM access and secret key, and store it in the Lambda function. • B. Create an IAM role to the Lambda function with permissions to list all Amazon RDS instances. • C. Create an IAM role to Amazon RDS with permissions to list all Amazon RDS instances. • D. Create an IAM access and secret key, and store it in an encrypted RDS database.

• B. Create an IAM role to the Lambda function with permissions to list all Amazon RDS instances.

A Lambda function must execute a query against an Amazon RDS database in a private subnet. Which steps are required to allow the Lambda function to access the Amazon RDS database? *(Select two)* • A. Create a VPC Endpoint for Amazon RDS. • B. Create the Lambda function within the Amazon RDS VPC. • C. Change the ingress rules of Lambda security group, allowing the Amazon RDS security group. • D. Change the ingress rules of the Amazon RDS security group, allowing the Lambda security group. • E. Add an Internet Gateway (IGW) to the VPC, route the private subnet to the IGW.

• B. Create the Lambda function within the Amazon RDS VPC. • D. Change the ingress rules of the Amazon RDS security group, allowing the Lambda security group. The key points are the "private subnet" and allowing access to "Amazon RDS." Answer B = By configuring the Lambda function to run inside the VPC, it gets network interfaces in the VPC and can reach the private subnet. Answer D = Option C is backwards: changing the Lambda security group's ingress would allow RDS to reach Lambda, not the other way around. The RDS security group must allow inbound traffic from the Lambda security group on the database port (3306 for MySQL, the likely default here). Remember that security groups control ingress (traffic coming in) and egress (traffic going out).
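
A hedged sketch of answer D using boto3 (both security group IDs are hypothetical placeholders): it opens the MySQL port on the RDS security group only to members of the Lambda function's security group.

```python
import boto3

ec2 = boto3.client("ec2")

# Allow MySQL (3306) into the RDS security group only from the Lambda function's security group.
ec2.authorize_security_group_ingress(
    GroupId="sg-0aaa1111bbbb2222c",               # hypothetical RDS security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": "sg-0ddd3333eeee4444f"}],  # hypothetical Lambda security group
    }],
)
```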

A Solutions Architect is designing a new application that needs to access data in a different AWS account located within the same region. The data must not be accessed over the Internet. Which solution will meet these requirements with the LOWEST cost? • A. Add rules to the security groups in each account. • B. Establish a VPC Peering connection between accounts. • C. Configure Direct Connect in each account. • D. Add a NAT Gateway to the data account.

• B. Establish a VPC Peering connection between accounts.

A Solutions Architect plans to migrate NAT instances to NAT gateway. The Architect has NAT instances with scripts to manage high availability. What is the MOST efficient method to achieve similar high availability with NAT gateway? • A. Remove source/destination check on NAT instances. • B. Launch a NAT gateway in each Availability Zone. • C. Use a mix of NAT instances and NAT gateway. • D. Add an ELB Application Load Balancer in front of NAT gateway.

• B. Launch a NAT gateway in each Availability Zone.

A Solutions Architect is designing a web application. The web and application tiers need to access the Internet, but they cannot be accessed from the Internet. Which of the following steps is required? • A. Attach an Elastic IP address to each Amazon EC2 instance and add a route from the private subnet to the public subnet. • B. Launch a NAT gateway in the public subnet and add a route to it from the private subnet. • C. Launch Amazon EC2 instances in the public subnet and change the security group to allow outbound traffic on port 80. • D. Launch a NAT gateway in the private subnet and deploy a NAT instance in the private subnet.

• B. Launch a NAT gateway in the public subnet and add a route to it from the private subnet.

An Internet-facing multi-tier web application must be highly available. An ELB Classic Load Balancer is deployed in front of the web tier. Amazon EC2 instances at the web application tier are deployed evenly across two Availability Zones. The database is deployed using RDS Multi-AZ. A NAT instance is launched for Amazon EC2 instances and database resources to access the Internet. These instances are not assigned with public IP addresses. Which component poses a potential single point of failure in this architecture? • A. Amazon EC2 • B. NAT instance • C. ELB Classic Load Balancer • D. Amazon RDS

• B. NAT instance Answer B = NAT = Network Address Translation, which translates a private IP address (for example from a 10.0.0.0/8 or 192.168.0.0/16 subnet) into a public IP address. The key to the question is that it says "these instances are not assigned with public IP addresses." That means the NAT instance is REQUIRED for them to reach the Internet, and because it is a single instance, it is a single point of failure: if it goes down, the private instances lose all outbound Internet access. Option A = The EC2 instances are deployed across two Availability Zones, so they are not a single point of failure; only a single-AZ deployment would be. Option C = Load balancers are highly available by design. Page 113 states the ELB "is highly available within a region itself as a service," meaning it already spans multiple AZs, which removes the failure of a single AZ as a threat. Option D = The database is Multi-AZ, so a standby in another AZ removes the single point of failure.

A company hosts a popular web application. The web application connects to a database running in a private VPC subnet. The web servers must be accessible only to customers on an SSL connection. The RDS MySQL database server must be accessible only from the web servers. How should the Architect design a solution to meet the requirements without impacting running applications? • A. Create a network ACL on the web server's subnet, and allow HTTPS inbound and MySQL outbound. Place both database and web servers on the same subnet. • B. Open an HTTPS port on the security group for web servers and set the source to 0.0.0.0/0. Open the MySQL port on the database security group and attach it to the MySQL instance. Set the source to Web Server Security Group. • C. Create a network ACL on the web server's subnet, and allow HTTPS inbound, and specify the source as 0.0.0.0/0. Create a network ACL on a database subnet, allow MySQL port inbound for web servers, and deny all outbound traffic. • D. Open the MySQL port on the security group for web servers and set the source to 0.0.0.0/0. Open the HTTPS port on the database security group and attach it to the MySQL instance. Set the source to Web Server Security Group.

• B. Open an HTTPS port on the security group for web servers and set the source to 0.0.0.0/0. Open the MySQL port on the database security group and attach it to the MySQL instance. Set the source to Web Server Security Group. Opening to 0.0.0.0/0 means open to "the world" (anybody with an Internet connection). HTTPS (port 443) carries SSL/TLS, which satisfies the requirement that customers connect to the web servers only over SSL, so that is the port opened to the Internet. For the database, the MySQL port is opened only with the web server security group as the source, so the database accepts traffic from the web servers and nothing else.

A customer owns a simple API for their website that receives about 1,000 requests each day and has an average response time of 50 ms. It is currently hosted on one c4.large instance. Which changes to the architecture will provide high availability at the LOWEST cost? • A. Create an Auto Scaling group with a minimum of one instance and a maximum of two instances, then use an Application Load Balancer to balance the traffic. • B. Recreate the API using Amazon API Gateway and use AWS Lambda as the service backend. • C. Create an Auto Scaling group with a maximum of two instances, then use an Application Load Balancer to balance the traffic. • D. Recreate the API using Amazon API Gateway and integrate the new API with the existing backend service.

• B. Recreate the API using Amazon API Gateway and use AWS Lambda as the service backend. The requirements are two-fold: 1.) high availability and 2.) low cost. 1.) Page 16 = Amazon API Gateway handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, authorization and access control, monitoring, and API version management. Page 10 = AWS Lambda runs your back-end code on its own AWS compute fleet of Amazon EC2 instances across multiple AZs in a region, which provides the *high availability*, security, performance and scalability of the AWS infrastructure. 2.) Page 10 = AWS Lambda provides a "fine-grained pricing structure." For roughly 1,000 short requests per day, Lambda charges only for the compute time the code actually uses, billed in sub-second increments, which is a fraction of the cost of keeping a dedicated c4.large instance running 24/7.
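
For a sense of how small the Lambda backend can be, here is a minimal handler for an API Gateway proxy integration (illustrative only; the real API would call its existing business logic):

```python
import json

def lambda_handler(event, context):
    """Minimal Lambda backend for an API Gateway proxy integration."""
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": "hello from Lambda"}),
    }
```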

A company is using AWS Key Management Service (AWS KMS) to secure their Amazon RDS databases. An auditor has recommended that the company log all use of their AWS KMS keys. What is the SIMPLEST solution? • A. Associate AWS KMS metrics with Amazon CloudWatch. • B. Use AWS CloudTrail to log AWS KMS key usage. • C. Deploy a monitoring agent on the RDS instances. • D. Poll AWS KMS periodically with a scheduled job.

• B. Use AWS CloudTrail to log AWS KMS key usage. The RDS reference is superfluous. The key word here is "auditor": CloudTrail is the tool for auditing, while CloudWatch is for monitoring. The requirement is not real-time monitoring but logging every use of the KMS keys, which is exactly what CloudTrail records.

As part of securing an API layer built on Amazon API gateway, a Solutions Architect has to authorize users who are currently authenticated by an existing identity provider. The users must be denied access for a period of one hour after three unsuccessful attempts. How can the Solutions Architect meet these requirements? • A. Use AWS IAM authorization and add least-privileged permissions to each respective IAM role. • B. Use an API Gateway custom authorizer to invoke an AWS Lambda function to validate each user's identity. • C. Use Amazon Cognito user pools to provide built-in user management. • D. Use Amazon Cognito user pools to integrate with external identity providers.

• B. Use an API Gateway custom authorizer to invoke an AWS Lambda function to validate each user's identity. • D. Use Amazon Cognito user pools to integrate with external identity providers. Cert Library gives the answer as B, but that seems like a lot of work to validate each user individually; the argument for B is that a custom authorizer can implement arbitrary lockout logic, such as counting failed attempts and denying access for exactly one hour. Amazon Cognito looks like the right line of reasoning at first glance because of the "existing identity provider" reference: Cognito user pools can federate with external identity providers such as Facebook and Google, which seems to narrow it to Option C or D. The wrench is the requirement that "users must be denied access for a period of one hour" after three failed attempts. Cognito does not expose a configurable lockout policy; it does apply a built-in lockout after repeated failed sign-in attempts, but the documentation does not describe a way to set the lockout to one hour. *Not sure what the correct answer is on this one.* Personally, I would go with Option D because of Cognito's inherent lockout behavior, even though its duration cannot be configured.

Which service should an organization use if it requires an easily managed and scalable platform to host its web application running on Nginx? • A. AWS Lambda • B. Auto Scaling • C. AWS Elastic Beanstalk • D. Elastic Load Balancing

• C. AWS Elastic Beanstalk Page 290 Developers can simply upload their application code, and the service automatically handles all of the details, such as resource provisioning, load balancing, Auto scaling, and monitoring.

An e-commerce application is hosted in AWS. The last time a new product was launched, the application experienced a performance issue due to an enormous spike in traffic. Management decided that capacity must be doubled the week after the product is launched. Which is the MOST efficient way for management to ensure that capacity requirements are met? • A. Add a Step Scaling policy. • B. Add a Dynamic Scaling policy. • C. Add a Scheduled Scaling action. • D. Add Amazon EC2 Spot Instances.

• C. Add a Scheduled Scaling action. The key to this question is the word "efficient." Answer C = The company KNOWS when a new product is being launched, so it can schedule a scaling action to double capacity for the week after the launch and have the resources in place before the traffic arrives. Options A and B = Step scaling and dynamic scaling react to load after it happens and might not add capacity fast enough for a sudden spike. Option D = Spot Instances suit workloads with no fixed timetable and can be interrupted or unavailable, which could leave the company unable to meet demand.

An organization is currently hosting a large amount of frequently accessed data consisting of key-value pairs and semi-structured documents in their data center. They are planning to move this data to AWS. Which one of the following services MOST effectively meets their needs? • A. Amazon Redshift • B. Amazon RDS • C. Amazon DynamoDB • D. Amazon Aurora

• C. Amazon DynamoDB Key points in this question: 1.) large amount of data 2.) frequently accessed 3.) key-value pairs 4.) semi-structured documents. Page 177: DynamoDB 1.) scales with ease, 2.) has low-latency performance, 3.) stores data as key/value pairs, and 4.) supports document data types (List and Map).

A company has a legacy application using a proprietary file system and plans to migrate the application to AWS. Which storage service should the company use? • A. Amazon DynamoDB • B. Amazon S3 • C. Amazon EBS • D. Amazon EFS

• C. Amazon EBS EFS only exposes a standard NFS file system, so it cannot host a proprietary file system. Answer C works because EBS is raw block storage (a blank slate) on which any file system can be created.

An application requires block storage for file updates. The data is 500 GB and must continuously sustain 100 MB/s of aggregate read/write operations. Which storage option is appropriate for this application? • A. Amazon S3 • B. Amazon EFS • C. Amazon EBS • D. Amazon Glacier

• C. Amazon EBS The key phrase is "block storage," which points to EBS (Elastic Block Store), and the sustained "read/write operations" requirement confirms it. Page 65: Amazon EBS provides persistent *block-level* storage volumes for use with Amazon EC2 instances. Page 67: a General Purpose SSD volume offers a maximum throughput of 160 MB/s, which covers the required 100 MB/s. So a General Purpose SSD EBS volume fits this application.

A social networking portal experiences latency and throughput issues due to an increased number of users. Application servers use very large datasets from an Amazon RDS database, which creates a performance bottleneck on the database. Which AWS service should be used to improve performance? • A. Auto Scaling • B. Amazon SQS • C. Amazon ElastiCache • D. ELB Application Load Balancer

• C. Amazon ElastiCache The two options to consider are B and C. Option B is not a fit because the bottleneck is not an excessive number of requests that need to be queued; it is the "very large datasets" being read repeatedly from the database. Answer C = Caching that frequently accessed data in ElastiCache reduces the load on the database server when answering those queries.

A Solutions Architect must select the storage type for a big data application that requires very high sequential I/O. The data must persist if the instance is stopped. Which of the following storage types will provide the best fit at the LOWEST cost for the application? • A. An Amazon EC2 instance store local SSD volume. • B. An Amazon EBS provisioned IOPS SSD volume. • C. An Amazon EBS throughput optimized HDD volume. • D. An Amazon EBS general purpose SSD volume.

• C. An Amazon EBS throughput optimized HDD volume. Three requirements: 1.) very high sequential I/O, 2.) data must persist if the instance is stopped, 3.) lowest cost. The "very high" requirement immediately suggests Provisioned IOPS (Option B), but Provisioned IOPS is expensive and not the "lowest cost" option, and all of the SSD options cost more than HDD (spinning disks). The only option that fits the lowest-cost requirement is the Throughput Optimized HDD volume, which is designed for "low-cost HDD volumes designed for frequent-access, throughput-intensive workloads such as big data, data warehouse, and log processing [...] Maximum throughput of 500 MB/s. *These volumes are significantly less expensive than general-purpose SSD volumes.*" (page 68)
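
A minimal boto3 sketch of provisioning such a volume (the Availability Zone is an assumption):

```python
import boto3

ec2 = boto3.client("ec2")

# 500 GB Throughput Optimized HDD (st1) volume; st1 is priced well below the SSD volume types.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",    # assumed AZ; must match the instance's AZ
    Size=500,
    VolumeType="st1",
)
print(volume["VolumeId"])
```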

A Solutions Architect is developing software on AWS that requires access to multiple AWS services, including an Amazon EC2 instance. This is a security sensitive application, and AWS credentials such as Access Key ID and Secret Access Key need to be protected and cannot be exposed anywhere in the system. What security measure would satisfy these requirements? • A. Store the AWS Access Key ID/Secret Access Key combination in software comments. • B. Assign an IAM user to the Amazon EC2 instance. • C. Assign an IAM role to the Amazon EC2 instance. • D. Enable multi-factor authentication for the AWS root account.

• C. Assign an IAM role to the Amazon EC2 instance. Option A would expose the credentials to anyone who can read the code or its comments. Option B does not work because an IAM user cannot be assigned to an EC2 instance; users are for people and long-lived credentials. Option D only protects the root account and does nothing for the application's credentials. Answer C = A role attached to the instance delivers temporary credentials automatically, so no access key ever needs to be stored or exposed, and access can still be tightly controlled.

A Solution Architect is designing an application that uses Amazon EBS volumes. The volumes must be backed up to a different region. How should the Architect meet this requirement? • A. Create EBS snapshots directly from one region to another. • B. Move the data to an Amazon S3 bucket and enable cross-region replication. • C. Create EBS snapshots and then copy them to the desired region. • D. Use a script to copy data from the current Amazon EBS volume to the destination Amazon EBS volume.

• C. Create EBS snapshots and then copy them to the desired region. Key point here is "backed up to a different region." Page 69 Snapshots are constrained to the region in which they are created, meaning you can use them to create new volumes only in the same region. If you need to restore a snapshot in a different region, you can copy a snapshot to another region. It is important to know that while snapshots are stored using Amazon S3 technology, they are stored *in AWS controlled storage* and *NOT* in *YOUR* S3 bucket. This means you CANNOT manipulate them like other S3 objects.
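
A short boto3 sketch of the snapshot copy (regions and the snapshot ID are hypothetical). Note that the API call is made against the destination region, naming the source region:

```python
import boto3

# The client must be created in the DESTINATION region; SourceRegion says where the snapshot lives.
ec2_dest = boto3.client("ec2", region_name="us-west-2")      # assumed destination region

copy = ec2_dest.copy_snapshot(
    SourceRegion="us-east-1",                                # assumed source region
    SourceSnapshotId="snap-0123456789abcdef0",               # hypothetical snapshot ID
    Description="Cross-region backup copy",
)
print(copy["SnapshotId"])
```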

A company is evaluating Amazon S3 as a data storage solution for their daily analyst reports. The company has implemented stringent requirements concerning the security of the data at rest. Specifically, the CISO asked for the use of envelope encryption with separate permissions for the use of an envelope key, automated rotation of the encryption keys, and visibility into when an encryption key was used and by whom. Which steps should a Solutions Architect take to satisfy the security requirements requested by the CISO? • A. Create an Amazon S3 bucket to store the reports and use Server-Side Encryption with Customer-Provided Keys (SSE-C). • B. Create an Amazon S3 bucket to store the reports and use Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3). • C. Create an Amazon S3 bucket to store the reports and use Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS). • D. Create an Amazon S3 bucket to store the reports and use Amazon s3 versioning with Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3).

• C. Create an Amazon S3 bucket to store the reports and use Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS). The key word here is "envelope." Page 275 states "AWS KMS uses envelope encryption to protect data." AWS KMS (Key Management Service) creates a data key, encrypts it under a customer master key (CMK), and returns plaintext and encrypted versions of the data key to you. KMS also provides the requested automated rotation of the encryption keys, and every use of a KMS key is logged in CloudTrail, giving visibility into when a key was used and by whom.
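
A hedged sketch of enforcing SSE-KMS by default on the reports bucket (bucket name and key alias are hypothetical):

```python
import boto3

s3 = boto3.client("s3")

# Make SSE-KMS the default encryption for every object written to the bucket.
s3.put_bucket_encryption(
    Bucket="daily-analyst-reports",                          # hypothetical bucket name
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "alias/analyst-reports",   # hypothetical KMS key alias
            }
        }]
    },
)
```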

A Solutions Architect is designing a web application that is running on an Amazon EC2 instance. The application stores data in DynamoDB. The Architect needs to secure access to the DynamoDB table. What combination of steps does AWS recommend to achieve secure authorization? *(Select two)* • A. Store an access key on the Amazon EC2 instance with rights to the Dynamo DB table. • B. Attach an IAM user to the Amazon EC2 instance. • C. Create an IAM role with permissions to write to the DynamoDB table. • D. Attach an IAM role to the Amazon EC2 instance. • E. Attach an IAM policy to the Amazon EC2 instance.

• C. Create an IAM role with permissions to write to the DynamoDB table. • D. Attach an IAM role to the Amazon EC2 instance.

A mobile application serves scientific articles from individual files in an Amazon S3 bucket. Articles older than 30 days are rarely read. Articles older than 60 days no longer need to be available through the application, but the application owner would like to keep them for historical purposes. Which cost-effective solution BEST meets these requirements? • A. Create a Lambda function to move files older than 30 days to Amazon EBS and move files older than 60 days to Amazon Glacier. • B. Create a Lambda function to move files older than 30 days to Amazon Glacier and move files older than 60 days to Amazon EBS. • C. Create lifecycle rules to move files older than 30 days to Amazon S3 Standard Infrequent Access and move files older than 60 days to Amazon Glacier. • D. Create lifecycle rules to move files older than 30 days to Amazon Glacier and move files older than 60 days to Amazon S3 Standard Infrequent Access.

• C. Create lifecycle rules to move files older than 30 days to Amazon S3 Standard Infrequent Access and move files older than 60 days to Amazon Glacier. The keys to this question are around the terms "rarely read" and "no longer needed" and "would like to keep for historical purposes." Amazon S3 Standard - Infrequent Access Page 31 Designed for long-lived, less frequently accessed data (minimum duration of 30 days to store data) [...] best suited for *infrequently accessed data* that is stored for longer than 30 days. Amazon Glacier - Page 31 Extremely low-cost cloud storage for data that does not require real-time access, such as archives or long-term backups.
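
A minimal sketch of the lifecycle configuration (the bucket name is hypothetical; the rule applies to every object because the prefix filter is empty):

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="scientific-articles",                            # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "tier-down-articles",
            "Filter": {"Prefix": ""},                        # apply to all objects
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"}, # rarely read after 30 days
                {"Days": 60, "StorageClass": "GLACIER"},     # archive after 60 days
            ],
        }]
    },
)
```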

A Solutions Architect is designing a log-processing solution that requires storage that supports up to 500 MB/s throughput. The data is sequentially accessed by an Amazon EC2 instance. Which Amazon storage type satisfies these requirements? • A. EBS Provisioned IOPS SSD (io1) • B. EBS General Purpose SSD (gp2) • C. EBS Throughput Optimized HDD (st1) • D. EBS Cold HDD (sc1)

• C. EBS Throughput Optimized HDD (st1) The keys in the question are "log-processing" and "up to 500 MB/s throughput." Throughput Optimized HDD = a low-cost HDD volume designed for frequently accessed, throughput-intensive workloads such as big data, data warehouses, and log processing, with a maximum throughput of 500 MB/s.

A Solutions Architect is designing a microservices-based application using Amazon ECS. The application includes a WebSocket component, and the traffic needs to be distributed between microservices based on the URL. Which service should the Architect choose to distribute the workload? • A. ELB Classic Load Balancer • B. Amazon Route 53 DNS • C. ELB Application Load Balancer • D. Amazon CloudFront

• C. ELB Application Load Balancer ECS = Elastic Container Service. The key here is that the application has a "WebSocket component" and traffic must be "distributed between microservices" based on the URL. *So, what are the basic differences between CLB and ALB?* *Classic Load Balancing:* This more closely resembles traditional load balancing, with virtual devices replacing physical hardware to evenly distribute incoming requests and ensure a clean, fast user experience. *Application Load Balancing:* Application load balancing identifies incoming traffic and directs it to the right resource type. For example, URLs tagged with /API extensions can be routed to the appropriate application resources, while traffic bound for /MOBILE can be directed to resources managing mobile access. The ALB also natively supports WebSocket connections and path-based routing, which is exactly what this use case needs. https://www.sumologic.com/insight/aws-elastic-load-balancers-classic-vs-application/

An AWS workload in a VPC is running a legacy database on an Amazon EC2 instance. Data is stored on a 200GB Amazon EBS (gp2) volume. At peak load times, logs show excessive wait time. What solution should be implemented to improve database performance using persistent storage? • A. Migrate the data on the Amazon EBS volume to an SSD-backed volume. • B. Change the EC2 instance type to one with EC2 instance store volumes. • C. Migrate the data on the EBS volume to provisioned IOPS SSD (io1). • D. Change the EC2 instance type to one with burstable performance.

• C. Migrate the data on the EBS volume to provisioned IOPS SSD (io1). The key is the EBS (gp2) reference. GP = General Purpose, and a 200 GB gp2 volume has a baseline of only about 600 IOPS (3 IOPS per GB, with bursting to 3,000), so at peak load requests queue up and produce the excessive wait times in the logs. Migrating to a Provisioned IOPS (io1) volume lets the administrator provision the IOPS the database actually needs while keeping persistent EBS storage.
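
EBS volumes can be modified in place, so the migration to io1 can be as simple as the hedged sketch below (volume ID and target IOPS are assumptions; a 200 GB io1 volume supports up to 10,000 provisioned IOPS at the 50:1 IOPS-to-GB ratio):

```python
import boto3

ec2 = boto3.client("ec2")

# Convert the existing gp2 volume to io1 with provisioned IOPS; no detach is required.
ec2.modify_volume(
    VolumeId="vol-0123456789abcdef0",   # hypothetical volume ID
    VolumeType="io1",
    Iops=10000,                         # assumed target for peak load
)
```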

A development team is building an application with front-end and backend application tiers. Each tier consists of Amazon EC2 instances behind an ELB Classic Load Balancer. The instances run in Auto Scaling groups across multiple Availability Zones. The network team has allocated the 10.0.0.0/24 address space for this application. Only the front-end load balancer should be exposed to the Internet. There are concerns about the limited size of the address space and the ability of each tier to scale. What should the VPC subnet design be in each Availability Zone? • A. One public subnet for the load balancer tier, one public subnet for the front-end tier, and one private subnet for the backend tier. • B. One shared public subnet for all tiers of the application. • C. One public subnet for the load balancer tier and one shared private subnet for the application tiers. • D. One shared private subnet for all tiers of the application.

• C. One public subnet for the load balancer tier and one shared private subnet for the application tiers. This is a poorly worded question, and C is the best of imperfect choices. Requirement: *only the front-end* load balancer is exposed to the Internet. The scenario has two tiers (front end and back end), each behind its own Classic Load Balancer, so putting the whole load balancer tier in a public subnet technically exposes both load balancers, not just the front-end one, unless access to the back-end load balancer is restricted separately. Arguably a cleaner design would keep the application tiers in private subnets, expose only the front-end load balancer publicly, and use security groups to allow the front-end tier to reach the back-end tier. *Option A* is worse because it puts the load balancers and the entire front-end tier in public subnets. *Option B* makes everything public, which directly contradicts the requirement. *Option D* leaves nothing exposed at all, so the front-end load balancer could not serve Internet traffic. Given the limited address space (a single /24 shared across the Availability Zones), one public subnet for the load balancers and one shared private subnet for both application tiers is the most workable answer.

An organization runs an online media site, hosted on-premises. An employee posted a product review that contained videos and pictures. The review went viral and the organization needs to handle the resulting spike in website traffic. What action would provide an immediate solution? • A. Redesign the website to use Amazon API Gateway, and use AWS Lambda to deliver content. • B. Add server instances using Amazon EC2 and use Amazon Route 53 with a failover routing policy. • C. Serve the images and videos via an Amazon CloudFront distribution created using the news site as the origin. • D. Use Amazon ElasticCache for Redis for caching and reducing the load requests from the origin.

• C. Serve the images and videos via an Amazon CloudFront distribution created using the news site as the origin. The key here is that you need an *immediate solution* --- otherwise, option A would be the best. In this instance though, you just need something right now and CloudFront would be best for this distribution.

A Solutions Architect is designing a stateful web application that will run for one year (24/7) and then be decommissioned. Load on this platform will be constant, using a number of r4.8xlarge instances. Key drivers for this system include high availability, but elasticity is not required. What is the MOST cost-effective way to purchase compute for this platform? • A. Scheduled Reserved Instances • B. Convertible Reserved Instances • C. Standard Reserved Instances • D. Spot Instances

• C. Standard Reserved Instances https://aws.amazon.com/ec2/pricing/reserved-instances/ Standard RIs save roughly 40% over a 1-year term (about 60% over 3 years); Convertible RIs save roughly 31% over 1 year (about 54% over 3 years). *Standard RIs:* These provide the most significant discount (up to 75% off On-Demand) and are *best suited for steady-state usage*, which matches a constant 24/7 load for one year. *Convertible RIs:* These provide a discount (up to 54% off On-Demand) and the ability to change the attributes of the RI as long as the exchange results in Reserved Instances of equal or greater value; they also suit steady-state usage, but the flexibility is not needed here and the discount is smaller. *Scheduled RIs:* These launch only within the time windows you reserve, matching capacity to a predictable recurring schedule that requires only a fraction of a day, week, or month; not a fit for a constant 24/7 workload.

A news organization plans to migrate their 20 TB video archive to AWS. The files are rarely accessed, but when they are, a request is made in advance and a 3 to 5-hour retrieval time frame is acceptable. However, when there is a breaking news story, the editors require access to archived footage within minutes. Which storage solution meets the needs of this organization while providing the LOWEST cost of storage? • A. Store the archive in Amazon S3 Reduced Redundancy Storage. • B. Store the archive in Amazon Glacier and use standard retrieval for all content. • C. Store the archive in Amazon Glacier and pay the additional charge for expedited retrieval when needed. • D. Store the archive in Amazon S3 with a lifecycle policy to move this to S3 Infrequent Access after 30 days.

• C. Store the archive in Amazon Glacier and pay the additional charge for expedited retrieval when needed.

A company plans to use AWS for all new batch processing workloads. The company's developers use Docker containers for the new batch processing. The system design must accommodate critical and non-critical batch processing workloads 24/7. How should a Solutions Architect design this architecture in a cost-efficient manner? • A. Purchase Reserved Instances to run all containers. Use Auto Scaling groups to schedule jobs. • B. Host a container management service on Spot Instances. Use Reserved Instances to run Docker containers. • C. Use Amazon ECS orchestration and Auto Scaling groups: one with Reserve Instances, one with Spot Instances. • D. Use Amazon ECS to manage container orchestration. Purchase Reserved Instances to run all batch workloads at the same time.

• C. Use Amazon ECS orchestration and Auto Scaling groups: one with Reserve Instances, one with Spot Instances. ECS (Elastic Container Service) fits because the developers already use Docker containers. An Auto Scaling group backed by Reserved Instances covers the critical batch workloads that must run 24/7, and the RI pricing keeps that always-on capacity inexpensive (especially with a 3-year commitment). The non-critical batch processing can run on Spot Instances, which are far cheaper but can be interrupted, which is acceptable because those jobs have no strict time constraint.

A Solutions Architect is designing a new social media application. The application must provide a secure method for uploading profile photos. Each user should be able to upload a profile photo into a shared storage location for one week after their profile is created. Which approach will meet all of these requirements? • A. Use Amazon Kinesis with AWS CloudTrail for auditing the specific times when profile photos are uploaded. • B. Use Amazon EBS volumes with IAM policies restricting user access to specific time periods. • C. Use Amazon S3 with the default private access policy and generate pre-signed URLs each time a new site profile is created. • D. Use Amazon CloudFront with AWS CloudTrail for auditing the specific times when profile photos are uploaded.

• C. Use Amazon S3 with the default private access policy and generate pre-signed URLs each time a new site profile is created. The key points in the question are "secure method for uploading" and "one week" time period. Page 34 The object owner can optionally share objects with others by creating a pre-signed URL, using their own security credentials to grant *time-limited permission* to download the objects. The pre-signed URLs are valid only for the *specified duration.*
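
A small sketch of generating such an upload URL with boto3 (bucket and key are hypothetical). Seven days (604,800 seconds) is also the maximum lifetime for a SigV4 pre-signed URL, which lines up with the one-week requirement:

```python
import boto3

s3 = boto3.client("s3")

# Time-limited upload URL valid for 7 days, matching the one-week window after profile creation.
url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "profile-photos", "Key": "users/1234/avatar.jpg"},  # hypothetical bucket/key
    ExpiresIn=604800,
)
print(url)  # the client uploads the photo with a plain HTTP PUT to this URL
```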

A company is launching a marketing campaign on their website tomorrow and expects a significant increase in traffic. The website is designed as a multi-tiered web architecture, and the increase in traffic could potentially overwhelm the current design. What should a Solutions Architect do to minimize the effects from a potential failure in one or more of the tiers? • A. Migrate the database to Amazon RDS. • B. Set up DNS failover to a static website. • C. Use Auto Scaling to keep up with the demand. • D. Use both a SQL and a NoSQL database in the design.

• C. Use Auto Scaling to keep up with the demand.

A Solutions Architect is building a multi-tier website. The web servers will be in a public subnet, and the database servers will be in a private subnet. Only the web servers can be accessed from the Internet. The database servers must have Internet access for software updates. Which solution meets the requirements? • A. Assign Elastic IP addresses to the database instances. • B. Allow Internet traffic on the private subnet through the network ACL. • C. Use a NAT Gateway. • D. Use an egress-only Internet Gateway.

• C. Use a NAT Gateway.

A Solutions Architect is designing network architecture for an application that has compliance requirements. The application will be hosted on Amazon EC2 instances in a private subnet and will be using Amazon S3 for storing data. The compliance requirements mandate that the data cannot traverse the public Internet. What is the MOST secure way to satisfy this requirement? • A. Use a NAT Instance. • B. Use a NAT Gateway. • C. Use a VPC endpoint. • D. Use a Virtual Private Gateway.

• C. Use a VPC endpoint. Answer C = EC2 instances in the VPC reach Amazon S3 through a gateway VPC endpoint, so the traffic never leaves the AWS network and therefore never "traverses the public Internet," meeting the compliance requirement (Chapter 7 summary, acloud.guru). All of the other options involve the public Internet: a NAT instance or NAT gateway translates the private IPs so the instances can reach public endpoints over the Internet, and a Virtual Private Gateway encrypts traffic but still carries it over the public Internet. The VPC endpoint is the only option that keeps the traffic private end to end.
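
A hedged sketch of creating the S3 gateway endpoint (VPC ID, route table ID, and region are hypothetical):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")           # assumed region

# Gateway endpoint for S3: adds a route so S3 traffic from the private subnet stays on the AWS network.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",                           # hypothetical VPC ID
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],                 # route table used by the private subnet
)
```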

A Solutions Architect is designing solution with AWS Lambda where different environments require different database passwords. What should the Architect do to accomplish this in a secure and scalable way? • A. Create a Lambda function for each individual environment. • B. Use Amazon DynamoDB to store environmental variables. • C. Use encrypted AWS Lambda environmental variables. • D. Implement a dedicated Lambda function for distributing variables.

• C. Use encrypted AWS Lambda environmental variables.

A company is migrating its data center to AWS. As part of this migration, there is a three-tier web application that has strict data-at-rest encryption requirements. The customer deploys this application on Amazon EC2 using Amazon EBS, and now must provide encryption at-rest. How can this requirement be met without changing the application? • A. Use AWS Key Management Service and move the encrypted data to Amazon S3. • B. Use an application-specific encryption API with AWS server-side encryption. • C. Use encrypted EBS storage volumes with AWS-managed keys. • D. Use third-party tools to encrypt the EBS data volumes with Key Management Service Bring Your Own Keys.

• C. Use encrypted EBS storage volumes with AWS-managed keys. The key is "without changing the application." The application is already designed for use with EC2 and EBS, so Option A (moving the data to S3) would violate that requirement. Options B and D also require changing the application or adding third-party tooling. Answer C = using encrypted EBS volumes with AWS-managed keys is transparent to the application and provides encryption at rest.

A Solutions Architect is designing a solution that includes a managed VPN connection. To monitor whether the VPN connection is up or down, the Architect should use: • A. an external service to ping the VPN endpoint from outside the VPC. • B. AWS CloudTrail to monitor the endpoint. • C. the CloudWatch TunnelState Metric. • D. an AWS Lambda function that parses the VPN connection logs.

• C. the CloudWatch TunnelState Metric. CloudTrail = Auditing CloudWatch = Monitoring Option D is possible, but doesn't make sense - why re-invent the wheel when CloudWatch already does it for you.
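
A rough sketch of turning that metric into an alarm (the VPN connection ID is hypothetical). TunnelState reports 1 when the tunnel is up and 0 when it is down:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm whenever the tunnel state drops below 1 (i.e. the tunnel is down).
cloudwatch.put_metric_alarm(
    AlarmName="vpn-tunnel-down",
    Namespace="AWS/VPN",
    MetricName="TunnelState",
    Dimensions=[{"Name": "VpnId", "Value": "vpn-0123456789abcdef0"}],   # hypothetical VPN connection ID
    Statistic="Maximum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="LessThanThreshold",
    TreatMissingData="breaching",
)
```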

A company's development team plans to create an Amazon S3 bucket that contains millions of images. The team wants to maximize the read performance of Amazon S3. Which naming scheme should the company use? • A. Add a date as the prefix. • B. Add a sequential id as the suffix. • C. Add a hexadecimal hash as the suffix. • D. Add a hexadecimal hash as the prefix.

• D. Add a hexadecimal hash as the prefix. https://docs.aws.amazon.com/AmazonS3/latest/dev/optimizing-performance.html S3 partitions its index by key *prefix*. The question then becomes which prefix is better, the date or a hexadecimal hash. A date prefix means that all keys written on a given day share the same prefix, so a high request rate concentrates on a single partition; a random hexadecimal hash prefix spreads the keys, and therefore the requests, evenly across many partitions, maximizing read performance.
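
A tiny illustration of the idea (hash length and key layout are arbitrary choices, not an AWS requirement):

```python
import hashlib

def hashed_key(filename: str) -> str:
    """Prepend a short hex hash so keys spread across S3 index partitions
    instead of piling up behind a single sequential or date prefix."""
    prefix = hashlib.md5(filename.encode()).hexdigest()[:4]
    return f"{prefix}/{filename}"

print(hashed_key("2018-06-01-image-000123.jpg"))   # e.g. "a1b2/2018-06-01-image-000123.jpg"
```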

An Administrator is hosting an application on a single Amazon EC2 instance, which users can access by the public hostname. The administrator is adding a second instance, but does not want users to have to decide between many public hostnames. Which AWS service will decouple the users from specific Amazon EC2 instances? • A. Amazon SQS • B. Auto Scaling group • C. Amazon EC2 security group • D. Amazon ELB

• D. Amazon ELB What is interesting is that ELB can also provide "sticky sessions," which "bind a user's session to a specific instance" (page 116). AWS ELB provides "synchronous decoupling": it "distributes incoming application traffic across multiple EC2 instances, in multiple AZs," so users connect to one hostname (the load balancer's) instead of choosing between individual instance hostnames. https://medium.com/@pablo.iorio/synchronous-and-asynchronous-aws-decoupling-solutions-1bbe74697db9

A Solutions Architect is designing an architecture for a mobile gaming application. The application is expected to be very popular. The Architect needs to prevent the Amazon RDS MySQL database from becoming a bottleneck due to frequently accessed queries. Which service or feature should the Architect add to prevent a bottleneck? • A. Multi-AZ feature on the RDS MySQL Database • B. ELB Classic Load Balancer in front of the web application tier • C. Amazon SQS in front of RDS MySQL Database • D. Amazon ElastiCache in front of the RDS MySQL Database

• D. Amazon ElastiCache in front of the RDS MySQL Database The key words in the question are "frequently accessed queries." Pages 250-251 state: "Compared to retrieving data from an in-memory cache, querying a database is an expensive operation. By storing or moving frequently accessed data in-memory, application developers can significantly improve the performance and responsiveness of read-heavy applications."

A company has an application that stores sensitive data. The company is required by government regulations to store multiple copies of its data. What would be the MOST resilient and cost-effective option to meet this requirement? • A. Amazon EFS • B. Amazon RDS • C. AWS Storage Gateway • D. Amazon S3

• D. Amazon S3

A media company asked a Solutions Architect to design a highly available storage solution to serve as a centralized document store for their Amazon EC2 instances. The storage solution needs to be POSIX-compliant, scale dynamically, and be able to serve up to 100 concurrent EC2 instances. Which solution meets these requirements? • A. Create an Amazon S3 bucket and store all of the documents in this bucket. • B. Create an Amazon EBS volume and allow multiple users to mount that volume to their EC2 instance(s). • C. Use Amazon Glacier to store all of the documents. • D. Create an Amazon Elastic File System (Amazon EFS) to store and share the documents.

• D. Create an Amazon Elastic File System (Amazon EFS) to store and share the documents. https://docs.aws.amazon.com/efs/latest/ug/creating-using.html Amazon EFS provides elastic, shared file storage that is POSIX-compliant. The file system you create supports concurrent read and write access from multiple Amazon EC2 instances and is accessible from all of the Availability Zones in the AWS Region where it is created. You can mount an Amazon EFS file system on EC2 instances in your virtual private cloud (VPC) based on Amazon VPC using the Network File System versions 4.0 and 4.1 protocol

A popular e-commerce application runs on AWS. The application encounters performance issues. The database is unable to handle the amount of queries and load during peak times. The database is running on the RDS Aurora engine on the largest instance size available. What should an administrator do to improve performance? • A. Convert the database to Amazon Redshift. • B. Create a CloudFront distribution. • C. Convert the database to use EBS Provisioned IOPS. • D. Create one or more read replicas.

• D. Create one or more read replicas. The key is that the database is already on the "largest instance size available," so it cannot be scaled up any further (vertical scaling is ruled out), and no sharding or partitioning option is offered. The remaining way to take load off the primary is to create read replicas and distribute read traffic across them. Page 171: another important scaling technique is to use read replicas to offload read transactions from the primary database and increase the overall number of transactions. Do this when you need to: scale beyond the capacity of a single DB instance for read-heavy workloads; handle read traffic while the source DB instance is unavailable; offload reporting or data warehousing queries to a replica instead of the primary DB instance. ***It is worth noting that there are other ways to scale, such as vertical scaling (larger instances) and horizontal scaling through partitioning or sharding, but those were not options in this question.

A Solutions Architect is designing a solution for a media company that will stream large amounts of data from an Amazon EC2 instance. The data streams are typically large and sequential, and must be able to support up to 500 MB/s. Which storage type will meet the performance requirements of this application? • A. EBS Provisioned IOPS SSD • B. EBS General Purpose SSD • C. EBS Cold HDD • D. EBS Throughput Optimized HDD

• D. EBS Throughput Optimized HDD

A Solutions Architect needs to design a solution that will enable a security team to detect, review, and perform root cause analysis of security incidents that occur in a cloud environment. The Architect must provide a centralized view of all API events for current and future AWS regions. How should the Architect accomplish this task? • A. Enable AWS CloudTrail logging in each individual region. Repeat this for all future regions. • B. Enable Amazon CloudWatch logs for all AWS services across all regions and aggregate them in a single Amazon S3 bucket. • C. Enable AWS Trusted Advisor security checks and report all security incidents for all regions. • D. Enable AWS CloudTrail by creating a new trail and apply the trail to all regions.

• D. Enable AWS CloudTrail by creating a new trail and apply the trail to all regions. The keys are the "centralized view" requirement and how CloudTrail works. Option A is inefficient: a trail applied to all regions covers every current and future region automatically, so there is no need to repeat the setup per region. Option B uses CloudWatch (monitoring) when the requirement is auditing of API events, which is CloudTrail's job. Option C does not work because Trusted Advisor performs best-practice checks; it does not record API events. Page 276: when you create a trail that applies to all AWS regions, AWS CloudTrail creates the same trail in each region, records the log files in each region, and delivers the log files to the single Amazon S3 bucket that you specify. In other words, select "All Regions" when creating the trail and designate a single S3 bucket as the destination, and events from every region, including regions added later, are aggregated centrally.
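
A minimal boto3 sketch of answer D (trail and bucket names are hypothetical; the S3 bucket needs a bucket policy that allows CloudTrail to write to it):

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# One trail that records API events from all current and future regions into a single bucket.
cloudtrail.create_trail(
    Name="org-wide-audit-trail",                  # hypothetical trail name
    S3BucketName="central-cloudtrail-logs",       # hypothetical central log bucket
    IsMultiRegionTrail=True,
)
cloudtrail.start_logging(Name="org-wide-audit-trail")
```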

A Solution Architect is designing a disaster recovery solution for a 5 TB Amazon Redshift cluster. The recovery site must be at least 500 miles (805 kilometers) from the live site. How should the Architect meet these requirements? • A. Use AWS CloudFormation to deploy the cluster in a second region. • B. Take a snapshot of the cluster and copy it to another Availability Zone. • C. Modify the Redshift cluster to span two regions. • D. Enable cross-region snapshots to a different region.

• D. Enable cross-region snapshots to a different region. Page 176: Amazon Redshift supports both automated snapshots and manual snapshots, and snapshots can be copied or shared across regions or even with other AWS accounts. Regarding distance, AWS states that all Availability Zones in a region are within roughly 100 km (60 miles) of each other, which falls far short of the 500-mile requirement, so the recovery copy must live in a different region; enabling cross-region snapshot copy to a suitably distant region meets the requirement.

A company is launching a static website using the zone apex (mycompany.com). The company wants to use Amazon Route 53 for DNS. Which steps should the company perform to implement a scalable and cost-effective solution? *(Choose two)* • A. Host the website on an Amazon EC2 instance with ELB and Auto Scaling, and map a Route 53 alias record to the ELB endpoint. • B. Host the website using AWS Elastic Beanstalk, and map a Route 53 alias record to the Beanstalk stack. • C. Host the website on an Amazon EC2 instance, and map a Route 53 alias record to the public IP address of the Amazon EC2 instance. • D. Serve the website from an Amazon S3 bucket, and map a Route 53 alias record to the website endpoint. • E. Create a Route 53 hosted zone, and set the NS records of the domain to use Route 53 name servers.

• D. Serve the website from an Amazon S3 bucket, and map a Route 53 alias record to the website endpoint. • E. Create a Route 53 hosted zone, and set the NS records of the domain to use Route 53 name servers. The key word is "static" website, which should immediately trigger thoughts of S3 static website hosting; the EC2 and Elastic Beanstalk options add unnecessary cost for static content. Since two answers are required, the other is Option E: creating the hosted zone and pointing the domain's NS records at Route 53 is what makes Route 53 authoritative for the domain, and an alias record at the zone apex can then point to the S3 website endpoint.

A Solutions Architect is designing the storage layer for a production relational database. The database will run on Amazon EC2. The database is accessed by an application that performs intensive reads and writes, so the database requires the LOWEST random I/O latency. Which data storage method fulfills the above requirements? • A. Store data in a filesystem backed by Amazon Elastic File System (EFS). • B. Store data in Amazon S3 and use a third-party solution to expose Amazon S3 as a filesystem to the database server. • C. Store data in Amazon Dynamo DB and emulate relational database semantics. • D. Stripe data across multiple Amazon EBS volumes using RAID 0.

• D. Stripe data across multiple Amazon EBS volumes using RAID 0. https://searchstorage.techtarget.com/definition/RAID-0-disk-striping RAID 0 (disk striping) is the process of dividing a body of data into blocks and spreading the data blocks across multiple storage devices, such as hard disks or solid-state drives (SSDs), in a redundant array of independent disks (RAID) group. A stripe consists of the data divided across the set of hard disks or SSDs, and a striped unit refers to the data slice on an individual drive. Because striping spreads data across more physical drives, multiple disks can access the contents of a file, enabling writes and reads to be completed more quickly. However, unlike other RAID levels, RAID 0 does not have parity. Disk striping without parity data does not have redundancy or fault tolerance. That means, if a drive fails, all data on that drive is lost.

A customer has written an application that uses Amazon S3 exclusively as a data store. The application works well until the customer increases the rate at which the application is updating information. The customer now reports that outdated data occasionally appears when the application accesses objects in Amazon S3. What could be the problem, given that the application logic is otherwise correct? • A. The application is reading parts of objects from Amazon S3 using a range header. • B. The application is reading objects from Amazon S3 using parallel object requests. • C. The application is updating records by writing new objects with unique keys. • D. The application is updating records by overwriting existing objects with the same keys.

• D. The application is updating records by overwriting existing objects with the same keys. The key to this question is that the application is "updating information" and "outdated data occasionally appears," which points to S3's consistency model rather than a bug. Page 28: Amazon S3 is an *"eventually consistent"* system. Because your data is automatically replicated across multiple servers and locations within a region, changes may take some time to propagate to all locations. For PUTs of new objects, this is *NOT* a concern because Amazon S3 provides read-after-write consistency. For PUTs to existing objects (overwriting an existing key) and for object DELETEs, Amazon S3 provides *"eventual consistency"*: a GET issued shortly after the overwrite may return the old data. Thus, D is correct because the application is "overwriting existing objects," while Option C describes writing new objects with unique keys, which gets read-after-write consistency and would not show stale data.

A client notices that their engineers often make mistakes when creating Amazon SQS queues for their backend system. Which action should a Solutions Architect recommend to improve this process? • A. Use the AWS CLI to create queues using AWS IAM Access Keys. • B. Write a script to create the Amazon SQS queue using AWS Lambda. • C. Use AWS Elastic Beanstalk to automatically create the Amazon SQS queues. • D. Use AWS CloudFormation Templates to manage the Amazon SQS queue creation.

• D. Use AWS CloudFormation Templates to manage the Amazon SQS queue creation.
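
A minimal sketch of what answer D looks like in practice: a tiny CloudFormation template defining the queue, launched with boto3 (stack name, queue name, and property values are hypothetical; in practice the template would live in source control and be reviewed before use):

```python
import boto3

TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  BackendQueue:
    Type: AWS::SQS::Queue
    Properties:
      QueueName: backend-jobs
      VisibilityTimeout: 60
"""

cloudformation = boto3.client("cloudformation")
cloudformation.create_stack(StackName="backend-queues", TemplateBody=TEMPLATE)
```

Because the queue settings are captured in a template, engineers create queues the same way every time instead of hand-configuring them and making mistakes in the console.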

