aws-saa-c02
Question #265 (Topic 1)
A company that hosts its web application on AWS wants to ensure all Amazon EC2 instances, Amazon RDS DB instances, and Amazon Redshift clusters are configured with tags. The company wants to minimize the effort of configuring and operating this check. What should a solutions architect do to accomplish this?
A. Use AWS Config rules to define and detect resources that are not properly tagged.
B. Use Cost Explorer to display resources that are not properly tagged. Tag those resources manually.
C. Write API calls to check all resources for proper tag allocation. Periodically run the code on an EC2 instance.
D. Write API calls to check all resources for proper tag allocation. Schedule an AWS Lambda function through Amazon CloudWatch to periodically run the code.
A
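For anyone curious what option A looks like in practice, here is a minimal boto3 sketch (the rule name and tag key are made up for illustration) that creates the AWS-managed REQUIRED_TAGS Config rule scoped to EC2 instances, RDS DB instances, and Redshift clusters:

```python
import json
import boto3

config = boto3.client("config")

# AWS-managed rule that flags resources missing the required tag keys.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "required-tags-check",  # hypothetical name
        "Scope": {
            "ComplianceResourceTypes": [
                "AWS::EC2::Instance",
                "AWS::RDS::DBInstance",
                "AWS::Redshift::Cluster",
            ]
        },
        "Source": {"Owner": "AWS", "SourceIdentifier": "REQUIRED_TAGS"},
        # Example: require an "Environment" tag key on every scoped resource.
        "InputParameters": json.dumps({"tag1Key": "Environment"}),
    }
)
```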
Question #260 (Topic 1)
A company is planning to migrate a commercial off-the-shelf application from its on-premises data center to AWS. The software has a software licensing model using sockets and cores with predictable capacity and uptime requirements. The company wants to use its existing licenses, which were purchased earlier this year. Which Amazon EC2 pricing option is the MOST cost-effective?
A. Dedicated Reserved Hosts
B. Dedicated On-Demand Hosts
C. Dedicated Reserved Instances
D. Dedicated On-Demand Instances
A. Answer is A: the requirement is "software has a software licensing model using sockets and cores", and Dedicated Hosts give you visibility of sockets and physical cores. You have visibility of the number of sockets and physical cores that support your instances on a Dedicated Host, and you can use this information to manage licensing for your own server-bound software that is licensed per-socket or per-core. See the link below: https://aws.amazon.com/ec2/dedicated-hosts/
Question #246 (Topic 1)
A company is developing a mobile game that streams score updates to a backend processor and then posts results on a leaderboard. A solutions architect needs to design a solution that can handle large traffic spikes, process the mobile game updates in order of receipt, and store the processed updates in a highly available database. The company also wants to minimize the management overhead required to maintain the solution. What should the solutions architect do to meet these requirements?
A. Push score updates to Amazon Kinesis Data Streams. Process the updates in Kinesis Data Streams with AWS Lambda. Store the processed updates in Amazon DynamoDB.
B. Push score updates to Amazon Kinesis Data Streams. Process the updates with a fleet of Amazon EC2 instances set up for Auto Scaling. Store the processed updates in Amazon Redshift.
C. Push score updates to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe an AWS Lambda function to the SNS topic to process the updates. Store the processed updates in a SQL database running on Amazon EC2.
D. Push score updates to an Amazon Simple Queue Service (Amazon SQS) queue. Use a fleet of Amazon EC2 instances with Auto Scaling to process the updates in the SQS queue. Store the processed updates in an Amazon RDS Multi-AZ DB instance.
A Gotta go with A here.
Question #256 (Topic 1)
A solutions architect needs to ensure that all Amazon Elastic Block Store (Amazon EBS) volumes restored from unencrypted EBS snapshots are encrypted. What should the solutions architect do to accomplish this?
A. Enable EBS encryption by default for the AWS Region.
B. Enable EBS encryption by default for the specific volumes.
C. Create a new volume and specify the symmetric customer master key (CMK) to use for encryption.
D. Create a new volume and specify the asymmetric customer master key (CMK) to use for encryption.
A. People, it has to be A! The question asks to ensure that ALL restored volumes are encrypted, so it has to be "Enable encryption by default". Read here: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html#encryption-by-default
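For reference, enabling encryption by default is a one-call, per-Region setting. A minimal boto3 sketch (the Region is chosen only as an example):

```python
import boto3

# Encryption by default is a per-Region setting, so set it in each Region you use.
ec2 = boto3.client("ec2", region_name="us-east-1")  # example Region

ec2.enable_ebs_encryption_by_default()
status = ec2.get_ebs_encryption_by_default()
print("EBS encryption by default:", status["EbsEncryptionByDefault"])
```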
Question #270 (Topic 1)
A company has a hybrid application hosted on multiple on-premises servers with static IP addresses. There is already a VPN that provides connectivity between the VPC and the on-premises network. The company wants to distribute TCP traffic across the on-premises servers for internet users. What should a solutions architect recommend to provide a highly available and scalable solution?
A. Launch an internet-facing Network Load Balancer (NLB) and register on-premises IP addresses with the NLB.
B. Launch an internet-facing Application Load Balancer (ALB) and register on-premises IP addresses with the ALB.
C. Launch an Amazon EC2 instance, attach an Elastic IP address, and distribute traffic to the on-premises servers.
D. Launch an Amazon EC2 instance with public IP addresses in an Auto Scaling group and distribute traffic to the on-premises servers.
A We're talking about Layer 4, it has to be A.
Question #248 (Topic 1)
A group requires permissions to list an Amazon S3 bucket and delete objects from that bucket. An administrator has created the following IAM policy to provide access to the bucket and applied that policy to the group. The group is not able to delete objects in the bucket. The company follows least-privilege access rules.
Question #257 (Topic 1)
A company runs a static website through its on-premises data center. The company has multiple servers that handle all of its traffic, but on busy days, services are interrupted and the website becomes unavailable. The company wants to expand its presence globally and plans to triple its website traffic. What should a solutions architect recommend to meet these requirements?
A. Migrate the website content to Amazon S3 and host the website on Amazon CloudFront.
B. Migrate the website content to Amazon EC2 instances with public Elastic IP addresses in multiple AWS Regions.
C. Migrate the website content to Amazon EC2 instances and vertically scale as the load increases.
D. Use Amazon Route 53 to distribute the loads across multiple Amazon CloudFront distributions for each AWS Region that exists globally.
A it is: CloudFront can serve static websites, as per the snippet below. Snippet from AWS documentation: "Amazon CloudFront is a global Content Delivery Network (CDN), which will host your website on a global network of edge servers, helping users load your website more quickly. When requests for your website content come through, they are automatically routed to the nearest edge location, closest to where the request originated from, so your content is delivered to your end user with the best possible performance."
Question #258 (Topic 1)
A company has a highly dynamic batch processing job that uses many Amazon EC2 instances to complete it. The job is stateless in nature, can be started and stopped at any given time with no negative impact, and typically takes upwards of 60 minutes total to complete. The company has asked a solutions architect to design a scalable and cost-effective solution that meets the requirements of the job. What should the solutions architect recommend?
A. Implement EC2 Spot Instances.
B. Purchase EC2 Reserved Instances.
C. Implement EC2 On-Demand Instances.
D. Implement the processing on AWS Lambda.
A. The job can be started and stopped at any given time with no negative impact, which is the perfect scenario for a flexible workload, so Spot Instances.
Question #243 (Topic 1)
A company has copied 1 PB of data from a colocation facility to an Amazon S3 bucket in the us-east-1 Region using an AWS Direct Connect link. The company now wants to copy the data to another S3 bucket in the us-west-2 Region. The colocation facility does not allow the use of AWS Snowball. What should a solutions architect recommend to accomplish this?
A. Order a Snowball Edge device to copy the data from one Region to another Region.
B. Transfer contents from the source S3 bucket to a target S3 bucket using the S3 console.
C. Use the aws s3 sync command to copy data from the source bucket to the destination bucket.
D. Add a cross-Region replication configuration to copy objects across S3 buckets in different Regions.
Answer C => https://aws.amazon.com/premiumsupport/knowledge-center/move-objects-s3-bucket/
Question #245 (Topic 1)
A company is deploying a multi-instance application within AWS that requires minimal latency between the instances. What should a solutions architect recommend?
A. Use an Auto Scaling group with a cluster placement group.
B. Use an Auto Scaling group with single Availability Zone in the same AWS Region.
C. Use an Auto Scaling group with multiple Availability Zones in the same AWS Region.
D. Use a Network Load Balancer with multiple Amazon EC2 Dedicated Hosts as the targets.
Answer is A https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
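Rough boto3 sketch of what option A means (the group name, AMI ID, and instance type are placeholders): create a cluster placement group and launch instances into it; an Auto Scaling group would reference the same group through its launch template.

```python
import boto3

ec2 = boto3.client("ec2")

# A cluster placement group packs instances close together for low inter-instance latency.
ec2.create_placement_group(GroupName="low-latency-app", Strategy="cluster")

# Instances (or an Auto Scaling group's launch template) then reference the group.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="c5n.large",          # placeholder instance type
    MinCount=1,
    MaxCount=1,
    Placement={"GroupName": "low-latency-app"},
)
```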
Question #244 (Topic 1)
A company is using a fleet of Amazon EC2 instances to ingest data from on-premises data sources. The data is in JSON format and ingestion rates can be as high as 1 MB/s. When an EC2 instance is rebooted, the data in-flight is lost. The company's data science team wants to query ingested data in near-real time. Which solution provides near-real-time data querying that is scalable with minimal data loss?
A. Publish data to Amazon Kinesis Data Streams. Use Kinesis Data Analytics to query the data.
B. Publish data to Amazon Kinesis Data Firehose with Amazon Redshift as the destination. Use Amazon Redshift to query the data.
C. Store ingested data in an EC2 instance store. Publish data to Amazon Kinesis Data Firehose with Amazon S3 as the destination. Use Amazon Athena to query the data.
D. Store ingested data in an Amazon Elastic Block Store (Amazon EBS) volume. Publish data to Amazon ElastiCache for Redis. Subscribe to the Redis channel to query the data.
Answer is B. Kinesis Data Streams consists of shards: the more throughput you need, the more shards you add, and the less throughput you need, the more shards you remove, so it is scalable. Each shard can handle up to 1 MB/s of writes. However, Kinesis Data Streams stores ingested data for only 1 to 7 days, so there is a chance of data loss. Additionally, Kinesis Data Analytics and Kinesis Data Streams are both for real-time ingestion and analytics. Firehose, on the other hand, is also scalable and processes data in near real time, as the requirement states. It also delivers data into Redshift, which is a data warehouse, so data won't be lost. Redshift also has a SQL interface for performing queries for data analytics. This information was sourced from the Ultimate AWS Certified Solutions Architect 2020 course with Stephane Maarek.
Question #247 (Topic 1)
A company is building a document storage application on AWS. The application runs on Amazon EC2 instances in multiple Availability Zones. The company requires the document store to be highly available. The documents need to be returned immediately when requested. The lead engineer has configured the application to use Amazon Elastic Block Store (Amazon EBS) to store the documents, but is willing to consider other options to meet the availability requirement. What should a solutions architect recommend?
A. Snapshot the EBS volumes regularly and build new volumes using those snapshots in additional Availability Zones.
B. Use Amazon EBS for the EC2 instance root volumes. Configure the application to build the document store on Amazon S3.
C. Use Amazon EBS for the EC2 instance root volumes. Configure the application to build the document store on Amazon S3 Glacier.
D. Use at least three Provisioned IOPS EBS volumes for EC2 instances. Mount the volumes to the EC2 instances in a RAID 5 configuration.
B
Question #253 (Topic 1)
A company provides an API to its users that automates inquiries for tax computations based on item prices. The company experiences a larger number of inquiries during the holiday season only, which causes slower response times. A solutions architect needs to design a solution that is scalable and elastic. What should the solutions architect do to accomplish this?
A. Provide an API hosted on an Amazon EC2 instance. The EC2 instance performs the required computations when the API request is made.
B. Design a REST API using Amazon API Gateway that accepts the item names. API Gateway passes item names to AWS Lambda for tax computations.
C. Create an Application Load Balancer that has two Amazon EC2 instances behind it. The EC2 instances will compute the tax on the received item names.
D. Design a REST API using Amazon API Gateway that connects with an API hosted on an Amazon EC2 instance. API Gateway accepts and passes the item names to the EC2 instance for tax computations.
B A. This isn't a scalable and elastic option. B. Sounds about right, Api Gateway is scalable, and elastic, same as Lambda. C. How is this elastic? We need an ASG. D. It doesn't have elasticity or scalability.
Question #261 (Topic 1)
A company is designing a website that uses an Amazon S3 bucket to store static images. The company wants all future requests to have faster response times while reducing both latency and cost. Which service configuration should a solutions architect recommend?
A. Deploy a NAT server in front of Amazon S3.
B. Deploy Amazon CloudFront in front of Amazon S3.
C. Deploy a Network Load Balancer in front of Amazon S3.
D. Configure Auto Scaling to automatically adjust the capacity of the website.
B A. What does a NAT server have to do with S3? B. Sounds about right. C. What? D. What? x2
Question #266 (Topic 1)
A company has a live chat application running on its on-premises servers that use WebSockets. The company wants to migrate the application to AWS. Application traffic is inconsistent, and the company expects there to be more traffic with sharp spikes in the future. The company wants a highly scalable solution with no server maintenance or advanced capacity planning. Which solution meets these requirements?
A. Use Amazon API Gateway and AWS Lambda with an Amazon DynamoDB table as the data store. Configure the DynamoDB table for provisioned capacity.
B. Use Amazon API Gateway and AWS Lambda with an Amazon DynamoDB table as the data store. Configure the DynamoDB table for on-demand capacity.
C. Run Amazon EC2 instances behind an Application Load Balancer in an Auto Scaling group with an Amazon DynamoDB table as the data store. Configure the DynamoDB table for on-demand capacity.
D. Run Amazon EC2 instances behind a Network Load Balancer in an Auto Scaling group with an Amazon DynamoDB table as the data store. Configure the DynamoDB table for provisioned capacity.
B Answer is B. EC2 cannot be correct as question states "no server maintenance"
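If it helps, this is roughly how the on-demand capacity setting in option B is expressed with boto3 (the table and key names are invented for illustration):

```python
import boto3

dynamodb = boto3.client("dynamodb")

# On-demand capacity (PAY_PER_REQUEST) scales with traffic and needs no capacity planning.
dynamodb.create_table(
    TableName="ChatMessages",  # hypothetical table
    AttributeDefinitions=[
        {"AttributeName": "RoomId", "AttributeType": "S"},
        {"AttributeName": "SentAt", "AttributeType": "N"},
    ],
    KeySchema=[
        {"AttributeName": "RoomId", "KeyType": "HASH"},
        {"AttributeName": "SentAt", "KeyType": "RANGE"},
    ],
    BillingMode="PAY_PER_REQUEST",
)
```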
Question #273 (Topic 1)
A company has an image processing workload running on Amazon Elastic Container Service (Amazon ECS) in two private subnets. Each private subnet uses a NAT instance for internet access. All images are stored in Amazon S3 buckets. The company is concerned about the data transfer costs between Amazon ECS and Amazon S3. What should a solutions architect do to reduce costs?
A. Configure a NAT gateway to replace the NAT instances.
B. Configure a gateway endpoint for traffic destined to Amazon S3.
C. Configure an interface endpoint for traffic destined to Amazon S3.
D. Configure Amazon CloudFront for the S3 bucket storing the images.
B. Configure a gateway endpoint for traffic destined to Amazon S3. Data transfer between Amazon ECS and Amazon S3 can then stay on the AWS network without going out to the internet, so an S3 gateway endpoint is enough.
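A minimal boto3 sketch of option B, assuming placeholder VPC and route table IDs; the endpoint is added to the route tables used by the private subnets:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # example Region

# A gateway endpoint routes S3 traffic over the AWS network instead of the NAT path,
# so ECS tasks in the private subnets avoid NAT data processing charges.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",                   # placeholder VPC ID
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=[                                  # private subnets' route tables
        "rtb-0aaaaaaaaaaaaaaaa",
        "rtb-0bbbbbbbbbbbbbbbb",
    ],
)
```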
Question #262 (Topic 1)
A company has an on-premises MySQL database used by the global sales team with infrequent access patterns. The sales team requires the database to have minimal downtime. A database administrator wants to migrate this database to AWS without selecting a particular instance type in anticipation of more users in the future. Which service should a solutions architect recommend?
A. Amazon Aurora MySQL
B. Amazon Aurora Serverless for MySQL
C. Amazon Redshift Spectrum
D. Amazon RDS for MySQL
B. Correct answer is B; with A you still have to choose the instance type.
Question #239 (Topic 1)
A company has media and application files that need to be shared internally. Users currently are authenticated using Active Directory and access files from a Microsoft Windows platform. The chief executive officer wants to keep the same user permissions, but wants the company to improve the process as the company is reaching its storage capacity limit. What should a solutions architect recommend?
A. Set up a corporate Amazon S3 bucket and move all media and application files.
B. Configure Amazon FSx for Windows File Server and move all the media and application files.
C. Configure Amazon Elastic File System (Amazon EFS) and move all media and application files.
D. Set up Amazon EC2 on Windows, attach multiple Amazon Elastic Block Store (Amazon EBS) volumes, and move all media and application files.
B. It says that the files need to be shared internally, and it's using Active Directory. Amazon FSx for Windows File Server sounds about right. (B)
Question #242 (Topic 1)
A company hosts an application used to upload files to an Amazon S3 bucket. Once uploaded, the files are processed to extract metadata, which takes less than 5 seconds. The volume and frequency of the uploads varies from a few files each hour to hundreds of concurrent uploads. The company has asked a solutions architect to design a cost-effective architecture that will meet these requirements. What should the solutions architect recommend?
A. Configure AWS CloudTrail trails to log S3 API calls. Use AWS AppSync to process the files.
B. Configure an object-created event notification within the S3 bucket to invoke an AWS Lambda function to process the files.
C. Configure Amazon Kinesis Data Streams to process and send data to Amazon S3. Invoke an AWS Lambda function to process the files.
D. Configure an Amazon Simple Notification Service (Amazon SNS) topic to process the files uploaded to Amazon S3. Invoke an AWS Lambda function to process the files.
B. The problem with C is how it sends the data to S3; if it were Firehose it would make sense. I think it's B.
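For completeness, option B's object-created notification can be wired up roughly like this with boto3 (the bucket name and function ARN are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# Invoke the metadata-extraction Lambda function on every object upload.
# (The function must also grant s3.amazonaws.com invoke permission via lambda add_permission.)
s3.put_bucket_notification_configuration(
    Bucket="upload-bucket-example",  # hypothetical bucket
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:extract-metadata",
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    },
)
```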
Question #271 (Topic 1)
Management has decided to deploy all AWS VPCs with IPv6 enabled. After some time, a solutions architect tries to launch a new instance and receives an error stating that there is not enough IP address space available in the subnet. What should the solutions architect do to fix this?
A. Check to make sure that only IPv6 was used during the VPC creation.
B. Create a new IPv4 subnet with a larger range, and then launch the instance.
C. Create a new IPv6-only subnet with a large range, and then launch the instance.
D. Disable the IPv4 subnet and migrate all instances to IPv6 only. Once that is complete, launch the instance.
B. This one is tricky. A: how can this fix the issue? B: this could work. C: this won't work, since it proposes an IPv6-only subnet and you can't disable IPv4. D: you can't disable the IPv4 CIDR. It cannot be A, C, or D because "You cannot disable IPv4 support for your VPC and subnets; this is the default IP addressing system for Amazon VPC and Amazon EC2." In no way can you just use IPv6, so the answer is B.
Question #252 (Topic 1)
A company is building a media sharing application and decides to use Amazon S3 for storage. When a media file is uploaded, the company starts a multi-step process to create thumbnails, identify objects in the images, transcode videos into standard formats and resolutions, and extract and store the metadata to an Amazon DynamoDB table. The metadata is used for searching and navigation. The amount of traffic is variable. The solution must be able to scale to handle spikes in load without unnecessary expenses. What should a solutions architect recommend to support this workload?
A. Build the processing into the website or mobile app used to upload the content to Amazon S3. Save the required data to the DynamoDB table when the objects are uploaded.
B. Trigger AWS Step Functions when an object is stored in the S3 bucket. Have the Step Functions perform the steps needed to process the object and then write the metadata to the DynamoDB table.
C. Trigger an AWS Lambda function when an object is stored in the S3 bucket. Have the Lambda function start AWS Batch to perform the steps to process the object. Place the object data in the DynamoDB table when complete.
D. Trigger an AWS Lambda function to store an initial entry in the DynamoDB table when an object is uploaded to Amazon S3. Use a program running on an Amazon EC2 instance in an Auto Scaling group to poll the index for unprocessed items, and use the program to perform the processing.
B The first use case in this link: https://aws.amazon.com/step-functions/use-cases/
Question #259 (Topic 1)
A company is hosting its static website in an Amazon S3 bucket, which is the origin for Amazon CloudFront. The company has users in the United States, Canada, and Europe and wants to reduce costs. What should a solutions architect recommend?
A. Adjust the CloudFront caching time to live (TTL) from the default to a longer timeframe.
B. Implement CloudFront events with Lambda@Edge to run the website's data processing.
C. Modify the CloudFront price class to include only the locations of the countries that are served.
D. Implement a CloudFront Secure Sockets Layer (SSL) certificate to push security closer to the locations of the countries that are served.
C A. This could be an option, since static content won't change that much. B. It's a static website, there is no processing. C. Sounds about right. https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PriceClass.html D. What does an SSL have to do with reducing costs?
Question #268 (Topic 1)
A company hosts a training site on a fleet of Amazon EC2 instances. The company anticipates that its new course, which consists of dozens of training videos on the site, will be extremely popular when it is released in 1 week. What should a solutions architect do to minimize the anticipated server load?
A. Store the videos in Amazon ElastiCache for Redis. Update the web servers to serve the videos using the ElastiCache API.
B. Store the videos in Amazon Elastic File System (Amazon EFS). Create a user data script for the web servers to mount the EFS volume.
C. Store the videos in an Amazon S3 bucket. Create an Amazon CloudFront distribution with an origin access identity (OAI) of that S3 bucket. Restrict Amazon S3 access to the OAI.
D. Store the videos in an Amazon S3 bucket. Create an AWS Storage Gateway file gateway to access the S3 bucket. Create a user data script for the web servers to mount the file gateway.
C. A: storing the videos in ElastiCache for Redis isn't suitable for large media files. B: how does this help minimize the anticipated server load? C: sounds about right, probably the best option. D: we're not running anything on premises. Any inputs?
Question #274 (Topic 1)
The financial application at a company stores monthly reports in an Amazon S3 bucket. The vice president of finance has mandated that all access to these reports be logged and that any modifications to the log files be detected. Which actions can a solutions architect take to meet these requirements?
A. Use S3 server access logging on the bucket that houses the reports with the read and write data events and log file validation options enabled.
B. Use S3 server access logging on the bucket that houses the reports with the read and write management events and log file validation options enabled.
C. Use AWS CloudTrail to create a new trail. Configure the trail to log read and write data events on the S3 bucket that houses the reports. Log these events to a new bucket, and enable log file validation.
D. Use AWS CloudTrail to create a new trail. Configure the trail to log read and write management events on the S3 bucket that houses the reports. Log these events to a new bucket, and enable log file validation.
C. AWS CloudTrail: configure read and write data events on the S3 bucket (these cover the object-level API calls). S3 server access logging is wrong: server access logging provides detailed records for the requests that are made to a bucket (GET, PUT, DELETE, ...), but it has no data event or log file validation options. https://docs.aws.amazon.com/AmazonS3/latest/dev/LogFormat.html
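Rough boto3 sketch of option C, with hypothetical trail and bucket names: create the trail with log file validation, then scope data events to the reports bucket only.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# New trail that writes to a separate log bucket, with log file integrity validation enabled.
cloudtrail.create_trail(
    Name="finance-reports-trail",        # hypothetical names
    S3BucketName="cloudtrail-logs-example",
    EnableLogFileValidation=True,
)

# Log read and write S3 data events only for the bucket that holds the reports.
cloudtrail.put_event_selectors(
    TrailName="finance-reports-trail",
    EventSelectors=[
        {
            "ReadWriteType": "All",
            "IncludeManagementEvents": False,
            "DataResources": [
                {"Type": "AWS::S3::Object", "Values": ["arn:aws:s3:::finance-reports-bucket/"]}
            ],
        }
    ],
)

cloudtrail.start_logging(Name="finance-reports-trail")
```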
Question #254 (Topic 1)
An application is running on an Amazon EC2 instance and must have millisecond latency when running the workload. The application makes many small reads and writes to the file system, but the file system itself is small. Which Amazon Elastic Block Store (Amazon EBS) volume type should a solutions architect attach to their EC2 instance?
A. Cold HDD (sc1)
B. General Purpose SSD (gp2)
C. Provisioned IOPS SSD (io1)
D. Throughput Optimized HDD (st1)
C : SSD-backed volumes include the highest performance Provisioned IOPS SSD (io2 and io1) for latency-sensitive transactional workloads and General Purpose SSD (gp2) that balance price and performance for a wide variety of transactional data. HDD-backed volumes include Throughput Optimized HDD (st1) for frequently accessed, throughput intensive workloads and the lowest cost Cold HDD (sc1) for less frequently accessed data.
Question #240 (Topic 1)
A company is deploying a web portal. The company wants to ensure that only the web portion of the application is publicly accessible. To accomplish this, the VPC was designed with two public subnets and two private subnets. The application will run on several Amazon EC2 instances in an Auto Scaling group. SSL termination must be offloaded from the EC2 instances. What should a solutions architect do to ensure these requirements are met?
A. Configure the Network Load Balancer in the public subnets. Configure the Auto Scaling group in the private subnets and associate it with the Application Load Balancer.
B. Configure the Network Load Balancer in the public subnets. Configure the Auto Scaling group in the public subnets and associate it with the Application Load Balancer.
C. Configure the Application Load Balancer in the public subnets. Configure the Auto Scaling group in the private subnets and associate it with the Application Load Balancer.
D. Configure the Application Load Balancer in the private subnets. Configure the Auto Scaling group in the private subnets and associate it with the Application Load Balancer.
C since Internet-facing Application Load Balancers (ALB) and Classic ELBs must be provisioned exclusively in public subnets.
Question #263 (Topic 1)
A company needs to comply with a regulatory requirement that states all emails must be stored and archived externally for 7 years. An administrator has created compressed email files on premises and wants a managed service to transfer the files to AWS storage. Which managed service should a solutions architect recommend?
A. Amazon Elastic File System (Amazon EFS)
B. Amazon S3 Glacier
C. AWS Backup
D. AWS Storage Gateway
D
Question #255 (Topic 1)
A solutions architect is designing a multi-Region disaster recovery solution for an application that will provide public API access. The application will use Amazon EC2 instances with a userdata script to load application code and an Amazon RDS for MySQL database. The Recovery Time Objective (RTO) is 3 hours and the Recovery Point Objective (RPO) is 24 hours. Which architecture would meet these requirements at the LOWEST cost?
A. Use an Application Load Balancer for Region failover. Deploy new EC2 instances with the userdata script. Deploy separate RDS instances in each Region.
B. Use Amazon Route 53 for Region failover. Deploy new EC2 instances with the userdata script. Create a read replica of the RDS instance in a backup Region.
C. Use Amazon API Gateway for the public APIs and Region failover. Deploy new EC2 instances with the userdata script. Create a MySQL read replica of the RDS instance in a backup Region.
D. Use Amazon Route 53 for Region failover. Deploy new EC2 instances with the userdata script for APIs, and create a snapshot of the RDS instance daily for a backup. Replicate the snapshot to a backup Region.
D. A: an Application Load Balancer is Regional, so this isn't right (https://aws.amazon.com/elasticloadbalancing/). B: we can use Route 53 for Region failover, but why create a read replica? We need a snapshot. C: sounds fishy, using a read replica again. D: sounds about right; we create a daily snapshot of the RDS instance and replicate the snapshot to a backup Region.
Question #250 (Topic 1)
A company wants to share forensic accounting data that is stored in an Amazon RDS DB instance with an external auditor. The auditor has its own AWS account and requires its own copy of the database. How should the company securely share the database with the auditor?
A. Create a read replica of the database and configure IAM standard database authentication to grant the auditor access.
B. Copy a snapshot of the database to Amazon S3 and assign an IAM role to the auditor to grant access to the object in that bucket.
C. Export the database contents to text files, store the files in Amazon S3, and create a new IAM user for the auditor with access to that bucket.
D. Make an encrypted snapshot of the database, share the snapshot, and allow access to the AWS Key Management Service (AWS KMS) encryption key.
D. A: the question says the auditor needs its own copy of the database, and a read replica won't satisfy that. B: we can't give direct access to the bucket in S3. C: sounds like a lot of work; I doubt anyone is going to audit from text files. D: sounds reasonable. By making and sharing an encrypted snapshot, the auditor will have its own copy of the database.
Question #269 (Topic 1)
A company runs a production application on a fleet of Amazon EC2 instances. The application reads the data from an Amazon SQS queue and processes the messages in parallel. The message volume is unpredictable and often has intermittent traffic. This application should continually process messages without any downtime. Which solution meets these requirements MOST cost-effectively?
A. Use Spot Instances exclusively to handle the maximum capacity required.
B. Use Reserved Instances exclusively to handle the maximum capacity required.
C. Use Reserved Instances for the baseline capacity and use Spot Instances to handle additional capacity.
D. Use Reserved Instances for the baseline capacity and use On-Demand Instances to handle additional capacity.
D. This should be D since the application must "continually process messages without any downtime"; using Spot Instances above the baseline could cause instance termination and thus downtime. This one sounds tricky, but let's review it. A: no, we need to process messages without any downtime. B: it would be a waste to have instances running when there is intermittent traffic. C: could be, but we can't risk Spot Instances. D: sounds about right; even though On-Demand is more expensive, there can't be any downtime.
Question #251 (Topic 1)
A company has an automobile sales website that stores its listings in a database on Amazon RDS. When an automobile is sold, the listing needs to be removed from the website and the data must be sent to multiple target systems. Which design should a solutions architect recommend?
A. Create an AWS Lambda function triggered when the database on Amazon RDS is updated to send the information to an Amazon Simple Queue Service (Amazon SQS) queue for the targets to consume.
B. Create an AWS Lambda function triggered when the database on Amazon RDS is updated to send the information to an Amazon Simple Queue Service (Amazon SQS) FIFO queue for the targets to consume.
C. Subscribe to an RDS event notification and send an Amazon Simple Queue Service (Amazon SQS) queue fanned out to multiple Amazon Simple Notification Service (Amazon SNS) topics. Use AWS Lambda functions to update the targets.
D. Subscribe to an RDS event notification and send an Amazon Simple Notification Service (Amazon SNS) topic fanned out to multiple Amazon Simple Queue Service (Amazon SQS) queues. Use AWS Lambda functions to update the targets.
D makes complete sense. Think about it: you can have sales of different types of cars happening simultaneously. For example, Toyota might have its own queue. Since RDS sends notifications to SNS, it has to be D. :) https://docs.aws.amazon.com/lambda/latest/dg/services-rds.html https://docs.aws.amazon.com/lambda/latest/dg/with-sns.html
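A small boto3 sketch of the SNS-to-SQS fan-out in option D (the ARNs are placeholders); each target system gets its own queue subscribed to the one topic:

```python
import boto3

sns = boto3.client("sns")

topic_arn = "arn:aws:sns:us-east-1:123456789012:listing-sold"        # hypothetical ARNs
target_queue_arns = [
    "arn:aws:sqs:us-east-1:123456789012:inventory-sync",
    "arn:aws:sqs:us-east-1:123456789012:crm-update",
]

# Fan the single sale notification out to one SQS queue per target system.
for queue_arn in target_queue_arns:
    sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)
    # Each queue also needs an access policy allowing this topic to send messages,
    # and a Lambda function can then consume from each queue to update its target.
```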
Question #272 (Topic 1)
A company has a build server that is in an Auto Scaling group and often has multiple Linux instances running. The build server requires consistent and mountable shared NFS storage for jobs and configurations. Which storage option should a solutions architect recommend?
A. Amazon S3
B. Amazon FSx
C. Amazon Elastic Block Store (Amazon EBS)
D. Amazon Elastic File System (Amazon EFS)
D. EFS -> NFS
Question #267 (Topic 1)
A company hosts its static website content from an Amazon S3 bucket in the us-east-1 Region. Content is made available through an Amazon CloudFront origin pointing to that bucket. Cross-Region replication is set to create a second copy of the bucket in the ap-southeast-1 Region. Management wants a solution that provides greater availability for the website. Which combination of actions should a solutions architect take to increase availability? (Choose two.)
A. Add both buckets to the CloudFront origin.
B. Configure failover routing in Amazon Route 53.
C. Create a record in Amazon Route 53 pointing to the replica bucket.
D. Create an additional CloudFront origin pointing to the ap-southeast-1 bucket.
E. Set up a CloudFront origin group with the us-east-1 bucket as the primary and the ap-southeast-1 bucket as the secondary.
D, E. https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/high_availability_origin_failover.html A: wrong, one bucket has been added as an origin already. B: we're not using Route 53. C: we're not using Route 53. D: correct, we have to add the new bucket as an origin. E: we set up a CloudFront origin group with the us-east-1 bucket as the primary and the ap-southeast-1 bucket as the secondary.
Question #241 (Topic 1)
A company is experiencing growth as demand for its product has increased. The company's existing purchasing application is slow when traffic spikes. The application is a monolithic three-tier application that uses synchronous transactions and sometimes sees bottlenecks in the application tier. A solutions architect needs to design a solution that can meet required application response times while accounting for traffic volume spikes. Which solution will meet these requirements?
A. Vertically scale the application instance using a larger Amazon EC2 instance size.
B. Scale the application's persistence layer horizontally by introducing Oracle RAC on AWS.
C. Scale the web and application tiers horizontally using Auto Scaling groups and an Application Load Balancer.
D. Decouple the application and data tiers using Amazon Simple Queue Service (Amazon SQS) with asynchronous AWS Lambda calls.
If I'm not mistaken it's C, since for D, it's using asynchronous AWS Lambda calls and the application uses synchronous transactions.
Question #249 (Topic 1)
A solutions architect is designing a security solution for a company that wants to provide developers with individual AWS accounts through AWS Organizations, while also maintaining standard security controls. Because the individual developers will have AWS account root user-level access to their own accounts, the solutions architect wants to ensure that the mandatory AWS CloudTrail configuration that is applied to new developer accounts is not modified. Which action meets these requirements?
A. Create an IAM policy that prohibits changes to CloudTrail, and attach it to the root user.
B. Create a new trail in CloudTrail from within the developer accounts with the organization trails option enabled.
C. Create a service control policy (SCP) that prohibits changes to CloudTrail, and attach it to the developer accounts.
D. Create a service-linked role for CloudTrail with a policy condition that allows changes only from an Amazon Resource Name (ARN) in the master account.
It's C. https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html
Question #264 (Topic 1)
A company has hired a new cloud engineer who should not have access to an Amazon S3 bucket named CompanyConfidential. The cloud engineer must be able to read from and write to an S3 bucket called AdminTools. Which IAM policy will meet these requirements?
Question #191 (Topic 1)
A company collects temperature, humidity, and atmospheric pressure data in cities across multiple continents. The average volume of data collected per site each day is 500 GB. Each site has a high-speed internet connection. The company's weather forecasting applications are based in a single Region and analyze the data daily. What is the FASTEST way to aggregate data from all of these global sites?
A. Enable Amazon S3 Transfer Acceleration on the destination bucket. Use multipart uploads to directly upload site data to the destination bucket.
B. Upload site data to an Amazon S3 bucket in the closest AWS Region. Use S3 cross-Region replication to copy objects to the destination bucket.
C. Schedule AWS Snowball jobs daily to transfer data to the closest AWS Region. Use S3 cross-Region replication to copy objects to the destination bucket.
D. Upload the data to an Amazon EC2 instance in the closest Region. Store the data in an Amazon EBS volume. Once a day take an EBS snapshot and copy it to the centralized Region. Restore the EBS volume in the centralized Region and run an analysis on the data daily.
A
Question #437 (Topic 1)
A company has thousands of edge devices that collectively generate 1 TB of status alerts each day. Each alert is approximately 2 KB in size. A solutions architect needs to implement a solution to ingest and store the alerts for future analysis. The company wants a highly available solution. However, the company needs to minimize costs and does not want to manage additional infrastructure. Additionally, the company wants to keep 14 days of data available for immediate analysis and archive any data older than 14 days. What is the MOST operationally efficient solution that meets these requirements?
A. Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the alerts to an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14 days.
B. Launch Amazon EC2 instances across two Availability Zones and place them behind an Elastic Load Balancer to ingest the alerts. Create a script on the EC2 instances that will store the alerts in an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14 days.
C. Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the alerts to an Amazon Elasticsearch Service (Amazon ES) cluster. Set up the Amazon ES cluster to take manual snapshots every day and delete data from the cluster that is older than 14 days.
D. Create an Amazon Simple Queue Service (Amazon SQS) standard queue to ingest the alerts, and set the message retention period to 14 days. Configure consumers to poll the SQS queue, check the age of the message, and analyze the message data as needed. If the message is 14 days old, the consumer should copy the message to an Amazon S3 bucket and delete the message from the SQS queue.
A "solution to ingest and store the alerts for future analysis" so B and D is out. The Answer is A because any data older than 14 days must be archived not deleted.
Question #410 (Topic 1)
A company is deploying an application in three AWS Regions using an Application Load Balancer. Amazon Route 53 will be used to distribute traffic between these Regions. Which Route 53 configuration should a solutions architect use to provide the MOST high-performing experience?
A. Create an A record with a latency policy.
B. Create an A record with a geolocation policy.
C. Create a CNAME record with a failover policy.
D. Create a CNAME record with a geoproximity policy.
A. I think A is the right answer: having the website in three Regions can also mean a website on a cluster of web servers, and from a high-performance perspective a latency policy is best. Compare this reference question, whose answer is also an A record: "A customer is hosting their company website on a cluster of web servers that are behind a public-facing load balancer. The customer also uses Amazon Route 53 to manage their public DNS. How should the customer configure the DNS zone apex record to point to the load balancer? Create an A record pointing to the IP address of the load balancer / Create a CNAME record pointing to the load balancer DNS name / Create a CNAME record aliased to the load balancer DNS name / Create an A record aliased to the load balancer DNS name." https://jayendrapatil.com/tag/route-53/
Question #237 (Topic 1)
A company runs a high performance computing (HPC) workload on AWS. The workload requires low-latency network performance and high network throughput with tightly coupled node-to-node communication. The Amazon EC2 instances are properly sized for compute and storage capacity, and are launched using default options. What should a solutions architect propose to improve the performance of the workload?
A. Choose a cluster placement group while launching Amazon EC2 instances.
B. Choose dedicated instance tenancy while launching Amazon EC2 instances.
C. Choose an Elastic Inference accelerator while launching Amazon EC2 instances.
D. Choose the required capacity reservation while launching Amazon EC2 instances.
A paraphrase "with tightly coupled node-to-node communication" >> needs cluster placement group --> Answer (A).
Question #451 (Topic 1)
A company is migrating its applications to AWS. Currently, applications that run on premises generate hundreds of terabytes of data that is stored on a shared file system. The company is running an analytics application in the cloud that runs hourly to generate insights from this data. The company needs a solution to handle the ongoing data transfer between the on-premises shared file system and Amazon S3. The solution also must be able to handle occasional interruptions in internet connectivity. Which solution should the company use for the data transfer to meet these requirements?
A. AWS DataSync
B. AWS Migration Hub
C. AWS Snowball Edge Storage Optimized
D. AWS Transfer for SFTP
A Keyword: a solution to handle the ongoing data transfer between the on-premises shared file system and Amazon S3. https://aws.amazon.com/datasync/?whats-new-cards.sort-by=item.additionalFields.postDateTime&whats-new-cards.sort-order=desc What happens if an AWS DataSync task is interrupted? A: If a task is interrupted, for instance, if the network connection goes down or the AWS DataSync agent is restarted, the next run of the task will transfer missing files, and the data will be complete and consistent at the end of this run. When do I use AWS DataSync and when do I use AWS Snowball Edge? A: AWS DataSync is ideal for online data transfers. You can use DataSync to migrate active data to AWS, transfer data to the cloud for analysis and processing, archive data to free up on-premises storage capacity, or replicate data to AWS for business continuity. AWS Snowball Edge is ideal for offline data transfers, for customers who are bandwidth constrained, or transferring data from remote, disconnected, or austere environments.
Question #449 (Topic 1)
A solutions architect must provide a fully managed replacement for an on-premises solution that allows employees and partners to exchange files. The solution must be easily accessible to employees connecting from on-premises systems, remote employees, and external partners. Which solution meets these requirements?
A. Use AWS Transfer for SFTP to transfer files into and out of Amazon S3.
B. Use AWS Snowball Edge for local storage and large-scale data transfers.
C. Use Amazon FSx to store and transfer files to make them available remotely.
D. Use AWS Storage Gateway to create a volume gateway to store and transfer files to Amazon S3.
A The AWS Transfer Family provides fully managed support for file transfers directly into and out of Amazon S3 or Amazon EFS. With support for Secure File Transfer Protocol (SFTP), File Transfer Protocol over SSL (FTPS), and File Transfer Protocol (FTP), the AWS Transfer Family helps you seamlessly migrate your file transfer workflows to AWS by integrating with existing authentication systems, and providing DNS routing with Amazon Route 53 so nothing changes for your customers and partners, or their applications. The AWS Transfer Family lets you preserve your existing data exchange processes while taking advantage of the superior economics, data durability, and security of Amazon S3 or Amazon EFS.
Question #436 (Topic 1)
A company manages its own Amazon EC2 instances that run MySQL databases. The company is manually managing replication and scaling as demand increases or decreases. The company needs a new solution that simplifies the process of adding or removing compute capacity to or from its database tier as needed. The solution also must offer improved performance, scaling, and durability with minimal effort from operations. Which solution meets these requirements?
A. Migrate the databases to Amazon Aurora Serverless for Aurora MySQL.
B. Migrate the databases to Amazon Aurora Serverless for Aurora PostgreSQL.
C. Combine the databases into one larger MySQL database. Run the larger database on larger EC2 instances.
D. Create an EC2 Auto Scaling group for the database tier. Migrate the existing databases to the new environment.
A. Minimal effort to manage the DB, and the engine type (MySQL) matches.
Question #210 (Topic 1)
A solutions architect is designing the storage architecture for a new web application used for storing and viewing engineering drawings. All application components will be deployed on the AWS infrastructure. The application design must support caching to minimize the amount of time that users wait for the engineering drawings to load. The application must be able to store petabytes of data. Which combination of storage and caching should the solutions architect use?
A. Amazon S3 with Amazon CloudFront
B. Amazon S3 Glacier with Amazon ElastiCache
C. Amazon Elastic Block Store (Amazon EBS) volumes with Amazon CloudFront
D. AWS Storage Gateway with Amazon ElastiCache
A is correct
Question #227 (Topic 1)
An application running on an Amazon EC2 instance needs to access an Amazon DynamoDB table. Both the EC2 instance and the DynamoDB table are in the same AWS account. A solutions architect must configure the necessary permissions. Which solution will allow least privilege access to the DynamoDB table from the EC2 instance?
A. Create an IAM role with the appropriate policy to allow access to the DynamoDB table. Create an instance profile to assign this IAM role to the EC2 instance.
B. Create an IAM role with the appropriate policy to allow access to the DynamoDB table. Add the EC2 instance to the trust relationship policy document to allow it to assume the role.
C. Create an IAM user with the appropriate policy to allow access to the DynamoDB table. Store the credentials in an Amazon S3 bucket and read them from within the application code directly.
D. Create an IAM user with the appropriate policy to allow access to the DynamoDB table. Ensure that the application stores the IAM credentials securely on local storage and uses them to make the DynamoDB calls.
A is correct Roles are designed to be "assumed" by other principals which do define "who am I?", such as users, Amazon services, and EC2 instances. An instance profile, on the other hand, defines "who am I?" Just like an IAM user represents a person, an instance profile represents EC2 instances. The only permissions an EC2 instance profile has is the power to assume a role. So the EC2 instance runs under the EC2 instance profile, defining "who" the instance is. It then "assumes" the IAM role, which ultimately gives it any real power. https://medium.com/devops-dudes/the-difference-between-an-aws-role-and-an-instance-profile-ae81abd700d#:~:text=Roles%20are%20designed%20to%20be,instance%20profile%20represents%20EC2%20instances.
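To make the role/instance-profile distinction concrete, here is a minimal boto3 sketch of option A (the role, profile, policy, and table names are made up); the instance profile is what gets attached to the EC2 instance, and the role it contains is what grants the least-privilege DynamoDB access:

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy so EC2 can assume the role.
trust = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
iam.create_role(RoleName="app-dynamodb-role", AssumeRolePolicyDocument=json.dumps(trust))

# Least-privilege inline policy scoped to the one table the application needs.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/AppTable",
    }],
}
iam.put_role_policy(RoleName="app-dynamodb-role", PolicyName="app-table-access",
                    PolicyDocument=json.dumps(policy))

# The instance profile is what actually gets attached to the EC2 instance.
iam.create_instance_profile(InstanceProfileName="app-dynamodb-profile")
iam.add_role_to_instance_profile(InstanceProfileName="app-dynamodb-profile",
                                 RoleName="app-dynamodb-role")
```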
Question #207 (Topic 1)
A company is migrating a NoSQL database cluster to Amazon EC2. The database automatically replicates data to maintain at least three copies of the data. I/O throughput of the servers is the highest priority. Which instance type should a solutions architect recommend for the migration?
A. Storage optimized instances with instance store
B. Burstable general purpose instances with an Amazon Elastic Block Store (Amazon EBS) volume
C. Memory optimized instances with Amazon Elastic Block Store (Amazon EBS) optimization enabled
D. Compute optimized instances with Amazon Elastic Block Store (Amazon EBS) optimization enabled
A is correct, though I am a bit sceptical about instance store volumes; but since we keep three copies of the data, we always have the option to recover from a backup.
Question #415 (Topic 1)
A disaster response team is using drones to collect images of recent storm damage. The response team's laptops lack the storage and compute capacity to transfer the images and process the data. While the team has Amazon EC2 instances for processing and Amazon S3 buckets for storage, network connectivity is intermittent and unreliable. The images need to be processed to evaluate the damage. What should a solutions architect recommend?
A. Use AWS Snowball Edge devices to process and store the images.
B. Upload the images to Amazon Simple Queue Service (Amazon SQS) during intermittent connectivity to EC2 instances.
C. Configure Amazon Kinesis Data Firehose to create multiple delivery streams aimed separately at the S3 buckets for storage and the EC2 instances for processing the images.
D. Use AWS Storage Gateway pre-installed on a hardware appliance to cache the images locally for Amazon S3 to process the images when connectivity becomes available.
A is the answer for sure. B is wrong for sure, since "network connectivity is intermittent and unreliable", so uploading to SQS "during intermittent connectivity" is a recipe for disaster. C is wrong, as "multiple delivery streams" won't cut it either when "network connectivity is intermittent and unreliable". D is wrong, as you don't "pre-install" AWS Storage Gateway.
Question #421 (Topic 1)
A company is storing sensitive user information in an Amazon S3 bucket. The company wants to provide secure access to this bucket from the application tier running on Amazon EC2 instances inside a VPC. Which combination of steps should a solutions architect take to accomplish this? (Choose two.)
A. Configure a VPC gateway endpoint for Amazon S3 within the VPC.
B. Create a bucket policy to make the objects in the S3 bucket public.
C. Create a bucket policy that limits access to only the application tier running in the VPC.
D. Create an IAM user with an S3 access policy and copy the IAM credentials to the EC2 instance.
E. Create a NAT instance and have the EC2 instances use the NAT instance to access the S3 bucket.
A,C
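Option C (the bucket policy) can be paired with the gateway endpoint from option A roughly like this; a boto3 sketch with placeholder bucket and endpoint IDs that denies any request not arriving through the VPC endpoint:

```python
import json
import boto3

s3 = boto3.client("s3")

# Deny any access to the sensitive bucket unless the request arrives through the
# VPC gateway endpoint created for the application tier (the endpoint ID is a placeholder).
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowOnlyThroughVpcEndpoint",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::sensitive-user-data",
            "arn:aws:s3:::sensitive-user-data/*",
        ],
        "Condition": {"StringNotEquals": {"aws:sourceVpce": "vpce-0123456789abcdef0"}},
    }],
}
s3.put_bucket_policy(Bucket="sensitive-user-data", Policy=json.dumps(policy))
```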
Question #457 (Topic 1)
A company is migrating a large, mission-critical database to AWS. A solutions architect has decided to use an Amazon RDS for MySQL Multi-AZ DB instance that is deployed with 80,000 Provisioned IOPS for storage. The solutions architect is using AWS Database Migration Service (AWS DMS) to perform the data migration. The migration is taking longer than expected, and the company wants to speed up the process. The company's network team has ruled out bandwidth as a limiting factor. Which actions should the solutions architect take to speed up the migration? (Choose two.)
A. Disable Multi-AZ on the target DB instance.
B. Create a new DMS instance that has a larger instance size.
C. Turn off logging on the target DB instance until the initial load is complete.
D. Restart the DMS task on a new DMS instance with transfer acceleration enabled.
E. Change the storage type on the target DB instance to Amazon Elastic Block Store (Amazon EBS) General Purpose SSD (gp2).
A, C. Why A and C? "Turn off backups and transaction logging: when migrating to an Amazon RDS database, it's a good idea to turn off backups and Multi-AZ on the target until you're ready to cut over. Similarly, when migrating to systems other than Amazon RDS, turning off any logging on the target until after cutover is usually a good idea." https://docs.aws.amazon.com/dms/latest/userguide/CHAP_BestPractices.html#CHAP_BestPractices.Performance Why not B? The question establishes that "a solutions architect has decided to use an Amazon RDS for MySQL Multi-AZ DB instance that is deployed with 80,000 Provisioned IOPS for storage." Checking the available DMS replication instance types (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReplicationInstance.Types.html#CHAP_ReplicationInstance.Types.Deciding), we can take dms.r5.24xlarge, which can give us a maximum of 80,000 IOPS based on this other link: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-optimized.html
Question #416 (Topic 1)
A company has a multi-tier application deployed on several Amazon EC2 instances in an Auto Scaling group. An Amazon RDS for Oracle instance is the application's data layer that uses Oracle-specific PL/SQL functions. Traffic to the application has been steadily increasing. This is causing the EC2 instances to become overloaded and the RDS instance to run out of storage. The Auto Scaling group does not have any scaling metrics and defines the minimum healthy instance count only. The company predicts that traffic will continue to increase at a steady but unpredictable rate before leveling off. What should a solutions architect do to ensure the system can automatically scale for the increased traffic? (Choose two.)
A. Configure storage Auto Scaling on the RDS for Oracle instance.
B. Migrate the database to Amazon Aurora to use Auto Scaling storage.
C. Configure an alarm on the RDS for Oracle instance for low free storage space.
D. Configure the Auto Scaling group to use the average CPU as the scaling metric.
E. Configure the Auto Scaling group to use the average free memory as the scaling metric.
A,D Amazon EC2 Auto Scaling helps you ensure that you have the correct number of Amazon EC2 instances available to handle the load for your application. Amazon provides some default CloudWatch Metrics like CPUUtilization or DiskWriteOps but to scale based on Memory Utilization we are forced to create a custom metric.
Question #201 (Topic 1)
A company is designing a new web service that will run on Amazon EC2 instances behind an Elastic Load Balancer. However, many of the web service clients can only reach IP addresses whitelisted on their firewalls. What should a solutions architect recommend to meet the clients' needs?
A. A Network Load Balancer with an associated Elastic IP address.
B. An Application Load Balancer with an associated Elastic IP address.
C. An A record in an Amazon Route 53 hosted zone pointing to an Elastic IP address.
D. An EC2 instance with a public IP address running as a proxy in front of the load balancer.
A. You can't attach an EIP to an ALB. Network Load Balancer automatically provides a static IP per Availability Zone (subnet) that can be used by applications as the front-end IP of the load balancer. Network Load Balancer also allows you the option to assign an Elastic IP per Availability Zone (subnet), thereby providing your own fixed IP.
Question #448 (Topic 1)
A company is using Amazon Redshift for analytics and to generate customer reports. The company recently acquired 50 TB of additional customer demographic data. The data is stored in .csv files in Amazon S3. The company needs a solution that joins the data and visualizes the results with the least possible cost and effort. What should a solutions architect recommend to meet these requirements?
A. Use Amazon Redshift Spectrum to query the data in Amazon S3 directly and join that data with the existing data in Amazon Redshift. Use Amazon QuickSight to build the visualizations.
B. Use Amazon Athena to query the data in Amazon S3. Use Amazon QuickSight to join the data from Athena with the existing data in Amazon Redshift and to build the visualizations.
C. Increase the size of the Amazon Redshift cluster, and load the data from Amazon S3. Use Amazon EMR Notebooks to query the data and build the visualizations in Amazon Redshift.
D. Export the data from the Amazon Redshift cluster into Apache Parquet files in Amazon S3. Use Amazon Elasticsearch Service (Amazon ES) to query the data. Use Kibana to visualize the results.
A. https://docs.aws.amazon.com/redshift/latest/dg/c-using-spectrum.html https://searchaws.techtarget.com/definition/Amazon-Redshift-Spectrum Redshift Spectrum vs. Athena Amazon Athena is similar to Redshift Spectrum, though the two services typically address different needs. An analyst that already works with Redshift will benefit most from Redshift Spectrum because it can quickly access data in the cluster and extend out to infrequently accessed, external tables in S3. It's also better suited for fast, complex queries on multiple data sets. Alternatively, Athena is a simpler way to run interactive, ad hoc queries on data stored in S3. It doesn't require any cluster management, and an analyst only needs to define a table to make a standard SQL query.
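As a rough illustration of option A, here is a sketch using the Redshift Data API from boto3. The cluster, database, IAM role, and table names are all made up, and it assumes the .csv files are already described by an external table (for example in the AWS Glue Data Catalog).

import boto3

rsd = boto3.client("redshift-data")

# One-time setup: point an external schema at the Glue Data Catalog database
# that describes the .csv files in S3 (all identifiers are placeholders).
rsd.execute_statement(
    ClusterIdentifier="analytics-cluster",
    Database="dev",
    DbUser="analyst",
    Sql="CREATE EXTERNAL SCHEMA IF NOT EXISTS demographics "
        "FROM DATA CATALOG DATABASE 'demographics_db' "
        "IAM_ROLE 'arn:aws:iam::123456789012:role/SpectrumRole';",
)

# Join the external S3 data with an existing Redshift table; QuickSight can
# then visualize the result set.
rsd.execute_statement(
    ClusterIdentifier="analytics-cluster",
    Database="dev",
    DbUser="analyst",
    Sql="SELECT c.customer_id, c.total_spend, d.age_band "
        "FROM customers c "
        "JOIN demographics.customer_demographics d ON c.customer_id = d.customer_id;",
)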
Question #232Topic 1 A company receives inconsistent service from its data center provider because the company is headquartered in an area affected by natural disasters. The company is not ready to fully migrate to the AWS Cloud, but it wants a failover environment on AWS in case the on-premises data center fails. The company runs web servers that connect to external vendors. The data available on AWS and on premises must be uniform. Which solution should a solutions architect recommend that has the LEAST amount of downtime? A. Configure an Amazon Route 53 failover record. Run application servers on Amazon EC2 instances behind an Application Load Balancer in an Auto Scaling group. Set up AWS Storage Gateway with stored volumes to back up data to Amazon S3. B. Configure an Amazon Route 53 failover record. Execute an AWS CloudFormation template from a script to create Amazon EC2 instances behind an Application Load Balancer. Set up AWS Storage Gateway with stored volumes to back up data to Amazon S3. C. Configure an Amazon Route 53 failover record. Set up an AWS Direct Connect connection between a VPC and the data center. Run application servers on Amazon EC2 in an Auto Scaling group. Run an AWS Lambda function to execute an AWS CloudFormation template to create an Application Load Balancer. D. Configure an Amazon Route 53 failover record. Run an AWS Lambda function to execute an AWS CloudFormation template to launch two Amazon EC2 instances. Set up AWS Storage Gateway with stored volumes to back up data to Amazon S3. Set up an AWS Direct Connect connection between a VPC and the data center.
A. Configure an Amazon Route 53 failover record. Run application servers on Amazon EC2 instances behind an Application Load Balancer in an Auto Scaling group. Set up AWS Storage Gateway with stored volumes to back up data to Amazon S3.
Question #419Topic 1 A company has an Amazon S3 bucket that contains mission-critical data. The company wants to ensure this data is protected from accidental deletion. The data should still be accessible, and a user should be able to delete the data intentionally.Which combination of steps should a solutions architect take to accomplish this? (Choose two.) A. Enable versioning on the S3 bucket. B. Enable MFA Delete on the S3 bucket. C. Create a bucket policy on the S3 bucket. D. Enable default encryption on the S3 bucket. E. Create a lifecycle policy for the objects in the S3 bucket.
A. Enable versioning on the S3 bucket. B. Enable MFA Delete on the S3 bucket.
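For completeness, a minimal boto3 sketch of A and B (bucket name, account ID, and MFA serial are placeholders). Note that MFA Delete has to be enabled by the bucket owner's root user with a current MFA token, and only via the API/CLI.

import boto3

s3 = boto3.client("s3")

# A + B: turn on versioning and MFA Delete in one call. The MFA argument is the
# device serial followed by the current token; root credentials are required.
s3.put_bucket_versioning(
    Bucket="mission-critical-data",
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
    MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",
)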
Question #418Topic 1 A company maintains a searchable repository of items on its website. The data is stored in an Amazon RDS for MySQL database table that contains over 10 million rows. The database has 2 TB of General Purpose SSD (gp2) storage. There are millions of updates against this data every day through the company's website. The company has noticed some operations are taking 10 seconds or longer and has determined that the database storage performance is the bottleneck. Which solution addresses the performance issue? A. Change the storage type to Provisioned IOPS SSD (io1). B. Change the instance to a memory-optimized instance class. C. Change the instance to a burstable performance DB instance class. D. Enable Multi-AZ RDS read replicas with MySQL native asynchronous replication.
A. This is a case of an I/O-intensive operation. From "Getting the best performance from Amazon RDS Provisioned IOPS SSD storage": if your workload is I/O constrained, using Provisioned IOPS SSD storage can increase the number of I/O requests that the system can process concurrently. Increased concurrency allows for decreased latency because I/O requests spend less time in a queue. Decreased latency allows for faster database commits, which improves response time and allows for higher database throughput. I agree with A, but I'd like to focus on why it's not B. My thought is that with 10 million rows and millions of updates a day, even a memory-optimized instance would not help much: there is far too much data to keep in memory, so data still has to be pulled from and pushed to storage, and the question states that storage performance is the bottleneck. Anyone with a better explanation?
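A quick boto3 sketch of option A (the identifier and IOPS value are placeholders, sized for illustration only):

import boto3

rds = boto3.client("rds")

# Switch the 2 TB gp2 volume to Provisioned IOPS SSD (io1).
rds.modify_db_instance(
    DBInstanceIdentifier="catalog-mysql",   # placeholder identifier
    StorageType="io1",
    AllocatedStorage=2048,                  # keep the existing 2 TB
    Iops=20000,                             # provisioned IOPS for the update-heavy workload
    ApplyImmediately=True,
)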
Question #435Topic 1 A company is using Amazon Route 53 latency-based routing to route requests to its UDP-based application for users around the world. The application is hosted on redundant servers in the company's on-premises data centers in the United States, Asia, and Europe. The company's compliance requirements state that the application must be hosted on premises. The company wants to improve the performance and availability of the application. What should a solutions architect do to meet these requirements? A. Configure three Network Load Balancers (NLBs) in the three AWS Regions to address the on-premises endpoints. Create an accelerator by using AWS Global Accelerator, and register the NLBs as its endpoints. Provide access to the application by using a CNAME that points to the accelerator DNS. B. Configure three Application Load Balancers (ALBs) in the three AWS Regions to address the on-premises endpoints. Create an accelerator by using AWS Global Accelerator, and register the ALBs as its endpoints. Provide access to the application by using a CNAME that points to the accelerator DNS. C. Configure three Network Load Balancers (NLBs) in the three AWS Regions to address the on-premises endpoints. In Route 53, create a latency-based record that points to the three NLBs, and use it as an origin for an Amazon CloudFront distribution. Provide access to the application by using a CNAME that points to the CloudFront DNS. D. Configure three Application Load Balancers (ALBs) in the three AWS Regions to address the on-premises endpoints. In Route 53, create a latency-based record that points to the three ALBs, and use it as an origin for an Amazon CloudFront distribution. Provide access to the application by using a CNAME that points to the CloudFront DNS.
A: AWS Global Accelerator and Amazon CloudFront are separate services that use the AWS global network and its edge locations around the world. CloudFront improves performance for both cacheable content (such as images and videos) and dynamic content (such as API acceleration and dynamic site delivery). Global Accelerator improves performance for a wide range of applications over TCP or UDP by proxying packets at the edge to applications running in one or more AWS Regions. Global Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over IP, as well as for HTTP use cases that specifically require static IP addresses or deterministic, fast regional failover. Both services integrate with AWS Shield for DDoS protection. CloudFront only proxies HTTP/HTTPS, so it cannot carry this UDP traffic, which rules out C and D; an ALB would not work for UDP either. NLB plus AWS Global Accelerator makes sense.
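To make option A concrete, here is a hedged boto3 sketch (ARNs, ports, and Regions are placeholders). The Global Accelerator control-plane API is only available in us-west-2, but the NLB endpoints can be in any Region.

import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(Name="udp-app-accelerator", Enabled=True)
listener = ga.create_listener(
    AcceleratorArn=accelerator["Accelerator"]["AcceleratorArn"],
    Protocol="UDP",
    PortRanges=[{"FromPort": 5000, "ToPort": 5000}],
)

# One endpoint group per Region, each registering that Region's NLB
# (which in turn targets the on-premises servers by IP).
nlbs = {
    "us-east-1": "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/us-nlb/abc",
    "eu-west-1": "arn:aws:elasticloadbalancing:eu-west-1:111122223333:loadbalancer/net/eu-nlb/def",
}
for region, nlb_arn in nlbs.items():
    ga.create_endpoint_group(
        ListenerArn=listener["Listener"]["ListenerArn"],
        EndpointGroupRegion=region,
        EndpointConfigurations=[{"EndpointId": nlb_arn, "Weight": 128}],
    )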
Question #443Topic 1 A company is deploying an application that processes streaming data in near-real time. The company plans to use Amazon EC2 instances for the workload. The network architecture must be configurable to provide the lowest possible latency between nodes.Which combination of network solutions will meet these requirements? (Choose two.) A. Enable and configure enhanced networking on each EC2 instance. B. Group the EC2 instances in separate accounts. C. Run the EC2 instances in a cluster placement group. D. Attach multiple elastic network interfaces to each EC2 instance. E. Use Amazon Elastic Block Store (Amazon EBS) optimized instance types.
AC. A:https://docs.amazonaws.cn/en_us/AWSEC2/latest/UserGuide/enhanced-networking.html ...Enhanced networking provides higher bandwidth, higher packet per second (PPS) performance, and consistently lower inter-instance latencies. C:https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html, Cluster - packs instances close together inside an Availability Zone. This strategy enables workloads to achieve the low-latency network performance necessary for tightly-coupled node-to-node communication that is typical of HPC applications.
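A small boto3 sketch of A and C together (the AMI ID and instance type are placeholders; any current-generation ENA-capable type would do):

import boto3

ec2 = boto3.client("ec2")

# C: a cluster placement group packs instances close together for low latency.
ec2.create_placement_group(GroupName="stream-cluster", Strategy="cluster")

# A: pick an instance type with enhanced networking (ENA) and launch into the group.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",        # placeholder AMI
    InstanceType="c5n.9xlarge",             # ENA-enabled, high network throughput
    MinCount=2,
    MaxCount=2,
    Placement={"GroupName": "stream-cluster"},
)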
Question #228Topic 1 A solutions architect is designing a solution that involves orchestrating a series of Amazon Elastic Container Service (Amazon ECS) task types running on Amazon EC2 instances that are part of an ECS cluster. The output and state data for all tasks needs to be stored. The amount of data output by each task is approximately 10 MB, and there could be hundreds of tasks running at a time. The system should be optimized for high-frequency reading and writing. As old outputs are archived and deleted, the storage size is not expected to exceed 1 TB. Which storage solution should the solutions architect recommend? A. An Amazon DynamoDB table accessible by all ECS cluster instances. B. An Amazon Elastic File System (Amazon EFS) with Provisioned Throughput mode. C. An Amazon Elastic File System (Amazon EFS) file system with Bursting Throughput mode. D. An Amazon Elastic Block Store (Amazon EBS) volume mounted to the ECS cluster instances.
Agree with B. With the Bursting Throughput mode, which is the default mode, the amount of throughput scales as your file system grows. So the more you store, the more throughput is available to you. Using the bursting throughput mode does not incur any additional charges, and you have a baseline rate of 50 KB/s per GB of throughput that comes included with the price you pay for your EFS standard storage. Provisioned Throughput allows you to provision throughput above the allowance that Bursting mode would grant based on your file system size. So if your file system is relatively small but its use case requires a high throughput rate, the default bursting throughput may not be able to process your requests quickly enough. In this instance, you would need to use Provisioned Throughput. However, this option does incur additional charges: you pay for any throughput provisioned above what the standard bursting allowance would provide.
Question #233Topic 1 A company has three VPCs named Development, Testing, and Production in the us-east-1 Region. The three VPCs need to be connected to an on-premises data center and are designed to be separate to maintain security and prevent any resource sharing. A solutions architect needs to find a scalable and secure solution.What should the solutions architect recommend? A. Create an AWS Direct Connect connection and a VPN connection for each VPC to connect back to the data center. B. Create VPC peers from all the VPCs to the Production VPC. Use an AWS Direct Connect connection from the Production VPC back to the data center. C. Connect VPN connections from all the VPCs to a VPN in the Production VPC. Use a VPN connection from the Production VPC back to the data center. D. Create a new VPC called Network. Within the Network VPC, create an AWS Transit Gateway with an AWS Direct Connect connection back to the data center. Attach all the other VPCs to the Network VPC.
Ans: A for sure. Check this link: https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/aws-direct-connect-vpn.html
Question #223Topic 1 A solutions architect is designing a VPC with public and private subnets. The VPC and subnets use IPv4 CIDR blocks. There is one public subnet and one private subnet in each of three Availability Zones (AZs) for high availability. An internet gateway is used to provide internet access for the public subnets. The private subnets require access to the internet to allow Amazon EC2 instances to download software updates.What should the solutions architect do to enable internet access for the private subnets? A. Create three NAT gateways, one for each public subnet in each AZ. Create a private route table for each AZ that forwards non-VPC traffic to the NAT gateway in its AZ. B. Create three NAT instances, one for each private subnet in each AZ. Create a private route table for each AZ that forwards non-VPC traffic to the NAT instance in its AZ. C. Create a second internet gateway on one of the private subnets. Update the route table for the private subnets that forward non-VPC traffic to the private internet gateway. D. Create an egress-only internet gateway on one of the public subnets. Update the route table for the private subnets that forward non-VPC traffic to the egress- only internet gateway.
Ans = A
1. NAT gateways connect private subnets to the internet.
2. NAT gateways are highly available.
3. A NAT gateway is a highly available, AWS managed service that makes it easy to connect to the internet from instances within a private subnet in an Amazon Virtual Private Cloud (Amazon VPC). Previously, you needed to launch a NAT instance to enable NAT for instances in a private subnet.
4. NAT gateways reside in public subnets: create the NAT gateway in a public subnet, then point the private subnet's route table at it (a short sketch follows below).
5. Reference: https://aws.amazon.com/about-aws/whats-new/2018/03/introducing-amazon-vpc-nat-gateway-in-the-aws-govcloud-us-region/#:~:text=NAT%20Gateway%20is%20a%20highly,instances%20in%20a%20private%20subnet. https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-comparison.html
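Here is the sketch mentioned in point 4, for one AZ (subnet and route table IDs are placeholders; repeat per AZ as answer A describes):

import boto3

ec2 = boto3.client("ec2")

# Create the NAT gateway in the AZ's public subnet, backed by an Elastic IP.
eip = ec2.allocate_address(Domain="vpc")
natgw = ec2.create_nat_gateway(
    SubnetId="subnet-public-az1",           # placeholder public subnet
    AllocationId=eip["AllocationId"],
)
ec2.get_waiter("nat_gateway_available").wait(
    NatGatewayIds=[natgw["NatGateway"]["NatGatewayId"]]
)

# Send non-VPC traffic from that AZ's private route table to the NAT gateway.
ec2.create_route(
    RouteTableId="rtb-private-az1",         # placeholder private route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=natgw["NatGateway"]["NatGatewayId"],
)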
Question #221Topic 1 A company hosts an online shopping application that stores all orders in an Amazon RDS for PostgreSQL Single-AZ DB instance. Management wants to eliminate single points of failure and has asked a solutions architect to recommend an approach to minimize database downtime without requiring any changes to the application code.Which solution meets these requirements? A. Convert the existing database instance to a Multi-AZ deployment by modifying the database instance and specifying the Multi-AZ option. B. Create a new RDS Multi-AZ deployment. Take a snapshot of the current RDS instance and restore the new Multi-AZ deployment with the snapshot. C. Create a read-only replica of the PostgreSQL database in another Availability Zone. Use Amazon Route 53 weighted record sets to distribute requests across the databases. D. Place the RDS for PostgreSQL database in an Amazon EC2 Auto Scaling group with a minimum group size of two. Use Amazon Route 53 weighted record sets to distribute requests across instances.
Ans A https://aws.amazon.com/rds/features/multi-az/ To convert an existing Single-AZ DB Instance to a Multi-AZ deployment, use the "Modify" option corresponding to your DB Instance in the AWS Management Console.
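The same conversion as a minimal boto3 sketch (the instance identifier is a placeholder); no application changes are needed because the database endpoint stays the same.

import boto3

rds = boto3.client("rds")

# Convert the existing Single-AZ instance to Multi-AZ in place.
rds.modify_db_instance(
    DBInstanceIdentifier="orders-postgres",  # placeholder identifier
    MultiAZ=True,
    ApplyImmediately=True,                   # or let it apply in the next maintenance window
)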
Question #217Topic 1 A company uses Amazon Redshift for its data warehouse. The company wants to ensure high durability for its data in case of any component failure.What should a solutions architect recommend? A. Enable concurrency scaling. B. Enable cross-Region snapshots. C. Increase the data retention period. D. Deploy Amazon Redshift in Multi-AZ.
Ans B, enable cross region snapshots. That will improve durability. Multi-AZ is not supported with RedShift. https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-redshift-improves-performance-of-inter-region-snapshot-transfers/ Performance enhancements have been made that allow Amazon Redshift to copy snapshots across regions much faster, allowing customers to support much more aggressive Recovery Time Objective (RTO) and Recovery Point Objective (RPO) Disaster Recovery (DR) policies
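For reference, a minimal boto3 sketch of enabling cross-Region snapshot copy (cluster name, Regions, and retention period are placeholders):

import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

# Copy the cluster's snapshots to a second Region for durability/DR.
redshift.enable_snapshot_copy(
    ClusterIdentifier="analytics-cluster",   # placeholder identifier
    DestinationRegion="us-west-2",
    RetentionPeriod=7,                       # days to keep copied snapshots
)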
Question #405Topic 1 A media company has an application that tracks user clicks on its websites and performs analytics to provide near-real time recommendations. The application has a fleet of Amazon EC2 instances that receive data from the websites and send the data to an Amazon RDS DB instance. Another fleet of EC2 instances hosts the portion of the application that is continuously checking changes in the database and executing SQL queries to provide recommendations. Management has requested a redesign to decouple the infrastructure. The solution must ensure that data analysts are writing SQL to analyze the data only. No data can be lost during the deployment. What should a solutions architect recommend? A. Use Amazon Kinesis Data Streams to capture the data from the websites, Kinesis Data Firehose to persist the data on Amazon S3, and Amazon Athena to query the data. B. Use Amazon Kinesis Data Streams to capture the data from the websites. Kinesis Data Analytics to query the data, and Kinesis Data Firehose to persist the data on Amazon S3. C. Use Amazon Simple Queue Service (Amazon SQS) to capture the data from the websites, keep the fleet of EC2 instances, and change to a bigger instance type in the Auto Scaling group configuration. D. Use Amazon Simple Notification Service (Amazon SNS) to receive data from the websites and proxy the messages to AWS Lambda functions that execute the queries and persist the data. Change Amazon RDS to Amazon Aurora Serverless to persist the data.
Ans. B - addresses all points: real-time clickstream data, serverless, and the data is persisted. C doesn't address the data analysis part, isn't serverless, and doesn't say how the data gets persisted. A and D are not appropriate.
Question #402Topic 1 A company is preparing to store confidential data in Amazon S3. For compliance reasons, the data must be encrypted at rest. Encryption key usage must be logged for auditing purposes. Keys must be rotated every year.Which solution meets these requirements and is the MOST operationally efficient? A. Server-side encryption with customer-provided keys (SSE-C) B. Server-side encryption with Amazon S3 managed keys (SSE-S3) C. Server-side encryption with AWS KMS (SSE-KMS) customer master keys (CMKs) with manual rotation D. Server-side encryption with AWS KMS (SSE-KMS) customer master keys (CMKs) with automatic rotation
Ans. D - "operationally efficient" https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html
Question #413Topic 1 A company that operates a web application on premises is preparing to launch a newer version of the application on AWS. The company needs to route requests to either the AWS-hosted or the on-premises-hosted application based on the URL query string. The on-premises application is not available from the internet, and a VPN connection is established between Amazon VPC and the company's data center. The company wants to use an Application Load Balancer (ALB) for this launch. Which solution meets these requirements? A. Use two ALBs: one for on-premises and one for the AWS resource. Add hosts to each target group of each ALB. Route with Amazon Route 53 based on the URL query string. B. Use two ALBs: one for on-premises and one for the AWS resource. Add hosts to the target group of each ALB. Create a software router on an EC2 instance based on the URL query string. C. Use one ALB with two target groups: one for the AWS resource and one for on premises. Add hosts to each target group of the ALB. Configure listener rules based on the URL query string. D. Use one ALB with two AWS Auto Scaling groups: one for the AWS resource and one for on premises. Add hosts to each Auto Scaling group. Route with Amazon Route 53 based on the URL query string.
Ans: C. After research: A - I don't find anything about routing on a URL query string in Route 53. B - Creating a software router on an EC2 instance is not a good option. C - You can use listener rules to route on the query string, and having only one ALB is easier to manage. D - An Auto Scaling group for on-premises hosts doesn't make sense to me. https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html#query-string-conditions
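A hedged boto3 sketch of the listener rule in C (ARNs, the query-string key, and the value are placeholders). Requests matching the query string go to the AWS target group; everything else falls through to the default action pointing at the on-premises target group (IP targets reachable over the VPN).

import boto3

elbv2 = boto3.client("elbv2")

elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/web/abc/def",
    Priority=10,
    Conditions=[{
        "Field": "query-string",
        "QueryStringConfig": {"Values": [{"Key": "version", "Value": "new"}]},
    }],
    Actions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/aws-app/123",
    }],
)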
Question #202Topic 1 A company wants to host a web application on AWS that will communicate to a database within a VPC. The application should be highly available.What should a solutions architect recommend? A. Create two Amazon EC2 instances to host the web servers behind a load balancer, and then deploy the database on a large instance. B. Deploy a load balancer in multiple Availability Zones with an Auto Scaling group for the web servers, and then deploy Amazon RDS in multiple Availability Zones. C. Deploy a load balancer in the public subnet with an Auto Scaling group for the web servers, and then deploy the database on an Amazon EC2 instance in the private subnet. D. Deploy two web servers with an Auto Scaling group, configure a domain that points to the two web servers, and then deploy a database architecture in multiple Availability Zones.
Ans=B. Deploy a load balancer in multiple Availability Zones with an Auto Scaling group for the web servers, and then deploy Amazon RDS in multiple Availability Zones.
Question #208Topic 1 A company has a large Microsoft SharePoint deployment running on-premises that requires Microsoft Windows shared file storage. The company wants to migrate this workload to the AWS Cloud and is considering various storage options. The storage solution must be highly available and integrated with Active Directory for access control. Which solution will satisfy these requirements? A. Configure Amazon EFS storage and set the Active Directory domain for authentication. B. Create an SMB file share on an AWS Storage Gateway file gateway in two Availability Zones. C. Create an Amazon S3 bucket and configure Microsoft Windows Server to mount it as a volume. D. Create an Amazon FSx for Windows File Server file system on AWS and set the Active Directory domain for authentication.
Ans=D. Create an Amazon FSx for Windows File Server file system on AWS and set the Active Directory domain for authentication.
Question #204Topic 1 A database is on an Amazon RDS MySQL 5.6 Multi-AZ DB instance that experiences highly dynamic reads. Application developers notice a significant slowdown when testing read performance from a secondary AWS Region. The developers want a solution that provides less than 1 second of read replication latency.What should the solutions architect recommend? A. Install MySQL on Amazon EC2 in the secondary Region. B. Migrate the database to Amazon Aurora with cross-Region replicas. C. Create another RDS for MySQL read replica in the secondary Region. D. Implement Amazon ElastiCache to improve database query performance.
Answer B: Aurora replicas: cross-Region replication typically under 1 second. RDS read replicas: cross-Region replication takes seconds.
Question #216Topic 1 A company is building a website that relies on reading and writing to an Amazon DynamoDB database. The traffic associated with the website predictably peaks during business hours on weekdays and declines overnight and during weekends. A solutions architect needs to design a cost-effective solution that can handle the load.What should the solutions architect do to meet these requirements? A. Enable DynamoDB Accelerator (DAX) to cache the data. B. Enable Multi-AZ replication for the DynamoDB database. C. Enable DynamoDB auto scaling when creating the tables. D. Enable DynamoDB On-Demand capacity allocation when creating the tables.
Answer C. Enable DynamoDB auto scaling when creating the tables. Traffic is predictable according to the scenario, so it is cost-effective to enable auto scaling on the tables. 'DynamoDB auto scaling reduces the unused capacity in the area between the provisioned and consumed capacity, giving an improved ratio of consumed to provisioned capacity, which reduces the wasted overhead while providing sufficient operating capacity. With on-demand, DynamoDB instantly allocates capacity as it is needed. There is no concept of provisioned capacity, and there is no delay waiting for CloudWatch thresholds or the subsequent table updates. On-demand is ideal for bursty, new, or unpredictable workloads whose traffic can spike in seconds or minutes, and when underprovisioned capacity would impact the user experience.'
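To show what C looks like, here is a minimal boto3 sketch for the read side (the table name and capacity numbers are placeholders; repeat the same two calls for WriteCapacityUnits):

import boto3

aas = boto3.client("application-autoscaling")

# Register the table's read capacity as a scalable target...
aas.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/WebsiteTraffic",                 # placeholder table name
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    MinCapacity=5,
    MaxCapacity=500,
)

# ...then track 70% consumed-to-provisioned utilization.
aas.put_scaling_policy(
    PolicyName="read-target-tracking",
    ServiceNamespace="dynamodb",
    ResourceId="table/WebsiteTraffic",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
        },
    },
)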
Question #235Topic 1 A company needs a secure connection between its on-premises environment and AWS. This connection does not need high bandwidth and will handle a small amount of traffic. The connection should be set up quickly.What is the MOST cost-effective method to establish this type of connection? A. Implement a client VPN. B. Implement AWS Direct Connect. C. Implement a bastion host on Amazon EC2. D. Implement an AWS Site-to-Site VPN connection.
Answer D They are talking about connection between on-prem environment and AWS. So not client connections. So this has to be s2s VPN.
Question #219Topic 1 A solutions architect must design a solution that uses Amazon CloudFront with an Amazon S3 origin to store a static website. The company's security policy requires that all website traffic be inspected by AWS WAF. How should the solutions architect comply with these requirements? A. Configure an S3 bucket policy to accept requests coming from the AWS WAF Amazon Resource Name (ARN) only. B. Configure Amazon CloudFront to forward all incoming requests to AWS WAF before requesting content from the S3 origin. C. Configure a security group that allows Amazon CloudFront IP addresses to access Amazon S3 only. Associate AWS WAF to CloudFront. D. Configure Amazon CloudFront and Amazon S3 to use an origin access identity (OAI) to restrict access to the S3 bucket. Enable AWS WAF on the distribution.
Answer D. Use an OAI so the S3 origin only accepts requests coming through CloudFront, and enable AWS WAF on the CloudFront distribution.
Question #432Topic 1 A company designs a mobile app for its customers to upload photos to a website. The app needs a secure login with multi-factor authentication (MFA). The company wants to limit the initial build time and the maintenance of the solution.Which solution should a solutions architect recommend to meet these requirements? A. Use Amazon Cognito Identity with SMS-based MFA. B. Edit IAM policies to require MFA for all users. C. Federate IAM against the corporate Active Directory that requires MFA. D. Use Amazon API Gateway and require server-side encryption (SSE) for photos.
Answer is A. https://aws.amazon.com/cognito/ Amazon Cognito lets you add user sign-up, sign-in, and access control to your web and mobile apps quickly and easily. Amazon Cognito scales to millions of users and supports sign-in with social identity providers, such as Apple, Facebook, Google, and Amazon, and enterprise identity providers via SAML 2.0 and OpenID Connect.
Question #431Topic 1 A user owns a MySQL database that is accessed by various clients who expect, at most, 100 ms latency on requests. Once a record is stored in the database, it is rarely changed. Clients only access one record at a time.Database access has been increasing exponentially due to increased client demand. The resultant load will soon exceed the capacity of the most expensive hardware available for purchase. The user wants to migrate to AWS, and is willing to change database systems.Which service would alleviate the database load issue and offer virtually unlimited scalability for the future? A. Amazon RDS B. Amazon DynamoDB C. Amazon Redshift D. AWS Data Pipeline
Answer is B. https://aws.amazon.com/dynamodb/ Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale. It's a fully managed, multi-region, multi-active, durable database with built-in security, backup and restore, and in-memory caching for internet-scale applications. DynamoDB can handle more than 10 trillion requests per day and can support peaks of more than 20 million requests per second.
Question #213Topic 1 A company operates an ecommerce website on Amazon EC2 instances behind an Application Load Balancer (ALB) in an Auto Scaling group. The site is experiencing performance issues related to a high request rate from illegitimate external systems with changing IP addresses. The security team is worried about potential DDoS attacks against the website. The company must block the illegitimate incoming requests in a way that has a minimal impact on legitimate users.What should a solutions architect recommend? A. Deploy Amazon Inspector and associate it with the ALB. B. Deploy AWS WAF, associate it with the ALB, and configure a rate-limiting rule. C. Deploy rules to the network ACLs associated with the ALB to block the incoming traffic. D. Deploy Amazon GuardDuty and enable rate-limiting protection when configuring GuardDuty.
Answer is B. New Rate-Based Rules Today we are adding Rate-based Rules to WAF, giving you control of when IP addresses are added to and removed from a blacklist, along with the flexibility to handle exceptions and special cases:
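For illustration, a hedged sketch of B using the current WAFv2 boto3 API (the names, the 2,000-request limit, and the ALB ARN are placeholders; the rate limit is evaluated per source IP over a rolling 5-minute window):

import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

acl = wafv2.create_web_acl(
    Name="ecommerce-rate-limit",
    Scope="REGIONAL",                        # REGIONAL for an ALB; CLOUDFRONT for CloudFront
    DefaultAction={"Allow": {}},
    Rules=[{
        "Name": "rate-limit-per-ip",
        "Priority": 1,
        "Statement": {"RateBasedStatement": {"Limit": 2000, "AggregateKeyType": "IP"}},
        "Action": {"Block": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "rateLimitPerIp",
        },
    }],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "ecommerceRateLimit",
    },
)

# Attach the web ACL to the ALB (placeholder ARN).
wafv2.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/web/abc",
)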
Question #407Topic 1 A company is using Site-to-Site VPN connections for secure connectivity to its AWS Cloud resources from on premises. Due to an increase in traffic across the VPN connections to the Amazon EC2 instances, users are experiencing slower VPN connectivity. Which solution will improve the VPN throughput? A. Implement multiple customer gateways for the same network to scale the throughput. B. Use a transit gateway with equal cost multipath routing and add additional VPN tunnels. C. Configure a virtual private gateway with equal cost multipath routing and multiple channels. D. Increase the number of tunnels in the VPN configuration to scale the throughput beyond the default limit.
Answer is B. https://aws.amazon.com/blogs/networking-and-content-delivery/scaling-vpn-throughput-using-aws-transit-gateway/ With AWS Transit Gateway, you can simplify the connectivity between multiple VPCs and also connect to any VPC attached to AWS Transit Gateway with a single VPN connection. AWS Transit Gateway also enables you to scale the IPsec VPN throughput with equal cost multi-path (ECMP) routing support over multiple VPN tunnels. A single VPN tunnel still has a maximum throughput of 1.25 Gbps. If you establish multiple VPN tunnels to an ECMP-enabled transit gateway, it can scale beyond the default limit of 1.25 Gbps.
Question #438Topic 1 A company has two AWS accounts: Production and Development. There are code changes ready in the Development account to push to the Production account.In the alpha phase, only two senior developers on the development team need access to the Production account. In the beta phase, more developers might need access to perform testing as well.What should a solutions architect recommend? A. Create two policy documents using the AWS Management Console in each account. Assign the policy to developers who need access. B. Create an IAM role in the Development account. Give one IAM role access to the Production account. Allow developers to assume the role. C. Create an IAM role in the Production account with the trust policy that specifies the Development account. Allow developers to assume the role. D. Create an IAM group in the Production account and add it as a principal in the trust policy that specifies the Production account. Add developers to the group.
Answer is C https://aws.amazon.com/blogs/security/how-to-use-trust-policies-with-iam-roles/: "One AWS account accesses another AWS account - This use case is commonly referred to as a cross-account role pattern. This allows human or machine IAM principals from other AWS accounts to assume this role and act on resources in this account." "Trust relationship - This policy defines which principals can assume the role, and under which conditions. This is sometimes referred to as a resource-based policy for the IAM role. We'll refer to this policy simply as the 'trust policy'."
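A minimal boto3 sketch of C, run in the Production account (the account ID and role name are placeholders). Widening access in the beta phase then just means granting more Development-account developers permission to call sts:AssumeRole on this role.

import boto3
import json

iam = boto3.client("iam")   # run with credentials in the Production account

# Trust policy: only principals from the Development account may assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111111111111:root"},  # Development account (placeholder)
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="ProdDeploymentAccess",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)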
Question #424Topic 1 A company is planning to migrate a legacy application to AWS. The application currently uses NFS to communicate to an on-premises storage solution to store application data. The application cannot be modified to use any other communication protocols other than NFS for this purpose.Which storage solution should a solutions architect recommend for use after the migration? A. AWS DataSync B. Amazon Elastic Block Store (Amazon EBS) C. Amazon Elastic File System (Amazon EFS) D. Amazon EMR File System (Amazon EMRFS)
Answer is C https://aws.amazon.com/efs/ Amazon Elastic File System (Amazon EFS) provides a simple, serverless, set-and-forget, elastic file system that lets you share file data without provisioning or managing storage. It can be used with AWS Cloud services and on-premises resources, and is built to scale on demand to petabytes without disrupting applications. With Amazon EFS, you can grow and shrink your file systems automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth.
Question #236Topic 1 A company uses Application Load Balancers (ALBs) in different AWS Regions. The ALBs receive inconsistent traffic that can spike and drop throughout the year. The company's networking team needs to allow the IP addresses of the ALBs in the on-premises firewall to enable connectivity. Which solution is the MOST scalable with minimal configuration changes? A. Write an AWS Lambda script to get the IP addresses of the ALBs in different Regions. Update the on-premises firewall's rule to allow the IP addresses of the ALBs. B. Migrate all ALBs in different Regions to the Network Load Balancer (NLBs). Update the on-premises firewall's rule to allow the Elastic IP addresses of all the NLBs. C. Launch AWS Global Accelerator. Register the ALBs in different Regions to the accelerator. Update the on-premises firewall's rule to allow static IP addresses associated with the accelerator. D. Launch a Network Load Balancer (NLB) in one Region. Register the private IP addresses of the ALBs in different Regions with the NLB. Update the on-premises firewall's rule to allow the Elastic IP address attached to the NLB.
Answer is C https://aws.amazon.com/global-accelerator/faqs/: "Associate the static IP addresses provided by AWS Global Accelerator to regional AWS resources or endpoints, such as Network Load Balancers, Application Load Balancers, EC2 Instances, and Elastic IP addresses"
Question #441Topic 1 A company's HTTP application is behind a Network Load Balancer (NLB). The NLB's target group is configured to use an Amazon EC2 Auto Scaling group with multiple EC2 instances that run the web service. The company notices that the NLB is not detecting HTTP errors for the application. These errors require a manual restart of the EC2 instances that run the web service. The company needs to improve the application's availability without writing custom scripts or code. What should a solutions architect do to meet these requirements? A. Enable HTTP health checks on the NLB, supplying the URL of the company's application. B. Add a cron job to the EC2 instances to check the local application's logs once each minute. If HTTP errors are detected, the application will restart. C. Replace the NLB with an Application Load Balancer. Enable HTTP health checks by supplying the URL of the company's application. Configure an Auto Scaling action to replace unhealthy instances. D. Create an Amazon CloudWatch alarm that monitors the UnhealthyHostCount metric for the NLB. Configure an Auto Scaling action to replace unhealthy instances when the alarm is in the ALARM state.
Answer is C. NLB is layer 4 for TCP and UDP; ALB is layer 7 and best suited for HTTP and HTTPS. With an ALB, the HTTP health checks can detect application-level errors, and configuring the Auto Scaling group to use ELB health checks lets unhealthy instances be replaced automatically without any custom scripts.
Question #412Topic 1 A company is migrating a Linux-based web server group to AWS. The web servers must access files in a shared file store for some content. To meet the migration date, minimal changes can be made.What should a solutions architect do to meet these requirements? A. Create an Amazon S3 Standard bucket with access to the web server. B. Configure an Amazon CloudFront distribution with an Amazon S3 bucket as the origin. C. Create an Amazon Elastic File System (Amazon EFS) volume and mount it on all web servers. D. Configure Amazon Elastic Block Store (Amazon EBS) Provisioned IOPS SSD (io1) volumes and mount them on all web servers.
Answer is C. https://aws.amazon.com/efs/ Amazon Elastic File System (Amazon EFS) provides a simple, serverless, set-and-forget, elastic file system that lets you share file data without provisioning or managing storage. It can be used with AWS Cloud services and on-premises resources, and is built to scale on demand to petabytes without disrupting applications. With Amazon EFS, you can grow and shrink your file systems automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth.
Question #425Topic 1 An application calls a service run by a vendor. The vendor charges based on the number of calls. The finance department needs to know the number of calls that are made to the service to validate the billing statements.How can a solutions architect design a system to durably store the number of calls without requiring changes to the application? A. Call the service through an internet gateway. B. Decouple the application from the service with an Amazon Simple Queue Service (Amazon SQS) queue. C. Publish a custom Amazon CloudWatch metric that counts calls to the service. D. Call the service through a VPC peering connection.
Answer is C. https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/working_with_metrics.html Metrics are data about the performance of your systems. By default, many services provide free metrics for resources (such as Amazon EC2 instances, Amazon EBS volumes, and Amazon RDS DB instances). You can also enable detailed monitoring for some resources, such as your Amazon EC2 instances, or publish your own application metrics. Amazon CloudWatch can load all the metrics in your account (both AWS resource metrics and application metrics that you provide) for search, graphing, and alarms.
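As a sketch of C (the namespace, metric, and dimension names are placeholders), each observed call to the vendor service could be published as one data point; CloudWatch stores the metric durably and the finance team can sum it over any billing period.

import boto3

cloudwatch = boto3.client("cloudwatch")

# Publish one count per observed call to the vendor service.
cloudwatch.put_metric_data(
    Namespace="VendorService",
    MetricData=[{
        "MetricName": "CallsMade",
        "Value": 1,
        "Unit": "Count",
        "Dimensions": [{"Name": "Service", "Value": "billing-api"}],
    }],
)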
Question #406Topic 1 A company runs an application that uses multiple Amazon EC2 instances to gather data from its users. The data is then processed and transferred to Amazon S3 for long-term storage. A review of the application shows that there were long periods of time when the EC2 instances were not being used. A solutions architect needs to design a solution that optimizes utilization and reduces costs.Which solution meets these requirements? A. Use Amazon EC2 in an Auto Scaling group with On-Demand instances. B. Build the application to use Amazon Lightsail with On-Demand Instances. C. Create an Amazon CloudWatch cron job to automatically stop the EC2 instances when there is no activity. D. Redesign the application to use an event-driven design with Amazon Simple Queue Service (Amazon SQS) and AWS Lambda.
Answer is D. Lambda and SQS are cheapest. With AWS Lambda, you pay only for what you use. You are charged based on the number of requests for your functions and the duration, the time it takes for your code to execute. For SQS, the first 1 million requests per month are free.
Question #428Topic 1 A company hosts multiple production applications. One of the applications consists of resources from Amazon EC2, AWS Lambda, Amazon RDS, Amazon Simple Notification Service (Amazon SNS), and Amazon Simple Queue Service (Amazon SQS) across multiple AWS Regions. All company resources are tagged with a tag name of "application" and a value that corresponds to each application. A solutions architect must provide the quickest solution for identifying all of the tagged components. Which solution meets these requirements? A. Use AWS CloudTrail to generate a list of resources with the application tag. B. Use the AWS CLI to query each service across all Regions to report the tagged components. C. Run a query in Amazon CloudWatch Logs Insights to report on the components with the application tag. D. Run a query with the AWS Resource Groups Tag Editor to report on the resources globally with the application tag.
Answer is D. https://docs.aws.amazon.com/ARG/latest/userguide/tag-editor.html Tags are words or phrases that act as metadata that you can use to identify and organize your AWS resources. A resource can have up to 50 user-applied tags. It can also have read-only system tags. Each tag consists of a key and one optional value.
Question #429Topic 1 A development team is deploying a new product on AWS and is using AWS Lambda as part of the deployment. The team allocates 512 MB of memory for one of the Lambda functions. With this memory allocation, the function is completed in 2 minutes. The function runs millions of times monthly, and the development team is concerned about cost. The team conducts tests to see how different Lambda memory allocations affect the cost of the function.Which steps will reduce the Lambda costs for the product? (Choose two.) A. Increase the memory allocation for this Lambda function to 1,024 MB if this change causes the execution time of each function to be less than 1 minute. B. Increase the memory allocation for this Lambda function to 1,024 MB if this change causes the execution time of each function to be less than 90 seconds. C. Reduce the memory allocation for this Lambda function to 256 MB if this change causes the execution time of each function to be less than 4 minutes. D. Increase the memory allocation for this Lambda function to 2,048 MB if this change causes the execution time of each function to be less than 1 minute. E. Reduce the memory allocation for this Lambda function to 256 MB if this change causes the execution time of each function to be less than 5 minutes.
Answers are A and C. The reason is simple maths: Lambda bills duration in GB-seconds, i.e. memory multiplied by execution time. Today the function uses 512 MB for 2 minutes, which is 60 GB-seconds per invocation. If A is chosen, the memory doubles but the execution time drops below 1 minute, so the GB-seconds billed go down. C is better and more efficient than E: at 256 MB the break-even duration is 4 minutes, while E allows up to 5 minutes, which could cost more than today. A tests whether additional memory improves the function's performance enough; C tests whether lower memory degrades performance little enough.
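The break-even arithmetic, spelled out (Lambda duration billing is proportional to GB-seconds; request charges are identical across options, so they cancel out):

# GB-seconds = (memory in GB) x (duration in seconds)
def gb_seconds(memory_mb: int, duration_s: float) -> float:
    return (memory_mb / 1024) * duration_s

print(gb_seconds(512, 120))    # 60.0  -> today's cost per invocation
print(gb_seconds(1024, 60))    # 60.0  -> A: cheaper only because duration is under 60 s
print(gb_seconds(1024, 90))    # 90.0  -> B: can cost more than today
print(gb_seconds(256, 240))    # 60.0  -> C: cheaper only because duration is under 240 s
print(gb_seconds(256, 300))    # 75.0  -> E: can cost more than today
print(gb_seconds(2048, 60))    # 120.0 -> D: more expensive even at 60 s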
Question #231Topic 1 A company stores user data in AWS. The data is used continuously with peak usage during business hours. Access patterns vary, with some data not being used for months at a time. A solutions architect must choose a cost-effective solution that maintains the highest level of durability while maintaining high availability.Which storage solution meets these requirements? A. Amazon S3 Standard B. Amazon S3 Intelligent-Tiering C. Amazon S3 Glacier Deep Archive D. Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)
B
Question #422Topic 1 A solutions architect plans to convert a company's monolithic web application into a multi-tier application. The company wants to avoid managing its own infrastructure. The minimum requirements for the web application are high availability, scalability, and regional low latency during peak hours. The solution should also store and retrieve data with millisecond latency using the application's API. Which solution meets these requirements? A. Use AWS Fargate to host the web application with backend Amazon RDS Multi-AZ DB instances. B. Use Amazon API Gateway with an edge-optimized API endpoint, AWS Lambda for compute, and Amazon DynamoDB as the data store. C. Use an Amazon Route 53 routing policy with geolocation that points to an Amazon S3 bucket with static website hosting and Amazon DynamoDB as the data store. D. Use an Amazon CloudFront distribution that points to an Elastic Load Balancer with an Amazon EC2 Auto Scaling group, along with Amazon RDS Multi-AZ DB instances.
B
Question #426Topic 1 A company wants to reduce its Amazon S3 storage costs in its production environment without impacting durability or performance of the stored objects.What is the FIRST step the company should take to meet these objectives? A. Enable Amazon Macie on the business-critical S3 buckets to classify the sensitivity of the objects. B. Enable S3 analytics to identify S3 buckets that are candidates for transitioning to S3 Standard-Infrequent Access (S3 Standard-IA). C. Enable versioning on all business-critical S3 buckets. D. Migrate the objects in all S3 buckets to S3 Intelligent-Tiering.
B A - Macie is for DLP & PII : unrelated B - Matches the purpose (reduce cost, keep performance & durability), and it has to be done FIRST C - Could be a good option if we had more context, maybe to prevent data loss if changing storage class of critical data to a one zone IA class ? Hard to decide. D - Cannot be intelligent tiering since it can move objects to archive & deep archive, thus impacting performance of the stored object => I go with B (d)
Question #238Topic 1 A company uses a legacy on-premises analytics application that operates on gigabytes of .csv files and represents months of data. The legacy application cannot handle the growing size of .csv files. New .csv files are added daily from various data sources to a central on-premises storage location. The company wants to continue to support the legacy application while users learn AWS analytics services. To achieve this, a solutions architect wants to maintain two synchronized copies of all the .csv files on-premises and in Amazon S3. Which solution should the solutions architect recommend? A. Deploy AWS DataSync on-premises. Configure DataSync to continuously replicate the .csv files between the company's on-premises storage and the company's S3 bucket. B. Deploy an on-premises file gateway. Configure data sources to write the .csv files to the file gateway. Point the legacy analytics application to the file gateway. The file gateway should replicate the .csv files to Amazon S3. C. Deploy an on-premises volume gateway. Configure data sources to write the .csv files to the volume gateway. Point the legacy analytics application to the volume gateway. The volume gateway should replicate data to Amazon S3. D. Deploy AWS DataSync on-premises. Configure DataSync to continuously replicate the .csv files between on-premises and Amazon Elastic File System (Amazon EFS). Enable replication from Amazon EFS to the company's S3 bucket.
B. A - DataSync is for one-time data migration, not for ongoing synchronization. B - Sounds like the best use case for a file gateway. C - We're not really backing up volumes. D - Same issue as A.
Question #230Topic 1 A company wants to migrate its MySQL database from on premises to AWS. The company recently experienced a database outage that significantly impacted the business. To ensure this does not happen again, the company wants a reliable database solution on AWS that minimizes data loss and stores every transaction on at least two nodes.Which solution meets these requirements? A. Create an Amazon RDS DB instance with synchronous replication to three nodes in three Availability Zones. B. Create an Amazon RDS MySQL DB instance with Multi-AZ functionality enabled to synchronously replicate the data. C. Create an Amazon RDS MySQL DB instance and then create a read replica in a separate AWS Region that synchronously replicates the data. D. Create an Amazon EC2 instance with a MySQL engine installed that triggers an AWS Lambda function to synchronously replicate the data to an Amazon RDS MySQL DB instance.
B. A - Sounds wrong; you would simply enable a Multi-AZ deployment instead. B - Sounds about right. C - Read replicas are asynchronous and don't make the DB more reliable on their own. D - Nope, self-managing MySQL on EC2 is not recommended.
Question #403Topic 1 A company is preparing to migrate its on-premises application to AWS. The application consists of application servers and a Microsoft SQL Server database. The database cannot be migrated to a different engine because SQL Server features are used in the application's .NET code. The company wants to attain the greatest availability possible while minimizing operational and management overhead. What should a solutions architect do to accomplish this? A. Install SQL Server on Amazon EC2 in a Multi-AZ deployment. B. Migrate the data to Amazon RDS for SQL Server in a Multi-AZ deployment. C. Deploy the database on Amazon RDS for SQL Server with Multi-AZ Replicas. D. Migrate the data to Amazon RDS for SQL Server in a cross-Region Multi-AZ deployment.
B
Answer is B and not D because of the Microsoft SQL Server Multi-AZ deployment notes and recommendations. The following are some restrictions when working with Multi-AZ deployments for Microsoft SQL Server DB instances:
- Cross-Region Multi-AZ isn't supported.
- You can't configure the secondary DB instance to accept database read activity.
- Multi-AZ with Always On Availability Groups (AGs) supports in-memory optimization.
- Multi-AZ with Always On Availability Groups (AGs) doesn't support Kerberos authentication for the availability group listener. This is because the listener has no Service Principal Name (SPN).
- You can't rename a database on a SQL Server DB instance that is in a SQL Server Multi-AZ deployment. If you need to rename a database on such an instance, first turn off Multi-AZ for the DB instance, then rename the database. Finally, turn Multi-AZ back on for the DB instance.
- You can only restore Multi-AZ DB instances that are backed up using the full recovery model.
Question #225Topic 1 A company with facilities in North America, Europe, and Asia is designing a new distributed application to optimize its global supply chain and manufacturing process. The orders booked on one continent should be visible to all Regions in a second or less. The database should be able to support failover with a short Recovery Time Objective (RTO). The uptime of the application is important to ensure that manufacturing is not impacted. What should a solutions architect recommend? A. Use Amazon DynamoDB global tables. B. Use Amazon Aurora Global Database. C. Use Amazon RDS for MySQL with a cross-Region read replica. D. Use Amazon RDS for PostgreSQL with a cross-Region read replica.
B. There are 2 important points in the question: 1) write propagation across Regions and 2) recovery should be very short. That logically eliminates C and D. For point 1, Aurora Global Database replicates in under a second and DynamoDB global tables replicate in milliseconds. The real difference is recovery: DynamoDB global tables don't document a fast Regional failover (DynamoDB's point-in-time recovery is a different feature, not recovery in seconds), whereas Aurora Global Database keeps a secondary cluster running and can quickly promote it to primary if the primary Region fails. https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GlobalTables.html https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database.html Comparing both, I will go with B.
Question #214Topic 1 A company receives structured and semi-structured data from various sources once every day. A solutions architect needs to design a solution that leverages big data processing frameworks. The data should be accessible using SQL queries and business intelligence tools.What should the solutions architect recommend to build the MOST high-performing solution? A. Use AWS Glue to process data and Amazon S3 to store data. B. Use Amazon EMR to process data and Amazon Redshift to store data. C. Use Amazon EC2 to process data and Amazon Elastic Block Store (Amazon EBS) to store data. D. Use Amazon Kinesis Data Analytics to process data and Amazon Elastic File System (Amazon EFS) to store data.
B - Big data - EMR
Question #224Topic 1 As part of budget planning, management wants a report of AWS billed items listed by user. The data will be used to create department budgets. A solutions architect needs to determine the most efficient way to obtain this report information.Which solution meets these requirements? A. Run a query with Amazon Athena to generate the report. B. Create a report in Cost Explorer and download the report. C. Access the bill details from the billing dashboard and download the bill. D. Modify a cost budget in AWS Budgets to alert with Amazon Simple Email Service (Amazon SES).
B is ok - https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/ce-reports.html
Question #455Topic 1 A company with a single AWS account runs its internet-facing containerized web application on an Amazon Elastic Kubernetes Service (Amazon EKS) cluster.The EKS cluster is placed in a private subnet of a VPC. System administrators access the EKS cluster through a bastion host on a public subnet.A new corporate security policy requires the company to avoid the use of bastion hosts. The company also must not allow internet connectivity to the EKS cluster.Which solution meets these requirements MOST cost-effectively? A. Set up an AWS Direct Connect connection. B. Create a transit gateway. C. Establish a VPN connection. D. Use AWS Storage Gateway.
B, or C (costlier). https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html Accessing a private only API server If you have disabled public access for your cluster's Kubernetes API server endpoint, you can only access the API server from within your VPC or a connected network. Here are a few possible ways to access the Kubernetes API server endpoint: Connected network - Connect your network to the VPC with an AWS transit gateway or other connectivity option and then use a computer in the connected network. You must ensure that your Amazon EKS control plane security group contains rules to allow ingress traffic on port 443 from your connected network. Amazon EC2 bastion host - You can launch an Amazon EC2 instance into a public subnet in your cluster's VPC and then log in via SSH into that instance to run kubectl commands. For more information, see Linux bastion hosts on AWS. You must ensure that your Amazon EKS control plane security group contains rules to allow ingress traffic on port 443 from your bastion host. For more information, see Amazon EKS security group considerations.
Question #218Topic 1 A company has data stored in an on-premises data center that is used by several on-premises applications. The company wants to maintain its existing application environment and be able to use AWS services for data analytics and future visualizations.Which storage service should a solutions architect recommend? A. Amazon Redshift B. AWS Storage Gateway for files C. Amazon Elastic Block Store (Amazon EBS) D. Amazon Elastic File System (Amazon EFS)
B use case for file gateway: "Hybrid cloud workflows using data generated by on-premises applications for processing by AWS services such as machine learning, big data analytics or serverless functions."
Question #411Topic 1 A company has an application workflow that uses an AWS Lambda function to download and decrypt files from Amazon S3. These files are encrypted using AWS Key Management Service Customer Master Keys (AWS KMS CMKs). A solutions architect needs to design a solution that will ensure the required permissions are set correctly. Which combination of actions accomplish this? (Choose two.) A. Attach the kms:decrypt permission to the Lambda function's resource policy. B. Grant the decrypt permission for the Lambda IAM role in the KMS key's policy. C. Grant the decrypt permission for the Lambda resource policy in the KMS key's policy. D. Create a new IAM policy with the kms:decrypt permission and attach the policy to the Lambda function. E. Create a new IAM role with the kms:decrypt permission and attach the execution role to the Lambda function.
B,E
For E: in your AWS account, go to Lambda > Functions > Create function > Change default execution role > Create a new role with basic Lambda permissions. "Lambda will create an execution role named TestLambda-role-zfkzos11, with permission to upload logs to Amazon CloudWatch Logs."
For B: go to KMS > Customer Managed Keys > Define Usage Permissions (for TestLambda-role-zfkzos11). In Review, Key Policy, you will see the following statement:
{
    "Sid": "Allow use of the key",
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::561296045111:role/service-role/TestLambda-role-zfkzos11"
    },
    "Action": [
        "kms:Encrypt",
        "kms:Decrypt",
        "kms:ReEncrypt*",
        "kms:GenerateDataKey*",
        "kms:DescribeKey"
    ],
    "Resource": "*"
},
Question #215Topic 1 A company is hosting an election reporting website on AWS for users around the world. The website uses Amazon EC2 instances for the web and application tiers in an Auto Scaling group with Application Load Balancers. The database tier uses an Amazon RDS for MySQL database. The website is updated with election results once an hour and has historically observed hundreds of users accessing the reports.The company is expecting a significant increase in demand because of upcoming elections in different countries. A solutions architect must improve the website's ability to handle additional demand while minimizing the need for additional EC2 instances.Which solution will meet these requirements? A. Launch an Amazon ElastiCache cluster to cache common database queries. B. Launch an Amazon CloudFront web distribution to cache commonly requested website content. C. Enable disk-based caching on the EC2 instances to cache commonly requested website content. D. Deploy a reverse proxy into the design using an EC2 instance with caching enabled for commonly requested website content.
B looks good. The keyword is 'users around the world', which points to CloudFront.
Question #452Topic 1 A solutions architect is designing the architecture for a new web application. The application will run on AWS Fargate containers with an Application Load Balancer (ALB) and an Amazon Aurora PostgreSQL database. The web application will perform primarily read queries against the database.What should the solutions architect do to ensure that the website can scale with increasing traffic? (Choose two.) A. Enable auto scaling on the ALB to scale the load balancer horizontally. B. Configure Aurora Auto Scaling to adjust the number of Aurora Replicas in the Aurora cluster dynamically. C. Enable cross-zone load balancing on the ALB to distribute the load evenly across containers in all Availability Zones. D. Configure an Amazon Elastic Container Service (Amazon ECS) cluster in each Availability Zone to distribute the load across multiple Availability Zones. E. Configure Amazon Elastic Container Service (Amazon ECS) Service Auto Scaling with a target tracking scaling policy that is based on CPU utilization.
B and E. Aurora Auto Scaling adjusts the number of Aurora Replicas for the read-heavy database tier, and ECS Service Auto Scaling with a target tracking scaling policy scales the Fargate tasks.
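A rough boto3 sketch of E (the cluster and service names are made up): register the Fargate service's desired count as a scalable target, then attach a CPU-based target tracking policy.

import boto3

aas = boto3.client("application-autoscaling", region_name="us-east-1")

# Placeholder ECS cluster/service names.
aas.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId="service/web-cluster/web-service",
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=20,
)

aas.put_scaling_policy(
    PolicyName="cpu-target-tracking",
    ServiceNamespace="ecs",
    ResourceId="service/web-cluster/web-service",
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,   # keep average CPU around 60 percent
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
    },
)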
Question #195Topic 1 A website runs a web application that receives a burst of traffic each day at noon. The users upload new pictures and content daily, but have been complaining of timeouts. The architecture uses Amazon EC2 Auto Scaling groups, and the custom application consistently takes 1 minute to initiate upon boot up before responding to user requests.How should a solutions architect redesign the architecture to better respond to changing traffic? A. Configure a Network Load Balancer with a slow start configuration. B. Configure AWS ElastiCache for Redis to offload direct requests to the servers. C. Configure an Auto Scaling step scaling policy with an instance warmup condition. D. Configure Amazon CloudFront to use an Application Load Balancer as the origin.
C
Question #222Topic 1 A company has a 10 Gbps AWS Direct Connect connection from its on-premises servers to AWS. The workloads using the connection are critical. The company requires a disaster recovery strategy with maximum resiliency that maintains the current connection bandwidth at a minimum.What should a solutions architect recommend? A. Set up a new Direct Connect connection in another AWS Region. B. Set up a new AWS managed VPN connection in another AWS Region. C. Set up two new Direct Connect connections: one in the current AWS Region and one in another Region. D. Set up two new AWS managed VPN connections: one in the current AWS Region and one in another Region.
C
Question #423Topic 1 A team has an application that detects new objects being uploaded into an Amazon S3 bucket. The uploads trigger an AWS Lambda function to write object metadata into an Amazon DynamoDB table and an Amazon RDS for PostgreSQL database.Which action should the team take to ensure high availability? A. Enable Cross-Region Replication in the S3 bucket. B. Create a Lambda function for each Availability Zone the application is deployed in. C. Enable Multi-AZ on the RDS for PostgreSQL database. D. Create a DynamoDB stream for the DynamoDB table.
C. DynamoDB automatically spreads the data and traffic for your tables over a sufficient number of servers to handle your throughput and storage requirements, while maintaining consistent and fast performance. All of your data is stored on solid-state disks (SSDs) and is automatically replicated across multiple Availability Zones in an AWS Region, providing built-in high availability and data durability. You can use global tables to keep DynamoDB tables in sync across AWS Regions. The RDS for PostgreSQL database, by contrast, needs Multi-AZ enabled to be highly available, so C is the answer.
Question #408Topic 1 A company has a mobile game that reads most of its metadata from an Amazon RDS DB instance. As the game increased in popularity, developers noticed slowdowns related to the game's metadata load times. Performance metrics indicate that simply scaling the database will not help. A solutions architect must explore all options that include capabilities for snapshots, replication, and sub-millisecond response times.What should the solutions architect recommend to solve these issues? A. Migrate the database to Amazon Aurora with Aurora Replicas. B. Migrate the database to Amazon DynamoDB with global tables. C. Add an Amazon ElastiCache for Redis layer in front of the database. D. Add an Amazon ElastiCache for Memcached layer in front of the database.
C. A and B are out since "scaling the database will not help." Between the two caching options, ElastiCache for Redis supports replication and snapshots while Memcached does not (https://aws.amazon.com/elasticache/redis-vs-memcached/), so C is the answer.
Question #211Topic 1 A solutions architect is creating an application that will handle batch processing of large amounts of data. The input data will be held in Amazon S3 and the output data will be stored in a different S3 bucket. For processing, the application will transfer the data over the network between multiple Amazon EC2 instances.What should the solutions architect do to reduce the overall data transfer costs? A. Place all the EC2 instances in an Auto Scaling group. B. Place all the EC2 instances in the same AWS Region. C. Place all the EC2 instances in the same Availability Zone. D. Place all the EC2 instances in private subnets in multiple Availability Zones.
C. I think the answer is C because the transfer is between EC2 instances, not just between S3 and EC2. "Also, be aware of inter-Availability Zones data transfer charges between Amazon EC2 instances, even within the same region. If possible, the instances in a development or test environment that need to communicate with each other should be co-located within the same Availability Zone to avoid data transfer charges. (This doesn't apply to production workloads which will most likely need to span multiple Availability Zones for high availability.)" https://aws.amazon.com/blogs/mt/using-aws-cost-explorer-to-analyze-data-transfer-costs/
Question #446Topic 1 A company has two VPCs named Management and Production. The Management VPC uses VPNs through a customer gateway to connect to a single device in the data center. The Production VPC uses a virtual private gateway with two attached AWS Direct Connect connections. The Management and Production VPCs both use a single VPC peering connection to allow communication between the applications.What should a solutions architect do to mitigate any single point of failure in this architecture? A. Add a set of VPNs between the Management and Production VPCs. B. Add a second virtual private gateway and attach it to the Management VPC. C. Add a second set of VPNs to the Management VPC from a second customer gateway device. D. Add a second VPC peering connection between the Management VPC and the Production VPC.
C. It eliminates the single point of failure in the Management VPC, which is the single on-premises customer gateway device (A adds VPNs between the VPCs but leaves that device as a single point of failure).
Question #442Topic 1 A company has two VPCs that are located in the us-west-2 Region within the same AWS account. The company needs to allow network traffic between these VPCs. Approximately 500 GB of data transfer will occur between the VPCs each month.What is the MOST cost-effective solution to connect these VPCs? A. Implement AWS Transit Gateway to connect the VPCs. Update the route tables of each VPC to use the transit gateway for inter-VPC communication. B. Implement an AWS Site-to-Site VPN tunnel between the VPCs. Update the route tables of each VPC to use the VPN tunnel for inter-VPC communication. C. Set up a VPC peering connection between the VPCs. Update the route tables of each VPC to use the VPC peering connection for inter-VPC communication. D. Set up a 1 GB AWS Direct Connect connection between the VPCs. Update the route tables of each VPC to use the Direct Connect connection for inter-VPC communication.
C. Data transfer within the same AWS Region is $0.01/GB, and even free in some cases. Lower cost: with VPC peering you only pay for data transfer charges, while Transit Gateway has an hourly charge per attachment in addition to the data transfer fees. https://aws.amazon.com/about-aws/whats-new/2021/05/amazon-vpc-announces-pricing-change-for-vpc-peering/
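For reference, a minimal boto3 sketch of option C (VPC, route table, and CIDR values are placeholders): create and accept the peering connection, then add a route in each VPC's route table.

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Placeholder VPC and route table IDs for the two VPCs in the same account.
peering = ec2.create_vpc_peering_connection(VpcId="vpc-aaaa1111", PeerVpcId="vpc-bbbb2222")
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Route each VPC's traffic for the other VPC's CIDR through the peering connection.
ec2.create_route(RouteTableId="rtb-aaaa1111", DestinationCidrBlock="10.1.0.0/16",
                 VpcPeeringConnectionId=pcx_id)
ec2.create_route(RouteTableId="rtb-bbbb2222", DestinationCidrBlock="10.0.0.0/16",
                 VpcPeeringConnectionId=pcx_id)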
Question #434Topic 1 A solutions architect needs to host a high performance computing (HPC) workload in the AWS Cloud. The workload will run on hundreds of Amazon EC2 instances and will require parallel access to a shared file system to enable distributed processing of large datasets. Datasets will be accessed across multiple instances simultaneously. The workload requires access latency within 1 ms. After processing has completed, engineers will need access to the dataset for manual postprocessing.Which solution will meet these requirements? A. Use Amazon Elastic File System (Amazon EFS) as a shared file system. Access the dataset from Amazon EFS. B. Mount an Amazon S3 bucket to serve as the shared file system. Perform postprocessing directly from the S3 bucket. C. Use Amazon FSx for Lustre as a shared file system. Link the file system to an Amazon S3 bucket for postprocessing. D. Configure AWS Resource Access Manager to share an Amazon S3 bucket so that it can be mounted to all instances for processing and postprocessing.
C. Amazon FSx for Lustre is a high-performance parallel file system designed for HPC workloads, and it can be linked to an S3 bucket so engineers can access the dataset for postprocessing.
Question #404Topic 1 A company has an application running on Amazon EC2 instances in a private subnet. The application needs to store and retrieve data in Amazon S3. To reduce costs, the company wants to configure its AWS resources in a cost-effective manner.How should the company accomplish this? A. Deploy a NAT gateway to access the S3 buckets. B. Deploy AWS Storage Gateway to access the S3 buckets. C. Deploy an S3 gateway endpoint to access the S3 buckets. D. Deploy an S3 interface endpoint to access the S3 buckets.
C is the right answer. Storage Gateway is for connecting on-premises data to AWS. A gateway endpoint for S3 also carries no additional charge, unlike an interface endpoint, which makes it the cost-effective choice.
Question #229Topic 1 An online photo application lets users upload photos and perform image editing operations. The application offers two classes of service: free and paid. Photos submitted by paid users are processed before those submitted by free users. Photos are uploaded to Amazon S3 and the job information is sent to Amazon SQS.Which configuration should a solutions architect recommend? A. Use one SQS FIFO queue. Assign a higher priority to the paid photos so they are processed first. B. Use two SQS FIFO queues: one for paid and one for free. Set the free queue to use short polling and the paid queue to use long polling. C. Use two SQS standard queues: one for paid and one for free. Configure Amazon EC2 instances to prioritize polling for the paid queue over the free queue. D. Use one SQS standard queue. Set the visibility timeout of the paid photos to zero. Configure Amazon EC2 instances to prioritize visibility settings so paid photos are processed first.
C, check this out: https://acloud.guru/forums/guru-of-the-week/discussion/-L7Be8rOao3InQxdQcXj/
Question #409Topic 1 A company has several Amazon EC2 instances set up in a private subnet for security reasons. These instances host applications that read and write large amounts of data to and from Amazon S3 regularly. Currently, subnet routing directs all the traffic destined for the internet through a NAT gateway. The company wants to optimize the overall cost without impacting the ability of the application to communicate with Amazon S3 or the outside internet.What should a solutions architect do to optimize costs? A. Create an additional NAT gateway. Update the route table to route to the NAT gateway. Update the network ACL to allow S3 traffic. B. Create an internet gateway. Update the route table to route traffic to the internet gateway. Update the network ACL to allow S3 traffic. C. Create a VPC endpoint for Amazon S3. Attach an endpoint policy to the endpoint. Update the route table to direct traffic to the VPC endpoint. D. Create an AWS Lambda function outside of the VPC to handle S3 requests. Attach an IAM policy to the EC2 instances, allowing them to invoke the Lambda function.
C. Create a VPC endpoint for Amazon S3. Attach an endpoint policy to the endpoint. Update the route table to direct traffic to the VPC endpoint.
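A small boto3 sketch of option C (VPC ID, route table ID, and the endpoint policy are placeholders): a gateway endpoint for S3 adds a prefix-list route to the chosen route tables and has no hourly or data processing charge.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder VPC/route table IDs; the wide-open policy is only illustrative and
# would normally be narrowed to the buckets the application needs.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
    PolicyDocument='{"Statement":[{"Effect":"Allow","Principal":"*","Action":"s3:*","Resource":"*"}]}',
)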
Question #414Topic 1 A solutions architect is developing a multiple-subnet VPC architecture. The solution will consist of six subnets in two Availability Zones. The subnets are defined as public, private and dedicated for databases. Only the Amazon EC2 instances running in the private subnets should be able to access a database.Which solution meets these requirements? A. Create a new route table that excludes the route to the public subnets' CIDR blocks. Associate the route table to the database subnets. B. Create a security group that denies ingress from the security group used by instances in the public subnets. Attach the security group to an Amazon RDS DB instance. C. Create a security group that allows ingress from the security group used by instances in the private subnets. Attach the security group to an Amazon RDS DB instance. D. Create a new peering connection between the public subnets and the private subnets. Create a different peering connection between the private subnets and the database subnets.
C. Create a security group that allows ingress from the security group used by instances in the private subnets. Attach the security group to an Amazon RDS DB instance.
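A minimal boto3 sketch of option C (the group IDs are placeholders, and port 3306 assumes a MySQL/Aurora engine): the DB security group references the application tier's security group as its ingress source, so only instances carrying that group can reach the database.

import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-db0000000000000000",   # attached to the RDS DB instance (placeholder)
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [{
            "GroupId": "sg-app0000000000000000",   # SG used by the private-subnet instances (placeholder)
            "Description": "App tier to database",
        }],
    }],
)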
Question #401Topic 1 An online shopping application accesses an Amazon RDS Multi-AZ DB instance. Database performance is slowing down the application. After upgrading to the next-generation instance type, there was no significant performance improvement.Analysis shows approximately 700 IOPS are sustained, common queries run for long durations and memory utilization is high.Which application change should a solutions architect recommend to resolve these issues? A. Migrate the RDS instance to an Amazon Redshift cluster and enable weekly garbage collection. B. Separate the long-running queries into a new Multi-AZ RDS database and modify the application to query whichever database is needed. C. Deploy a two-node Amazon ElastiCache cluster and modify the application to query the cluster first and query the database only if needed. D. Create an Amazon Simple Queue Service (Amazon SQS) FIFO queue for common queries and query it first and query the database only if needed.
C. Deploy a two-node Amazon ElastiCache cluster and modify the application to query the cluster first and query the database only if needed.
Question #203Topic 1 A company's packaged application dynamically creates and returns single-use text files in response to user requests. The company is using Amazon CloudFront for distribution, but wants to further reduce data transfer costs. The company cannot modify the application's source code.What should a solutions architect do to reduce costs? A. Use Lambda@Edge to compress the files as they are sent to users. B. Enable Amazon S3 Transfer Acceleration to reduce the response times. C. Enable caching on the CloudFront distribution to store generated files at the edge. D. Use Amazon S3 multipart uploads to move the files to Amazon S3 before returning them to users.
Correct answer is A: use Lambda@Edge. The question says the response is a single-use text file, meaning each file is used only once, so there is no benefit in caching it on CloudFront because it will never be requested again. What can be done instead is to use Lambda@Edge to compress the file, which reduces its size, so less data is transferred and the transfer charges are lower.
Question #192Topic 1 A company has a custom application running on an Amazon EC2 instance that: reads a large amount of data from Amazon S3, performs a multi-stage analysis, and writes the results to Amazon DynamoDB. The application writes a significant number of large, temporary files during the multi-stage analysis. The process performance depends on the temporary storage performance.What would be the fastest storage option for holding the temporary files? A. Multiple Amazon S3 buckets with Transfer Acceleration for storage. B. Multiple Amazon EBS drives with Provisioned IOPS and EBS optimization. C. Multiple Amazon EFS volumes using the Network File System version 4.1 (NFSv4.1) protocol. D. Multiple instance store volumes with software RAID 0.
D
Question #193Topic 1 A leasing company generates and emails PDF statements every month for all its customers. Each statement is about 400 KB in size. Customers can download their statements from the website for up to 30 days from when the statements were generated. At the end of their 3-year lease, the customers are emailed a ZIP file that contains all the statements.What is the MOST cost-effective storage solution for this situation? A. Store the statements using the Amazon S3 Standard storage class. Create a lifecycle policy to move the statements to Amazon S3 Glacier storage after 1 day. B. Store the statements using the Amazon S3 Glacier storage class. Create a lifecycle policy to move the statements to Amazon S3 Glacier Deep Archive storage after 30 days. C. Store the statements using the Amazon S3 Standard storage class. Create a lifecycle policy to move the statements to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) storage after 30 days. D. Store the statements using the Amazon S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Create a lifecycle policy to move the statements to Amazon S3 Glacier storage after 30 days.
D
Question #194Topic 1 A company recently released a new type of internet-connected sensor. The company is expecting to sell thousands of sensors, which are designed to stream high volumes of data each second to a central location. A solutions architect must design a solution that ingests and stores data so that engineering teams can analyze it in near-real time with millisecond responsiveness.Which solution should the solutions architect recommend? A. Use an Amazon SQS queue to ingest the data. Consume the data with an AWS Lambda function, which then stores the data in Amazon Redshift. B. Use an Amazon SQS queue to ingest the data. Consume the data with an AWS Lambda function, which then stores the data in Amazon DynamoDB. C. Use Amazon Kinesis Data Streams to ingest the data. Consume the data with an AWS Lambda function, which then stores the data in Amazon Redshift. D. Use Amazon Kinesis Data Streams to ingest the data. Consume the data with an AWS Lambda function, which then stores the data in Amazon DynamoDB.
D
Question #454Topic 1 A company has a customer relationship management (CRM) application that stores data in an Amazon RDS DB instance that runs Microsoft SQL Server. The company's IT staff has administrative access to the database. The database contains sensitive data. The company wants to ensure that the data is not accessible to the IT staff and that only authorized personnel can view the data.What should a solutions architect do to secure the data? A. Use client-side encryption with an Amazon RDS managed key. B. Use client-side encryption with an AWS Key Management Service (AWS KMS) customer managed key. C. Use Amazon RDS encryption with an AWS Key Management Service (AWS KMS) default encryption key. D. Use Amazon RDS encryption with an AWS Key Management Service (AWS KMS) customer managed key.
D. A KMS customer managed key lets you create a key policy that defines which users/roles can use the key, so access can be limited to authorized personnel and withheld from the IT staff.
Question #206Topic 1 A company currently has 250 TB of backup files stored in Amazon S3 in a vendor's proprietary format. Using a Linux-based software application provided by the vendor, the company wants to retrieve files from Amazon S3, transform the files to an industry-standard format, and re-upload them to Amazon S3. The company wants to minimize the data transfer charges associated with this conversion.What should a solutions architect do to accomplish this? A. Install the conversion software as an Amazon S3 batch operation so the data is transformed without leaving Amazon S3. B. Install the conversion software onto an on-premises virtual machine. Perform the transformation and re-upload the files to Amazon S3 from the virtual machine. C. Use AWS Snowball Edge devices to export the data and install the conversion software onto the devices. Perform the data transformation and re-upload the files to Amazon S3 from the Snowball Edge devices. D. Launch an Amazon EC2 instance in the same Region as Amazon S3 and install the conversion software onto the instance. Perform the transformation and re- upload the files to Amazon S3 from the EC2 instance.
D. The data is already in S3, and Snowball is for transferring data from a data center to AWS, so B and C are wrong. Lambda doesn't let you install the vendor's software, so batch processing on S3 (A) is not possible. Since the question asks to minimize data transfer charges, D makes sense: run the EC2 instance in the same Region as the S3 bucket, where data transfer between EC2 and S3 is free.
Question #220Topic 1 A company has a 143 TB MySQL database that it wants to migrate to AWS. The plan is to use Amazon Aurora MySQL as the platform going forward. The company has a 100 Mbps AWS Direct Connect connection to Amazon VPC.Which solution meets the company's needs and takes the LEAST amount of time? A. Use a gateway endpoint for Amazon S3. Migrate the data to Amazon S3. Import the data into Aurora. B. Upgrade the Direct Connect link to 500 Mbps. Copy the data to Amazon S3. Import the data into Aurora. C. Order an AWS Snowmobile and copy the database backup to it. Have AWS import the data into Amazon S3. Import the backup into Aurora. D. Order four 50-TB AWS Snowball devices and copy the database backup onto them. Have AWS import the data into Amazon S3. Import the data into Aurora.
D. Just did the calculation: at 100 Mbps the transfer would take roughly 132 days, and a 500 Mbps link would first need to be ordered and would still take about 26 days. Snowmobile is overkill for 143 TB. You guessed it, D is the right answer.
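A quick back-of-the-envelope check of those transfer times (decimal units, ignoring protocol overhead):

# 143 TB over the two link speeds mentioned in the question.
size_bits = 143 * 10**12 * 8          # 143 TB expressed in bits

for mbps in (100, 500):
    seconds = size_bits / (mbps * 10**6)
    print(f"{mbps} Mbps: ~{seconds / 86400:.0f} days")

# Output:
# 100 Mbps: ~132 days
# 500 Mbps: ~26 days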
Question #433Topic 1 A company has an application that uses overnight digital images of products on store shelves to analyze inventory data. The application runs on Amazon EC2 instances behind an Application Load Balancer (ALB) and obtains the images from an Amazon S3 bucket for its metadata to be processed by worker nodes for analysis. A solutions architect needs to ensure that every image is processed by the worker nodes.What should the solutions architect do to meet this requirement in the MOST cost-efficient way? A. Send the image metadata from the application directly to a second ALB for the worker nodes that use an Auto Scaling group of EC2 Spot Instances as the target group. B. Process the image metadata by sending it directly to EC2 Reserved Instances in an Auto Scaling group. With a dynamic scaling policy, use an Amazon CloudWatch metric for average CPU utilization of the Auto Scaling group as soon as the front-end application obtains the images. C. Write messages to Amazon Simple Queue Service (Amazon SQS) when the front-end application obtains an image. Process the images with EC2 On- Demand instances in an Auto Scaling group with instance scale-in protection and a fixed number of instances with periodic health checks. D. Write messages to Amazon Simple Queue Service (Amazon SQS) when the application obtains an image. Process the images with EC2 Spot Instances in an Auto Scaling group with instance scale-in protection and a dynamic scaling policy using a custom Amazon CloudWatch metric for the current number of messages in the queue.
D. A non-critical overnight workload is a fit for Spot. Since the question says overnight, we can assume it is non-critical, especially since it is just inventory that can be redone, so it must be A or D. Ensuring every image is processed calls for SQS messaging to the worker nodes, so it is D.
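One way to build the custom CloudWatch metric that option D's dynamic scaling policy could track (the queue URL, group name, and metric namespace below are made up): publish the SQS backlog divided by the number of in-service workers.

import boto3

sqs = boto3.client("sqs")
cloudwatch = boto3.client("cloudwatch")
autoscaling = boto3.client("autoscaling")

# Placeholder queue URL and Auto Scaling group name.
attrs = sqs.get_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/111122223333/image-metadata",
    AttributeNames=["ApproximateNumberOfMessages"],
)
backlog = int(attrs["Attributes"]["ApproximateNumberOfMessages"])

group = autoscaling.describe_auto_scaling_groups(
    AutoScalingGroupNames=["image-workers"])["AutoScalingGroups"][0]
in_service = max(1, len(group["Instances"]))

# Backlog per instance is the value a target tracking policy can keep near a target.
cloudwatch.put_metric_data(
    Namespace="ImagePipeline",
    MetricData=[{"MetricName": "BacklogPerInstance",
                 "Value": backlog / in_service,
                 "Unit": "Count"}],
)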
Question #456Topic 1 A company has deployed a multiplayer game for mobile devices. The game requires live location tracking of players based on latitude and longitude. The data store for the game must support rapid updates and retrieval of locations.The game uses an Amazon RDS for PostgreSQL DB instance with read replicas to store the location data. During peak usage periods, the database is unable to maintain the performance that is needed for reading and writing updates. The game's user base is increasing rapidly.What should a solutions architect do to improve the performance of the data tier? A. Take a snapshot of the existing DB instance. Restore the snapshot with Multi-AZ enabled. B. Migrate from Amazon RDS to Amazon Elasticsearch Service (Amazon ES) with Kibana. C. Deploy Amazon DynamoDB Accelerator (DAX) in front of the existing DB instance. Modify the game to use DAX. D. Deploy an Amazon ElastiCache for Redis cluster in front of the existing DB instance. Modify the game to use Redis.
D. The answer is D: deploy an Amazon ElastiCache for Redis cluster in front of the existing DB instance and modify the game to use Redis. Keywords: the game requires live location tracking of players based on latitude and longitude, with rapid updates and retrieval.
Question #212Topic 1 A company hosts its core network services, including directory services and DNS, in its on-premises data center. The data center is connected to the AWS Cloud using AWS Direct Connect (DX). Additional AWS accounts are planned that will require quick, cost-effective, and consistent access to these network services.What should a solutions architect implement to meet these requirements with the LEAST amount of operational overhead? A. Create a DX connection in each new account. Route the network traffic to the on-premises servers. B. Configure VPC endpoints in the DX VPC for all required services. Route the network traffic to the on-premises servers. C. Create a VPN connection between each new account and the DX VPC. Route the network traffic to the on-premises servers. D. Configure AWS Transit Gateway between the accounts. Assign DX to the transit gateway and route network traffic to the on-premises servers.
D. AWS Transit Gateway connects VPCs and on-premises networks through a central hub. This simplifies your network and puts an end to complex peering relationships. It acts as a cloud router: each new connection is only made once.
Question #444Topic 1 A company is running a global application. The application's users submit multiple videos that are then merged into a single video file. The application uses a single Amazon S3 bucket in the us-east-1 Region to receive uploads from users. The same S3 bucket provides the download location of the single video file that is produced. The final video file output has an average size of 250 GB.The company needs to develop a solution that delivers faster uploads and downloads of the video files that are stored in Amazon S3. The company will offer the solution as a subscription to users who want to pay for the increased speed.What should a solutions architect do to meet these requirements? A. Enable AWS Global Accelerator for the S3 endpoint. Adjust the application's upload and download links to use the Global Accelerator S3 endpoint for users who have a subscription. B. Enable S3 Cross-Region Replication to S3 buckets in all other AWS Regions. Use an Amazon Route 53 geolocation routing policy to route S3 requests based on the location of users who have a subscription. C. Create an Amazon CloudFront distribution and use the S3 bucket in us-east-1 as an origin. Adjust the application to use the CloudFront URL as the upload and download links for users who have a subscription. D. Enable S3 Transfer Acceleration for the S3 bucket in us-east-1. Configure the application to use the bucket's S3-accelerate endpoint domain name for the upload and download links for users who have a subscription.
D. When you create a CloudFront distribution with an origin pointing to your S3 bucket, you enable caching on Edge locations. Subsequent requests to the same objects will be served from the Edge cache, which is faster for the end user and also reduces the load on your origin. CloudFront is primarily used as a content delivery service. When you enable S3 Transfer Acceleration for your S3 bucket and use <bucket>.s3-accelerate.amazonaws.com instead of the default S3 endpoint, the transfers are performed via the same Edge locations, but the network path is optimized for long-distance large-object uploads. Extra resources and optimizations are used to achieve higher throughput. No caching on Edge locations.
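A short boto3 sketch of option D (the bucket name and file name are placeholders): enable Transfer Acceleration on the bucket, then use a client configured for the s3-accelerate endpoint for subscribed users.

import boto3
from botocore.config import Config

s3 = boto3.client("s3")

# Placeholder bucket name; turn on Transfer Acceleration once per bucket.
s3.put_bucket_accelerate_configuration(
    Bucket="video-uploads-example",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Subscribed users' requests go through the accelerate endpoint.
accelerated = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
accelerated.upload_file("final-cut.mp4", "video-uploads-example", "outputs/final-cut.mp4")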
Question #420Topic 1 A company has an on-premises business application that generates hundreds of files each day. These files are stored on an SMB file share and require a low-latency connection to the application servers. A new company policy states all application-generated files must be copied to AWS. There is already a VPN connection to AWS.The application development team does not have time to make the necessary code modifications to move the application to AWS.Which service should a solutions architect recommend to allow the application to copy files to AWS? A. Amazon Elastic File System (Amazon EFS) B. Amazon FSx for Windows File Server C. AWS Snowball D. AWS Storage Gateway
D - Storage Gateway. My take: the files stay on the Storage Gateway cache with low latency and are copied to AWS as a second copy. FSx in AWS would not provide low latency for the on-premises apps over a VPN to the FSx file system.
Question #234Topic 1 What should a solutions architect do to ensure that all objects uploaded to an Amazon S3 bucket are encrypted? A. Update the bucket policy to deny if the PutObject does not have an s3:x-amz-acl header set. B. Update the bucket policy to deny if the PutObject does not have an s3:x-amz-acl header set to private. C. Update the bucket policy to deny if the PutObject does not have an aws:SecureTransport header set to true. D. Update the bucket policy to deny if the PutObject does not have an x-amz-server-side-encryption header set.
D is correct: https://aws.amazon.com/blogs/security/how-to-prevent-uploads-of-unencrypted-objects-to-amazon-s3/
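For illustration, a bucket policy along the lines of the linked post, applied with boto3 (the bucket name is a placeholder): deny any PutObject request that arrives without the x-amz-server-side-encryption header.

import json
import boto3

s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnencryptedObjectUploads",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::example-bucket/*",
        # "Null": true matches requests where the encryption header is absent.
        "Condition": {"Null": {"s3:x-amz-server-side-encryption": "true"}},
    }],
}

s3.put_bucket_policy(Bucket="example-bucket", Policy=json.dumps(policy))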
Question #453Topic 1 A company captures ordered clickstream data from multiple websites and uses batch processing to analyze the data. The company receives 100 million event records, all approximately 1 KB in size, each day. The company loads the data into Amazon Redshift each night, and business analysts consume the data.The company wants to move toward near-real-time data processing for timely insights. The solution should process the streaming data while requiring the least possible operational overhead.Which combination of AWS services will meet these requirements MOST cost-effectively? (Choose two.) A. Amazon EC2 B. AWS Batch C. Amazon Simple Queue Service (Amazon SQS) D. Amazon Kinesis Data Firehose E. Amazon Kinesis Data Analytics
D,E
Question #427Topic 1 A company is building a web-based application running on Amazon EC2 instances in multiple Availability Zones. The web application will provide access to a repository of text documents totaling about 900 TB in size. The company anticipates that the web application will experience periods of high demand. A solutions architect must ensure that the storage component for the text documents can scale to meet the demand of the application at all times. The company is concerned about the overall cost of the solution.Which storage solution meets these requirements MOST cost-effectively? A. Amazon Elastic Block Store (Amazon EBS) B. Amazon Elastic File System (Amazon EFS) C. Amazon Elasticsearch Service (Amazon ES) D. Amazon S3
D. Amazon S3 is cheapest and can be accessed from anywhere.
Question #417Topic 1 An engineering team is developing and deploying AWS Lambda functions. The team needs to create roles and manage policies in AWS IAM to configure the permissions of the Lambda functions.How should the permissions for the team be configured so they also adhere to the concept of least privilege? A. Create an IAM role with a managed policy attached. Allow the engineering team and the Lambda functions to assume this role. B. Create an IAM group for the engineering team with an IAMFullAccess policy attached. Add all the users from the team to this IAM group. C. Create an execution role for the Lambda functions. Attach a managed policy that has permission boundaries specific to these Lambda functions. D. Create an IAM role with a managed policy attached that has permission boundaries specific to the Lambda functions. Allow the engineering team to assume this role.
D. Create an IAM role with a managed policy attached that has permission boundaries specific to the Lambda functions. Allow the engineering team to assume this role. For users and applications in your account that use Lambda, you manage permissions in a permissions policy that you can apply to IAM users, groups, or roles. To grant permissions to other accounts or AWS services that use your Lambda resources, you use a policy that applies to the resource itself. Option D follows least privilege.
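A rough boto3 sketch of option D (the account ID, policy ARNs, and role name are placeholders): the team assumes a role that carries a permissions boundary scoped to the Lambda functions, so nothing granted through the attached managed policy can exceed that boundary.

import json
import boto3

iam = boto3.client("iam")

# Trust policy letting principals in the account assume the role (placeholder account ID).
trust = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="lambda-engineering",
    AssumeRolePolicyDocument=json.dumps(trust),
    # The boundary policy (placeholder ARN) caps what this role can ever do.
    PermissionsBoundary="arn:aws:iam::111122223333:policy/LambdaFunctionBoundary",
)
iam.attach_role_policy(
    RoleName="lambda-engineering",
    PolicyArn="arn:aws:iam::111122223333:policy/LambdaDeploymentAccess",
)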
Question #450Topic 1 A company's database is hosted on an Amazon Aurora MySQL DB cluster in the us-east-1 Region. The database is 4 TB in size. The company needs to expand its disaster recovery strategy to the us-west-2 Region. The company must have the ability to fail over to us-west-2 with a recovery time objective (RTO) of 15 minutes.What should a solutions architect recommend to meet these requirements? A. Create a Multi-Region Aurora MySQL DB cluster in us-east-1 and us-west-2. Use an Amazon Route 53 health check to monitor us-east-1 and fail over to us-west-2 upon failure. B. Take a snapshot of the DB cluster in us-east-1. Configure an Amazon EventBridge (Amazon CloudWatch Events) rule that invokes an AWS Lambda function upon receipt of resource events. Configure the Lambda function to copy the snapshot to us-west-2 and restore the snapshot in us-west-2 when failure is detected. C. Create an AWS CloudFormation script to create another Aurora MySQL DB cluster in us-west-2 in case of failure. Configure an Amazon EventBridge (Amazon CloudWatch Events) rule that invokes an AWS Lambda function upon receipt of resource events. Configure the Lambda function to deploy the AWS CloudFormation stack in us-west-2 when failure is detected. D. Recreate the database as an Aurora global database with the primary DB cluster in us-east-1 and a secondary DB cluster in us-west-2. Configure an Amazon EventBridge (Amazon CloudWatch Events) rule that invokes an AWS Lambda function upon receipt of resource events. Configure the Lambda function to promote the DB cluster in us-west-2 when failure is detected.
D. https://docs.amazonaws.cn/en_us/AmazonRDS/latest/AuroraUserGuide/Concepts.AuroraHighAvailability.html For high availability across multiple Amazon Regions, you can set up Aurora global databases. Each Aurora global database spans multiple Amazon Regions, enabling low latency global reads and disaster recovery from outages across an Amazon Region. Aurora automatically handles replicating all data and updates from the primary Amazon Region to each of the secondary Regions. If your primary region suffers a performance degradation or outage, you can promote one of the secondary regions to take read/write responsibilities. An Aurora cluster can recover in less than 1 minute even in the event of a complete regional outage. This provides your application with an effective Recovery Point Objective (RPO) of 1 second and a Recovery Time Objective (RTO) of less than 1 minute, providing a strong foundation for a global business continuity plan. The only way to achieve multi-Region availability for Aurora within the 15-minute RTO is to use an Aurora global database.
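A minimal sketch of the Lambda that the EventBridge rule in option D would invoke (all identifiers are made up): detaching the us-west-2 secondary from the global cluster promotes it to a standalone read/write cluster.

import boto3

# Act in the DR Region; placeholder global cluster and secondary cluster identifiers.
rds = boto3.client("rds", region_name="us-west-2")

def handler(event, context):
    # Removing the secondary from the global cluster promotes it for read/write.
    rds.remove_from_global_cluster(
        GlobalClusterIdentifier="crm-global",
        DbClusterIdentifier="arn:aws:rds:us-west-2:111122223333:cluster:crm-secondary",
    )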
Question #205Topic 1 A company is planning to deploy an Amazon RDS DB instance running Amazon Aurora. The company has a backup retention policy requirement of 90 days.Which solution should a solutions architect recommend? A. Set the backup retention period to 90 days when creating the RDS DB instance. B. Configure RDS to copy automated snapshots to a user-managed Amazon S3 bucket with a lifecycle policy set to delete after 90 days. C. Create an AWS Backup plan to perform a daily snapshot of the RDS database with the retention set to 90 days. Create an AWS Backup job to schedule the execution of the backup plan daily. D. Use a daily scheduled event with Amazon CloudWatch Events to execute a custom AWS Lambda function that makes a copy of the RDS automated snapshot. Purge snapshots older than 90 days.
For me the answer is C. How could one store RDS automated snapshots in a user-managed S3 bucket? As far as I know, RDS stores manual and automated snapshots in its own managed S3 bucket, which is not visible to you, so you cannot set a lifecycle policy on it (ruling out B). A does not work either, because the automated backup retention period can be at most 35 days. With an AWS Backup plan, on the other hand, you can control how frequently to take a backup and how long to retain it. https://docs.aws.amazon.com/aws-backup/latest/devguide/how-it-works.html
Question #209Topic 1 A company has a web application with sporadic usage patterns. There is heavy usage at the beginning of each month, moderate usage at the start of each week, and unpredictable usage during the week. The application consists of a web server and a MySQL database server running inside the data center. The company would like to move the application to the AWS Cloud, and needs to select a cost-effective database platform that will not require database modifications.Which solution will meet these requirements? A. Amazon DynamoDB B. Amazon RDS for MySQL C. MySQL-compatible Amazon Aurora Serverless D. MySQL deployed on Amazon EC2 in an Auto Scaling group
For me it is C (B). From AWS Aurora Serverless: "It enables you to run your database in the cloud without managing any database instances. It's a simple, cost-effective option for infrequent, intermittent, or unpredictable workloads." https://aws.amazon.com/rds/aurora/serverless/ The question clearly states that usage is sporadic. Indeed, it is somewhat predictable because we do know when the traffic is low and when it increases. However, I think a serverless solution works better for this kind of workload, as it scales out and in only when it needs to.
Question #440Topic 1 A company is using a centralized AWS account to store log data in various Amazon S3 buckets. A solutions architect needs to ensure that the data is encrypted at rest before the data is uploaded to the S3 buckets. The data also must be encrypted in transit.Which solution meets these requirements? A. Use client-side encryption to encrypt the data that is being uploaded to the S3 buckets. B. Use server-side encryption to encrypt the data that is being uploaded to the S3 buckets. C. Create bucket policies that require the use of server-side encryption with S3 managed encryption keys (SSE-S3) for S3 uploads. D. Enable the security option to encrypt the S3 buckets through the use of a default AWS Key Management Service (AWS KMS) key.
It should be A. Client-side encryption means an object is encrypted BEFORE you upload it to S3, with keys that are not managed by AWS on the server side, and that is what this question asks for: data encrypted at rest before it is uploaded to the buckets.
Question #439Topic 1 A company is using an Amazon S3 bucket to store data uploaded by different departments from multiple locations. During an AWS Well-Architected review, the financial manager notices that 10 TB of S3 Standard storage data has been charged each month. However, in the AWS Management Console for Amazon S3, using the command to select all files and folders shows a total size of 5 TB.What are the possible causes for this difference? (Choose two.) A. Some files are stored with deduplication. B. The S3 bucket has versioning enabled. C. There are incomplete S3 multipart uploads. D. The S3 bucket has AWS Key Management Service (AWS KMS) enabled. E. The S3 bucket has Intelligent-Tiering enabled.
The answer is B and C. From "Multipart upload and pricing": After you initiate a multipart upload, Amazon S3 retains all the parts until you either complete or stop the upload. Throughout its lifetime, you are billed for all storage, bandwidth, and requests for this multipart upload and its associated parts. If you stop the multipart upload, Amazon S3 deletes upload artifacts and any parts that you have uploaded, and you are no longer billed for them. For more information about pricing, see Amazon S3 pricing. https://acloud.guru/forums/aws-certified-solutions-architect-associate/discussion/-Ka9czoG6ryzzuyr6PFp/s3_versioning_costs
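If B and C are indeed the cause, a lifecycle configuration like the following (bucket name is a placeholder) cleans up both sources of hidden storage: incomplete multipart uploads and old noncurrent versions.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="department-logs-example",
    LifecycleConfiguration={"Rules": [
        {
            "ID": "abort-incomplete-mpu",
            "Status": "Enabled",
            "Filter": {},
            # Parts of uploads never completed stop accruing charges after 7 days.
            "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
        },
        {
            "ID": "expire-old-versions",
            "Status": "Enabled",
            "Filter": {},
            # Noncurrent object versions are removed 30 days after being superseded.
            "NoncurrentVersionExpiration": {"NoncurrentDays": 30},
        },
    ]},
)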
Question #226Topic 1 A company's near-real-time streaming application is running on AWS. As the data is ingested, a job runs on the data and takes 30 minutes to complete. The workload frequently experiences high latency due to large amounts of incoming data. A solutions architect needs to design a scalable and serverless solution to enhance performance.Which combination of steps should the solutions architect take? (Choose two.) A. Use Amazon Kinesis Data Firehose to ingest the data. B. Use AWS Lambda with AWS Step Functions to process the data. C. Use AWS Database Migration Service (AWS DMS) to ingest the data. D. Use Amazon EC2 instances in an Auto Scaling group to process the data. E. Use AWS Fargate with Amazon Elastic Container Service (Amazon ECS) to process the data.
The solution is A and E. There are two ingestion options and three processing options. Since the workload is near-real-time, choose Amazon Kinesis Data Firehose for ingestion (A). For processing, we are left with B, D, and E. Lambda can run for at most 15 minutes and the job takes 30 minutes, so Lambda is out (https://aws.amazon.com/lambda/faqs/). That leaves D and E; both would work, but the question asks for serverless, hence E. https://aws.amazon.com/fargate/
Question #447Topic 1 A company is using AWS Organizations with two AWS accounts: Logistics and Sales. The Logistics account operates an Amazon Redshift cluster. The Sales account includes Amazon EC2 instances. The Sales account needs to access the Logistics account's Amazon Redshift cluster.What should a solutions architect recommend to meet this requirement MOST cost-effectively? A. Set up VPC sharing with the Logistics account as the owner and the Sales account as the participant to transfer the data. B. Create an AWS Lambda function in the Logistics account to transfer data to the Amazon EC2 instances in the Sales account. C. Create a snapshot of the Amazon Redshift cluster, and share the snapshot with the Sales account. In the Sales account, restore the cluster by using the snapshot ID that is shared by the Logistics account. D. Run COPY commands to load data from Amazon Redshift into Amazon S3 buckets in the Logistics account. Grant permissions to the Sales account to access the S3 buckets of the Logistics account.
A. VPC sharing is ideal for different accounts within the same organization, and it is the cost-optimized choice. VPC sharing allows multiple AWS accounts to create their application resources, such as Amazon EC2 instances, Amazon Relational Database Service (RDS) databases, Amazon Redshift clusters, and AWS Lambda functions, into shared, centrally-managed virtual private clouds (VPCs). In this model, the account that owns the VPC (owner) shares one or more subnets with other accounts (participants) that belong to the same organization from AWS Organizations. https://docs.aws.amazon.com/vpc/latest/userguide/vpc-sharing.html In a shared VPC, each participant pays for their application resources including Amazon EC2 instances, Amazon Relational Database Service databases, Amazon Redshift clusters, and AWS Lambda functions. Participants also pay for data transfer charges associated with inter-Availability Zone data transfer, data transfer over VPC peering connections, and data transfer through an AWS Direct Connect gateway.
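A small boto3 sketch of option A using AWS RAM (the subnet ARN and account ID are placeholders), run from the Logistics account that owns the VPC: share the subnet that hosts the Redshift cluster with the Sales account.

import boto3

ram = boto3.client("ram", region_name="us-east-1")

ram.create_resource_share(
    name="shared-redshift-subnet",
    # Placeholder ARN of a subnet in the Logistics VPC.
    resourceArns=["arn:aws:ec2:us-east-1:111122223333:subnet/subnet-0123456789abcdef0"],
    principals=["222233334444"],      # Sales account ID (placeholder)
    allowExternalPrincipals=False,    # restrict sharing to the same AWS Organization
)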
Question #445Topic 1 The following IAM policy is attached to an IAM group. This is the only policy applied to the group. (The policy document is shown at https://www.examtopics.com/exams/amazon/aws-certified-solutions-architect-associate-saa-c02/view/45/.) What are the effective IAM permissions of this policy for group members? A. Group members are permitted any Amazon EC2 action within the us-east-1 Region. Statements after the Allow permission are not applied. B. Group members are denied any Amazon EC2 permissions in the us-east-1 Region unless they are logged in with multi-factor authentication (MFA). C. Group members are allowed the ec2:StopInstances and ec2:TerminateInstances permissions for all Regions when logged in with multi-factor authentication (MFA). Group members are permitted any other Amazon EC2 action. D. Group members are allowed the ec2:StopInstances and ec2:TerminateInstances permissions for the us-east-1 Region only when logged in with multi-factor authentication (MFA). Group members are permitted any other Amazon EC2 action within the us-east-1 Region.
Question #430Topic 1 A company recently launched Linux-based application instances on Amazon EC2 in a private subnet and launched a Linux-based bastion host on an Amazon EC2 instance in a public subnet of a VPC. A solutions architect needs to connect from the on-premises network, through the company's internet connection, to the bastion host, and to the application servers. The solutions architect must make sure that the security groups of all the EC2 instances will allow that access.Which combination of steps should the solutions architect take to meet these requirements? (Choose two.) A. Replace the current security group of the bastion host with one that only allows inbound access from the application instances. B. Replace the current security group of the bastion host with one that only allows inbound access from the internal IP range for the company. C. Replace the current security group of the bastion host with one that only allows inbound access from the external IP range for the company. D. Replace the current security group of the application instances with one that allows inbound SSH access from only the private IP address of the bastion host. E. Replace the current security group of the application instances with one that allows inbound SSH access from only the public IP address of the bastion host.
The answer is C and D. Per D, the connection from the company to the application instances cannot be direct; you must first connect to the bastion host and then connect to the application servers, so the application instances only need to allow SSH from the bastion host's private IP address. The bastion host is in the same VPC, which already has routing to the private subnets, so there is no reason to allow access via the bastion's external IP. The bastion itself (C) should only allow inbound access from the company's external IP range.