SAA-C02-certInfo Topic 2

A company has a highly dynamic batch processing job that uses many Amazon EC2 instances to complete it. The job is stateless in nature, can be started and stopped at any given time with no negative impact, and typically takes upwards of 60 minutes total to complete. The company has asked a solutions architect to design a scalable and cost-effective solution that meets the requirements of the job.What should the solutions architect recommend? A. Implement EC2 Spot Instances. B. Purchase EC2 Reserved Instances. C. Implement EC2 On-Demand Instances. D. Implement the processing on AWS Lambda.

A

A company has a hybrid application hosted on multiple on-premises servers with static IP addresses. There is already a VPN that provides connectivity between the VPC and the on-premises network. The company wants to distribute TCP traffic across the on-premises servers for internet users.What should a solutions architect recommend to provide a highly available and scalable solution? A. Launch an internet-facing Network Load Balancer (NLB) and register on-premises IP addresses with the NLB. B. Launch an internet-facing Application Load Balancer (ALB) and register on-premises IP addresses with the ALB. C. Launch an Amazon EC2 instance, attach an Elastic IP address, and distribute traffic to the on-premises servers. D. Launch an Amazon EC2 instance with public IP addresses in an Auto Scaling group and distribute traffic to the on-premises servers.

A

A company has an application that is hosted on Amazon EC2 instances in two private subnets. A solutions architect must make the application available on the public internet with the least amount of administrative effort.What should the solutions architect recommend? A. Create a load balancer and associate two public subnets from the same Availability Zones as the private instances. Add the private instances to the load balancer. B. Create a load balancer and associate two private subnets from the same Availability Zones as the private instances. Add the private instances to the load balancer. C. Create an Amazon Machine Image (AMI) of the instances in the private subnet and restore in the public subnet. Create a load balancer and associate two public subnets from the same Availability Zones as the public instances. D. Create an Amazon Machine Image (AMI) of the instances in the private subnet and restore in the public subnet. Create a load balancer and associate two private subnets from the same Availability Zones as the public instances.

A

A company hosts its application in the AWS Cloud. The application runs on Amazon EC2 instances behind an Elastic Load Balancer in an Auto Scaling group and with an Amazon DynamoDB table. The company wants to ensure the application can be made available in another AWS Region with minimal downtime.What should a solutions architect do to meet these requirements with the LEAST amount of downtime? A. Create an Auto Scaling group and a load balancer in the disaster recovery Region. Configure the DynamoDB table as a global table. Configure DNS failover to point to the new disaster recovery Region's load balancer. B. Create an AWS CloudFormation template to create EC2 instances, load balancers, and DynamoDB tables to be executed when needed. Configure DNS failover to point to the new disaster recovery Region's load balancer. C. Create an AWS CloudFormation template to create EC2 instances and a load balancer to be executed when needed. Configure the DynamoDB table as a global table. Configure DNS failover to point to the new disaster recovery Region's load balancer. D. Create an Auto Scaling group and load balancer in the disaster recovery Region. Configure the DynamoDB table as a global table. Create an Amazon CloudWatch alarm to trigger and AWS Lambda function that updates Amazon Route 53 pointing to the disaster recovery load balancer

A

A company is deploying a multi-instance application within AWS that requires minimal latency between the instances.What should a solutions architect recommend? A. Use an Auto Scaling group with a cluster placement group. B. Use an Auto Scaling group with single Availability Zone in the same AWS Region. C. Use an Auto Scaling group with multiple Availability Zones in the same AWS Region. D. Use a Network Load Balancer with multiple Amazon EC2 Dedicated Hosts as the targets.

A

A company is designing a new service that will run on Amazon EC2 instance behind an Elastic Load Balancer. However, many of the web service clients can only reach IP addresses whitelisted on their firewalls.What should a solution architect recommend to meet the clients' needs? A. A Network Load Balancer with an associated Elastic IP address. B. An Application Load Balancer with an associated Elastic IP address C. An A record in an Amazon Route 53 hosted zone pointing to an Elastic IP address D. An EC2 instance with a public IP address running as a proxy in front of the load balancer

A

A company is moving its on-premises applications to Amazon EC2 instances. However, as a result of fluctuating compute requirements, the EC2 instances must always be ready to use between 8 AM and 5 PM in specific Availability Zones.Which EC2 instances should the company choose to run the applications? A. Scheduled Reserved Instances B. On-Demand Instances C. Spot Instances as part of a Spot Fleet D. EC2 instances in an Auto Scaling group

A

A company uses Amazon S3 to store its confidential audit documents. The S3 bucket uses bucket policies to restrict access to audit team IAM user credentials according to the principle of least privilege. Company managers are worried about accidental deletion of documents in the S3 bucket and want a more secure solution.What should a solutions architect do to secure the audit documents? A. Enable the versioning and MFA Delete features on the S3 bucket. B. Enable multi-factor authentication (MFA) on the IAM user credentials for each audit team IAM user account. C. Add an S3 Lifecycle policy to the audit team's IAM user accounts to deny the s3:DeleteObject action during audit dates. D. Use AWS Key Management Service (AWS KMS) to encrypt the S3 bucket and restrict audit team IAM user accounts from accessing the KMS key.

A

An application running on an Amazon EC2 instance needs to access an Amazon DynamoDB table. Both the EC2 instance and the DynamoDB table are in the same AWS account. A solutions architect must configure the necessary permissions.Which solution will allow least privilege access to the DynamoDB table from the EC2 instance? A. Create an IAM role with the appropriate policy to allow access to the DynamoDB table. Create an instance profile to assign this IAM role to the EC2 instance. B. Create an IAM role with the appropriate policy to allow access to the DynamoDB table. Add the EC2 instance to the trust relationship policy document to allow it to assume the role. C. Create an IAM user with the appropriate policy to allow access to the DynamoDB table. Store the credentials in an Amazon S3 bucket and read them from within the application code directly. D. Create an IAM user with the appropriate policy to allow access to the DynamoDB table. Ensure that the application stores the IAM credentials securely on local storage and uses them to make the DynamoDB calls.

A

A company uses a legacy on-premises analytics application that operates on gigabytes of .csv files and represents months of data. The legacy application cannot handle the growing size of .csv files. New .csv files are added daily from various data sources to a central on-premises storage location. The company wants to continue to support the legacy application while users learn AWS analytics services. To achieve this, a solutions architect wants to maintain two synchronized copies of all the .csv files on-premises and in Amazon S3.Which solution should the solutions architect recommend? A. Deploy AWS DataSync on-premises. Configure DataSync to continuously replicate the .csv files between the company's on-premises storage and the company's S3 bucket. B. Deploy an on-premises file gateway. Configure data sources to write the .csv files to the file gateway. Point the legacy analytics application to the file gateway. The file gateway should replicate the .csv files to Amazon S3. C. Deploy an on-premises volume gateway. Configure data sources to write the .csv files to the volume gateway. Point the legacy analytics application to the volume gateway. The volume gateway should replicate data to Amazon S3. D. Deploy AWS DataSync on-premises. Configure DataSync to continuously replicate the .csv files between on-premises and Amazon Elastic File System (Amazon EFS). Enable replication from Amazon EFS to the company's S3 bucket.

A. The answer is not B because a file gateway does not maintain two separate copies of the files on premises and in Amazon S3. A file gateway stores the entire data set in S3 and only caches frequently used files locally; it exposes S3 over the NFS and SMB protocols, effectively creating an on-premises network file system whose data lives in the cloud. The question, however, asks for two synchronized copies: one on premises and one in S3. DataSync fits this exactly, because it replicates data from the central on-premises storage location (NAS) directly to S3. Answer is A. AWS Storage Gateway vs. AWS DataSync: DataSync is for moving data, while Storage Gateway is for extending and scaling on-premises storage into the cloud. One is optimized data movement; the other suits hybrid architectures. AWS DataSync is ideal for online data transfers: you can use it to migrate active data to AWS, transfer data to the cloud for analysis and processing, archive data to free up on-premises storage capacity, or replicate data to AWS for business continuity. AWS Storage Gateway is a hybrid cloud storage service that gives you on-premises access to virtually unlimited cloud storage. The two can be combined: use AWS DataSync to migrate existing data to Amazon S3, and then use the File Gateway configuration of AWS Storage Gateway to retain access to the migrated data and to ongoing updates from your on-premises file-based applications.
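
For reference, a minimal boto3 sketch of how the DataSync setup in option A might be wired up. This is an illustration only; the hostname, share path, credentials, agent ARN, bucket name, role ARN, and schedule are hypothetical placeholders, not values from the question.

import boto3

datasync = boto3.client("datasync", region_name="us-east-1")

# Source: the central on-premises SMB share, reached through a deployed DataSync agent
src = datasync.create_location_smb(
    ServerHostname="fileserver.corp.example.com",   # hypothetical
    Subdirectory="/csv-exports",
    User="svc-datasync",
    Password="********",
    AgentArns=["arn:aws:datasync:us-east-1:111122223333:agent/agent-0example"],
)

# Destination: the S3 bucket that holds the second, synchronized copy
dst = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::analytics-csv-bucket",  # hypothetical
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::111122223333:role/DataSyncS3Role"},
)

# Task that copies new and changed .csv files on a recurring schedule
datasync.create_task(
    SourceLocationArn=src["LocationArn"],
    DestinationLocationArn=dst["LocationArn"],
    Name="csv-replication",
    Schedule={"ScheduleExpression": "rate(1 hour)"},
)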

A company is building a website that relies on reading and writing to an Amazon DynamoDB database. The traffic associated with the website predictably peaks during business hours on weekdays and declines overnight and during weekends. A solutions architect needs to design a cost-effective solution that can handle the load.What should the solutions architect do to meet these requirements? A. Enable DynamoDB Accelerator (DAX) to cache the data. B. Enable Multi-AZ replication for the DynamoDB database. C. Enable DynamoDB auto scaling when creating the tables. D. Enable DynamoDB On-Demand capacity allocation when creating the tables.

A - DAX is a cache for read-intensive applications; this application is read/write. B - Multi-AZ replication increases availability but does not help with a variable yet predictable daily load. C - Auto scaling is the answer: per the documentation, it is designed for cases where the morning peak load differs from the load during regular hours. https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/AutoScaling.html D - On-demand capacity is for unpredictable workloads; the workload here is predictable, so although on-demand would work, the documentation points to C. https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html Answer is C. The link below compares the cost of DynamoDB on-demand vs. auto scaling: auto scaling is cheaper for predictable fluctuations, while on-demand is used when capacity planning is hard or when the no-ops benefit matters to the company. https://aws.amazon.com/blogs/database/amazon-dynamodb-auto-scaling-performance-and-cost-optimization-at-any-scale/
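
As an illustration of option C, here is a minimal boto3 sketch that registers a table's read capacity with Application Auto Scaling and attaches a target tracking policy. The table name and capacity limits are hypothetical; the same two calls would be repeated for WriteCapacityUnits.

import boto3

aas = boto3.client("application-autoscaling")

# Register the table's read capacity as a scalable target
aas.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/WebsiteTraffic",          # hypothetical table name
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    MinCapacity=5,                               # overnight/weekend floor
    MaxCapacity=500,                             # weekday business-hours ceiling
)

# Track 70% consumed read capacity; DynamoDB scales RCUs up and down automatically
aas.put_scaling_policy(
    ServiceNamespace="dynamodb",
    ResourceId="table/WebsiteTraffic",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    PolicyName="read-capacity-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
        },
    },
)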

A company runs a high performance computing (HPC) workload on AWS. The workload requires low-latency network performance and high network throughput with tightly coupled node-to-node communication. The Amazon EC2 instances are properly sized for compute and storage capacity, and are launched using default options. What should a solutions architect propose to improve the performance of the workload? A. Choose a cluster placement group while launching Amazon EC2 instances. B. Choose dedicated instance tenancy while launching Amazon EC2 instances. C. Choose an Elastic Inference accelerator while launching Amazon EC2 instances. D. Choose the required capacity reservation while launching Amazon EC2 instances.

A Link: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html Summary: Cluster - packs instances close together inside an Availability Zone. This strategy enables workloads to achieve the low-latency network performance necessary for tightly-coupled node-to-node communication that is typical of HPC applications.
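
For illustration, a minimal boto3 sketch of launching instances into a cluster placement group. The group name, AMI ID, instance type, and count are hypothetical placeholders.

import boto3

ec2 = boto3.client("ec2")

# Create a cluster placement group (packs instances close together in one AZ)
ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")

# Launch the HPC nodes into the placement group
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI
    InstanceType="c5n.18xlarge",       # network-optimized instance type
    MinCount=4,
    MaxCount=4,
    Placement={"GroupName": "hpc-cluster"},
)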

A user wants to list the IAM role that is attached to their Amazon EC2 instance. The user has login access to the EC2 instance but does not have IAM permissions.What should a solutions architect do to retrieve this information? A. Run the following EC2 command: curl http://169.254.169.254/latest/meta-data/iam/info B. Run the following EC2 command: curl http://169.254.169.254/latest/user-data/iam/info C. Run the following EC2 command: http://169.254.169.254/latest/dynamic/instance-identity/ D. Run the following AWS CLI command: aws iam get-instance-profile --instance-profile-name ExampleInstanceProfile

A https://docs.aws.amazon.com/IAM/latest/UserGuide/troubleshoot_iam-ec2.html
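
If the instance enforces IMDSv2, the same metadata path is queried with a session token first. A minimal Python sketch using only the standard library (no IAM permissions are needed because this is the instance metadata service):

import urllib.request

# IMDSv2: request a session token, then query the IAM info document
token_req = urllib.request.Request(
    "http://169.254.169.254/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(token_req).read().decode()

info_req = urllib.request.Request(
    "http://169.254.169.254/latest/meta-data/iam/info",
    headers={"X-aws-ec2-metadata-token": token},
)
print(urllib.request.urlopen(info_req).read().decode())  # includes InstanceProfileArn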

A company is developing a mobile game that streams score updates to a backend processor and then posts results on a leaderboard. A solutions architect needs to design a solution that can handle large traffic spikes, process the mobile game updates in order of receipt, and store the processed updates in a highly available database. The company also wants to minimize the management overhead required to maintain the solution.What should the solutions architect do to meet these requirements? A. Push score updates to Amazon Kinesis Data Streams. Process the updates in Kinesis Data Streams with AWS Lambda. Store the processed updates in Amazon DynamoDB. B. Push score updates to Amazon Kinesis Data Streams. Process the updates with a fleet of Amazon EC2 instances set up for Auto Scaling. Store the processed updates in Amazon Redshift. C. Push score updates to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe an AWS Lambda function to the SNS topic to process the updates. Store the processed updates in a SQL database running on Amazon EC2. D. Push score updates to an Amazon Simple Queue Service (Amazon SQS) queue. Use a fleet of Amazon EC2 instances with Auto Scaling to process the updates in the SQS queue. Store the processed updates in an Amazon RDS Multi-AZ DB instance.

A is the winner - https://docs.aws.amazon.com/lambda/latest/dg/with-kinesis.html. D is out because a standard SQS queue with an Auto Scaling EC2 fleet does not guarantee that updates are processed in order of receipt (the FIFO requirement) and adds more management overhead.
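
To make option A concrete, a minimal sketch of a Lambda handler for a Kinesis event source that writes processed score updates to DynamoDB. The table name and payload fields (player_id, timestamp, score) are hypothetical.

import base64
import json
import boto3

table = boto3.resource("dynamodb").Table("Leaderboard")  # hypothetical table name

def handler(event, context):
    # Records within a shard arrive in order; Lambda processes each batch sequentially
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        table.put_item(Item={
            "player_id": payload["player_id"],   # hypothetical payload fields
            "timestamp": payload["timestamp"],
            "score": payload["score"],
        })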

A website runs a web application that receives a burst of traffic each day at noon. The users upload new pictures and content daily, but have been complaining of timeouts. The architecture uses Amazon EC2 Auto Scaling groups, and the custom application consistently takes 1 minute to initiate upon boot up before responding to user requests.How should a solutions architect redesign the architecture to better respond to changing traffic? A. Configure a Network Load Balancer with a slow start configuration. B. Configure AWS ElastiCache for Redis to offload direct requests to the servers. C. Configure an Auto Scaling step scaling policy with an instance warmup condition. D. Configure Amazon CloudFront to use an Application Load Balancer as the origin.

A is out: this is HTTP/S traffic from a web application, and a Network Load Balancer has no slow start configuration (slow start exists on ALB target groups). B is out: the workload is upload activity, which caching would not address. That leaves C and D. The problem is that clients receive timeouts because newly launched EC2 instances are not yet responsive. D would add an ALB with health checks, but why CloudFront? The question does not ask about latency. C is correct: while an instance is warming up, it is not counted toward the group's current capacity, so the group does not add more instances than needed. The warmup period protects the application's availability; it should cover the expected startup time of the application, from when a new instance comes into service to when it can receive traffic.
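
For illustration, a minimal boto3 sketch of a step scaling policy with an instance warmup that matches the application's 1-minute startup time. The Auto Scaling group name, policy name, and step thresholds are hypothetical.

import boto3

autoscaling = boto3.client("autoscaling")

# Step scaling policy with a 60-second warmup so new instances are not counted
# toward capacity (and no extra instances are launched) until the app has booted
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",            # hypothetical ASG name
    PolicyName="scale-out-on-cpu",
    PolicyType="StepScaling",
    AdjustmentType="ChangeInCapacity",
    EstimatedInstanceWarmup=60,                # matches the 1-minute startup time
    StepAdjustments=[
        {"MetricIntervalLowerBound": 0, "MetricIntervalUpperBound": 20, "ScalingAdjustment": 1},
        {"MetricIntervalLowerBound": 20, "ScalingAdjustment": 2},
    ],
)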

A company runs a static website through its on-premises data center. The company has multiple servers that handle all of its traffic, but on busy days, services are interrupted and the website becomes unavailable. The company wants to expand its presence globally and plans to triple its website traffic.What should a solutions architect recommend to meet these requirements? A. Migrate the website content to Amazon S3 and host the website on Amazon CloudFront. B. Migrate the website content to Amazon EC2 instances with public Elastic IP addresses in multiple AWS Regions. C. Migrate the website content to Amazon EC2 instances and vertically scale as the load increases. D. Use Amazon Route 53 to distribute the loads across multiple Amazon CloudFront distributions for each AWS Region that exists globally.

A is possible, per https://medium.com/@kyle.galbraith/how-to-host-a-website-on-s3-without-getting-lost-in-the-sea-e2b82aa6cd38. A explicitly migrates the static site, whereas D does not: D never says where the static content is stored (EC2? S3?), and a site cannot be served from Route 53 or a CloudFront distribution alone.

A company is developing an ecommerce application that will consist of a load-balanced front end, a container-based application, and a relational database. A solutions architect needs to create a highly available solution that operates with as little manual intervention as possible.Which solutions meet these requirements? (Choose two.) A. Create an Amazon RDS DB instance in Multi-AZ mode. B. Create an Amazon RDS DB instance and one or more replicas in another Availability Zone. C. Create an Amazon EC2 instance-based Docker cluster to handle the dynamic application load. D. Create an Amazon Elastic Container Service (Amazon ECS) cluster with a Fargate launch type to handle the dynamic application load. E. Create an Amazon Elastic Container Service (Amazon ECS) cluster with an Amazon EC2 launch type to handle the dynamic application load.

A&D. https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html 1. Relational database: RDS 2. Container-based applications: ECS "Amazon ECS enables you to launch and stop your container-based applications by using simple API calls. You can also retrieve the state of your cluster from a centralized service and have access to many familiar Amazon EC2 features." 3. Little manual intervention: Fargate You can run your tasks and services on a serverless infrastructure that is managed by AWS Fargate. Alternatively, for more control over your infrastructure, you can run your tasks and services on a cluster of Amazon EC2 instances that you manage.

A company has a web server running on an Amazon EC2 instance in a public subnet with an Elastic IP address. The default security group is assigned to the EC2 instance. The default network ACL has been modified to block all traffic. A solutions architect needs to make the web server accessible from everywhere on port 443. Which combination of steps will accomplish this task? (Choose two.) A. Create a security group with a rule to allow TCP port 443 from source 0.0.0.0/0. B. Create a security group with a rule to allow TCP port 443 to destination 0.0.0.0/0. C. Update the network ACL to allow TCP port 443 from source 0.0.0.0/0. D. Update the network ACL to allow inbound/outbound TCP port 443 from source 0.0.0.0/0 and to destination 0.0.0.0/0. E. Update the network ACL to allow inbound TCP port 443 from source 0.0.0.0/0 and outbound TCP port 32768-65535 to destination 0.0.0.0/0.

A&E https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html#nacl-ephemeral-ports In practice, to cover the different types of clients that might initiate traffic to public-facing instances in your VPC, you can open ephemeral ports 1024-65535. However, you can also add rules to the ACL to deny traffic on any malicious ports within that range. Ensure that you place the deny rules earlier in the table than the allow rules that open the wide range of ephemeral ports.
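
A minimal boto3 sketch of the two network ACL entries in option E (the NACL ID and rule numbers are hypothetical; the matching security group rule from option A is created separately):

import boto3

ec2 = boto3.client("ec2")
acl_id = "acl-0123456789abcdef0"   # hypothetical network ACL ID

# Inbound: allow HTTPS from anywhere
ec2.create_network_acl_entry(
    NetworkAclId=acl_id, RuleNumber=100, Protocol="6", RuleAction="allow",
    Egress=False, CidrBlock="0.0.0.0/0", PortRange={"From": 443, "To": 443},
)

# Outbound: allow responses on ephemeral ports (network ACLs are stateless)
ec2.create_network_acl_entry(
    NetworkAclId=acl_id, RuleNumber=100, Protocol="6", RuleAction="allow",
    Egress=True, CidrBlock="0.0.0.0/0", PortRange={"From": 32768, "To": 65535},
)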

A company receives inconsistent service from its data center provider because the company is headquartered in an area affected by natural disasters. The company is not ready to fully migrate to the AWS Cloud, but it wants a failover environment on AWS in case the on-premises data center fails. The company runs web servers that connect to external vendors. The data available on AWS and on premises must be uniform. Which solution should a solutions architect recommend that has the LEAST amount of downtime? A. Configure an Amazon Route 53 failover record. Run application servers on Amazon EC2 instances behind an Application Load Balancer in an Auto Scaling group. Set up AWS Storage Gateway with stored volumes to back up data to Amazon S3. B. Configure an Amazon Route 53 failover record. Execute an AWS CloudFormation template from a script to create Amazon EC2 instances behind an Application Load Balancer. Set up AWS Storage Gateway with stored volumes to back up data to Amazon S3. C. Configure an Amazon Route 53 failover record. Set up an AWS Direct Connect connection between a VPC and the data center. Run application servers on Amazon EC2 in an Auto Scaling group. Run an AWS Lambda function to execute an AWS CloudFormation template to create an Application Load Balancer. D. Configure an Amazon Route 53 failover record. Run an AWS Lambda function to execute an AWS CloudFormation template to launch two Amazon EC2 instances. Set up AWS Storage Gateway with stored volumes to back up data to Amazon S3. Set up an AWS Direct Connect connection between a VPC and the data center.

A. Configure an Amazon Route 53 failover record. Run application servers on Amazon EC2 instances behind an Application Load Balancer in an Auto Scaling group. Set up AWS Storage Gateway with stored volumes to back up data to Amazon S3.

A solutions architect is designing a two-tier web application. The application consists of a public-facing web tier hosted on Amazon EC2 in public subnets. The database tier consists of Microsoft SQL Server running on Amazon EC2 in a private subnet. Security is a high priority for the company.How should security groups be configured in this situation? (Choose two.) A. Configure the security group for the web tier to allow inbound traffic on port 443 from 0.0.0.0/0. B. Configure the security group for the web tier to allow outbound traffic on port 443 from 0.0.0.0/0. C. Configure the security group for the database tier to allow inbound traffic on port 1433 from the security group for the web tier. D. Configure the security group for the database tier to allow outbound traffic on ports 443 and 1433 to the security group for the web tier. E. Configure the security group for the database tier to allow inbound traffic on ports 443 and 1433 from the security group for the web tier.

A/C

A company hosts an online shopping application that stores all orders in an Amazon RDS for PostgreSQL Single-AZ DB instance. Management wants to eliminate single points of failure and has asked a solutions architect to recommend an approach to minimize database downtime without requiring any changes to the application code.Which solution meets these requirements? A. Convert the existing database instance to a Multi-AZ deployment by modifying the database instance and specifying the Multi-AZ option. B. Create a new RDS Multi-AZ deployment. Take a snapshot of the current RDS instance and restore the new Multi-AZ deployment with the snapshot. C. Create a read-only replica of the PostgreSQL database in another Availability Zone. Use Amazon Route 53 weighted record sets to distribute requests across the databases. D. Place the RDS for PostgreSQL database in an Amazon EC2 Auto Scaling group with a minimum group size of two. Use Amazon Route 53 weighted record sets to distribute requests across instances.

Answer should be A. You only need to perform the step in A; Amazon RDS then performs the steps described in B behind the scenes. https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html Some argue for B on the grounds that A causes downtime, and that C and D are out (a read replica would have to be promoted manually, and an Auto Scaling group does not protect a database across AZs). Per the documentation on modifying a DB instance to be a Multi-AZ deployment: if you have a DB instance in a Single-AZ deployment and modify it to a Multi-AZ deployment (for engines other than Amazon Aurora), Amazon RDS takes a snapshot of the primary DB instance, restores the snapshot into another Availability Zone, and then sets up synchronous replication between the primary DB instance and the new standby. A is correct; the claim that it causes downtime is wrong. From the RDS FAQs: "Q: What happens when I convert my RDS instance from Single-AZ to Multi-AZ? For the RDS for MySQL, MariaDB, PostgreSQL and Oracle database engines, the following happens: a snapshot of your primary instance is taken; a new standby instance is created in a different Availability Zone from the snapshot; synchronous replication is configured between primary and standby instances. As such, there should be no downtime incurred when an instance is converted from Single-AZ to Multi-AZ. However, you may see increased latency while the data on the standby is caught up to match the primary."
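
A minimal boto3 sketch of the in-place conversion described in option A (the DB instance identifier is hypothetical):

import boto3

rds = boto3.client("rds")

# Convert the existing Single-AZ instance to Multi-AZ in place; RDS snapshots the
# primary, builds a standby in another AZ, and sets up synchronous replication
rds.modify_db_instance(
    DBInstanceIdentifier="orders-db",   # hypothetical instance identifier
    MultiAZ=True,
    ApplyImmediately=True,              # otherwise applied in the next maintenance window
)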

A company collects temperature, humidity, and atmospheric pressure data in cities across multiple continents. The average volume of data collected per site each day is 500 GB. Each site has a high-speed internet connection. The company's weather forecasting applications are based in a single Region and analyze the data daily. What is the FASTEST way to aggregate data for all of these global sites? A. Enable Amazon S3 Transfer Acceleration on the destination bucket. Use multipart uploads to directly upload site data to the destination bucket. B. Upload site data to an Amazon S3 bucket in the closest AWS Region. Use S3 cross-Region replication to copy objects to the destination bucket. C. Schedule AWS Snowball jobs daily to transfer data to the closest AWS Region. Use S3 cross-Region replication to copy objects to the destination bucket. D. Upload the data to an Amazon EC2 instance in the closest Region. Store the data in an Amazon EBS volume. Once a day, take an EBS snapshot and copy it to the centralized Region. Restore the EBS volume in the centralized Region and run an analysis on the data daily.

Ans - A. Note that Transfer Acceleration takes advantage of Amazon CloudFront's globally distributed edge locations, and multipart upload helps make it the fastest option. Why use Amazon S3 Transfer Acceleration? (https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration.html) You might want to use Transfer Acceleration on a bucket for various reasons, including the following: 1. You have customers that upload to a centralized bucket from all over the world. 2. You transfer gigabytes to terabytes of data on a regular basis across continents. 3. You are unable to utilize all of your available bandwidth over the Internet when uploading to Amazon S3.
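
For illustration, a minimal boto3 sketch of option A: enable Transfer Acceleration on the destination bucket, then upload through the accelerate endpoint using multipart chunks. The bucket name, file name, key, and chunk size are hypothetical.

import boto3
from botocore.config import Config
from boto3.s3.transfer import TransferConfig

# Enable Transfer Acceleration on the destination bucket (one-time setup)
boto3.client("s3").put_bucket_accelerate_configuration(
    Bucket="global-weather-data",                       # hypothetical bucket
    AccelerateConfiguration={"Status": "Enabled"},
)

# Upload through the accelerate endpoint using multipart chunks
s3 = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3.upload_file(
    Filename="site-data-2023-01-01.tar.gz",
    Bucket="global-weather-data",
    Key="sites/sydney/2023-01-01.tar.gz",
    Config=TransferConfig(multipart_threshold=64 * 1024 * 1024,
                          multipart_chunksize=64 * 1024 * 1024),
)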

A company is backing up on-premises databases to local file server shares using the SMB protocol. The company requires immediate access to 1 week of backup files to meet recovery objectives. Recovery after a week is less likely to occur, and the company can tolerate a delay in accessing those older backup files.What should a solutions architect do to meet these requirements with the LEAST operational effort? A. Deploy Amazon FSx for Windows File Server to create a file system with exposed file shares with sufficient storage to hold all the desired backups. B. Deploy an AWS Storage Gateway file gateway with sufficient storage to hold 1 week of backups. Point the backups to SMB shares from the file gateway. C. Deploy Amazon Elastic File System (Amazon EFS) to create a file system with exposed NFS shares with sufficient storage to hold all the desired backups. D. Continue to back up to the existing file shares. Deploy AWS Database Migration Service (AWS DMS) and define a copy task to copy backup files older than 1 week to Amazon S3, and delete the backup files from the local file store.

Ans B is correct. A file gateway presents SMB (and NFS) shares to the on-premises backup servers, so the database backup files (for example .bak files) are written to the share as they are today. The gateway's local cache, sized to hold one week of backups, gives immediate access to recent files, while every backup is stored durably in Amazon S3, so older backups remain available with a tolerable retrieval delay and little operational effort.

A company has media and application files that need to be shared internally. Users currently are authenticated using Active Directory and access files from a Microsoft Windows platform. The chief executive officer wants to keep the same user permissions, but wants the company to improve the process as the company is reaching its storage capacity limit. What should a solutions architect recommend? A. Set up a corporate Amazon S3 bucket and move all media and application files. B. Configure Amazon FSx for Windows File Server and move all the media and application files. C. Configure Amazon Elastic File System (Amazon EFS) and move all media and application files. D. Set up Amazon EC2 on Windows, attach multiple Amazon Elastic Block Store (Amazon EBS) volumes, and move all media and application files.

B

A company has data stored in an on-premises data center that is used by several on-premises applications. The company wants to maintain its existing application environment and be able to use AWS services for data analytics and future visualizations.Which storage service should a solutions architect recommend? A. Amazon Redshift B. AWS Storage Gateway for files C. Amazon Elastic Block Store (Amazon EBS) D. Amazon Elastic File System (Amazon EFS)

Ans: B. File Gateway is a great choice for various hybrid cloud workloads. For example, if your company does a lot of big data analytics but relies on both on-premises infrastructure and the AWS Cloud, File Gateway makes it easy to move the data to S3 and ingest it into services such as EMR or Athena. The resulting data can be stored in S3 as well, which keeps it visible to your on-premises applications, something that can be used further for business intelligence.

A company wants to use an AWS Region as a disaster recovery location for its on-premises infrastructure. The company has 10 TB of existing data, and the on-premises data center has a 1 Gbps internet connection. A solutions architect must find a solution so the company can have its existing data on AWS in 72 hours without transmitting it using an unencrypted channel. Which solution should the solutions architect select? A. Send the initial 10 TB of data to AWS using FTP. B. Send the initial 10 TB of data to AWS using AWS Snowball. C. Establish a VPN connection between Amazon VPC and the company's data center. D. Establish an AWS Direct Connect connection between Amazon VPC and the company's data center.

Ans: C. Although Direct Connect (D) would normally be the best choice, when you request a port it can take up to 72 hours for AWS to review the request and provision the connection, so it cannot be relied on within the deadline. Hence C: a VPN is encrypted, and transferring 10 TB over a 1 Gbps link takes roughly a day, comfortably inside 72 hours. Not B: the end-to-end Snowball process takes about a week. Not D: https://docs.aws.amazon.com/directconnect/latest/UserGuide/getting_started.html#CreateConnection Not B: https://aws.amazon.com/snowball/faqs/#:~:text=The%20end%2Dto%2Dend%20time,time%20in%20AWS%20data%20centers.

A company's website hosted on Amazon EC2 instances processes classified data stored in Amazon S3. Due to security concerns, the company requires a private and secure connection between its EC2 resources and Amazon S3.Which solution meets these requirements? A. Set up S3 bucket policies to allow access from a VPC endpoint. B. Set up an IAM policy to grant read-write access to the S3 bucket. C. Set up a NAT gateway to access resources outside the private subnet. D. Set up an access key ID and a secret access key to access the S3 bucket.

Ans: A https://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies-vpc-endpoint.html
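
Following the pattern in the linked documentation, a minimal boto3 sketch of a bucket policy that denies any access not arriving through the VPC endpoint. The bucket name and endpoint ID are hypothetical placeholders.

import json
import boto3

# Deny all S3 actions on the bucket unless the request comes through the VPC endpoint
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowOnlyFromVpcEndpoint",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": ["arn:aws:s3:::classified-data",        # hypothetical bucket
                     "arn:aws:s3:::classified-data/*"],
        "Condition": {"StringNotEquals": {"aws:sourceVpce": "vpce-0123456789abcdef0"}},
    }],
}

boto3.client("s3").put_bucket_policy(Bucket="classified-data", Policy=json.dumps(policy))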

A solutions architect is designing a VPC with public and private subnets. The VPC and subnets use IPv4 CIDR blocks. There is one public subnet and one private subnet in each of three Availability Zones (AZs) for high availability. An internet gateway is used to provide internet access for the public subnets. The private subnets require access to the internet to allow Amazon EC2 instances to download software updates.What should the solutions architect do to enable internet access for the private subnets? A. Create three NAT gateways, one for each public subnet in each AZ. Create a private route table for each AZ that forwards non-VPC traffic to the NAT gateway in its AZ. B. Create three NAT instances, one for each private subnet in each AZ. Create a private route table for each AZ that forwards non-VPC traffic to the NAT instance in its AZ. C. Create a second internet gateway on one of the private subnets. Update the route table for the private subnets that forward non-VPC traffic to the private internet gateway. D. Create an egress-only internet gateway on one of the public subnets. Update the route table for the private subnets that forward non-VPC traffic to the egress- only internet gateway.

Ans: A https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-comparison.html

A company runs an application on an Amazon EC2 instance backed by Amazon Elastic Block Store (Amazon EBS). The instance needs to be available for 12 hours daily. The company wants to save costs by making the instance unavailable outside the window required for the application. However, the contents of the instance's memory must be preserved whenever the instance is unavailable.What should a solutions architect do to meet this requirement? A. Stop the instance outside the application's availability window. Start up the instance again when required. B. Hibernate the instance outside the application's availability window. Start up the instance again when required. C. Use Auto Scaling to scale down the instance outside the application's availability window. Scale up the instance when required. D. Terminate the instance outside the application's availability window. Launch the instance by using a preconfigured Amazon Machine Image (AMI) when required.

Ans: B https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Hibernate.html 1. Hibernate: To preserve contents of the instance's memory whenever the instance is unavailable. https://aws.amazon.com/blogs/aws/new-hibernate-your-ec2-instances/ 2. Cost consideration: While the instance is in hibernation, you pay only for the EBS volumes and Elastic IP Addresses attached to it; there are no other hourly charges (just like any other stopped instance).
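
A minimal boto3 sketch of option B: hibernation is enabled at launch (an encrypted EBS root volume is required), then the instance is hibernated outside the availability window and started again when needed. AMI ID, instance type, and instance ID are hypothetical.

import boto3

ec2 = boto3.client("ec2")

# Hibernation must be enabled when the instance is launched
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",       # hypothetical, hibernation-supported AMI
    InstanceType="m5.large",
    MinCount=1, MaxCount=1,
    HibernationOptions={"Configured": True},
)

# Outside the availability window: stop with hibernation so RAM is saved to the EBS root volume
ec2.stop_instances(InstanceIds=["i-0123456789abcdef0"], Hibernate=True)

# When the window opens again: a normal start restores the memory contents
ec2.start_instances(InstanceIds=["i-0123456789abcdef0"])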

A company has two applications: a sender application that sends messages with payloads to be processed and a processing application intended to receive messages with payloads. The company wants to implement an AWS service to handle messages between the two applications. The sender application can send about 1,000 messages each hour. The messages may take up to 2 days to be processed. If the messages fail to process, they must be retained so that they do not impact the processing of any remaining messages.Which solution meets these requirements and is the MOST operationally efficient? A. Set up an Amazon EC2 instance running a Redis database. Configure both applications to use the instance. Store, process, and delete the messages, respectively. B. Use an Amazon Kinesis data stream to receive the messages from the sender application. Integrate the processing application with the Kinesis Client Library (KCL). C. Integrate the sender and processor applications with an Amazon Simple Queue Service (Amazon SQS) queue. Configure a dead-letter queue to collect the messages that failed to process. D. Subscribe the processing application to an Amazon Simple Notification Service (Amazon SNS) topic to receive notifications to process. Integrate the sender application to write to the SNS topic.

Ans: C https://aws.amazon.com/blogs/compute/building-loosely-coupled-scalable-c-applications-with-amazon-sqs-and-amazon-sns/ https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-dead-letter-queues.html
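
To illustrate option C, a minimal boto3 sketch that creates a dead-letter queue and a main queue whose retention covers the up-to-2-day processing window. Queue names, retention period, and maxReceiveCount are hypothetical choices.

import json
import boto3

sqs = boto3.client("sqs")

# Dead-letter queue that collects messages that repeatedly fail processing
dlq = sqs.create_queue(QueueName="payload-dlq")
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq["QueueUrl"], AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Main queue: 4-day retention covers the up-to-2-day processing window,
# and after 5 failed receives a message is moved to the DLQ
sqs.create_queue(
    QueueName="payload-queue",
    Attributes={
        "MessageRetentionPeriod": "345600",   # 4 days, in seconds
        "RedrivePolicy": json.dumps({"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "5"}),
    },
)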

A company must re-evaluate its need for the Amazon EC2 instances it currently has provisioned in an Auto Scaling group. At present, the Auto Scaling group is configured for a minimum of two instances and a maximum of four instances across two Availability Zones. A Solutions architect reviewed Amazon CloudWatch metrics and found that CPU utilization is consistently low for all the EC2 instances.What should the solutions architect recommend to maximize utilization while ensuring the application remains fault tolerant? A. Remove some EC2 instances to increase the utilization of remaining instances. B. Increase the Amazon Elastic Block Store (Amazon EBS) capacity of instances with less CPU utilization. C. Modify the Auto Scaling group scaling policy to scale in and out based on a higher CPU utilization metric. D. Create a new launch configuration that uses smaller instance types. Update the existing Auto Scaling group.

Answer: D. This is a tricky question: CPU utilization is already consistently low, so there is no point in giving the Auto Scaling group a scale in/out policy based on a higher CPU utilization threshold (C). Creating a new launch configuration with smaller instance types raises utilization, while the Auto Scaling group across two Availability Zones keeps the application fault tolerant.

A company is building a document storage application on AWS. The application runs on Amazon EC2 instances in multiple Availability Zones. The company requires the document store to be highly available. The documents need to be returned immediately when requested. The lead engineer has configured the application to use Amazon Elastic Block Store (Amazon EBS) to store the documents, but is willing to consider other options to meet the availability requirement.What should a solutions architect recommend? A. Snapshot the EBS volumes regularly and build new volumes using those snapshots in additional Availability Zones. B. Use Amazon EBS for the EC2 instance root volumes. Configure the application to build the document store on Amazon S3. C. Use Amazon EBS for the EC2 instance root volumes. Configure the application to build the document store on Amazon S3 Glacier. D. Use at least three Provisioned IOPS EBS volumes for EC2 instances. Mount the volumes to the EC2 instances in a RAID 5 configuration.

B

A company has a web application with sporadic usage patterns. There is heavy usage at the beginning of each month, moderate usage at the start of each week, and unpredictable usage during the week. The application consists of a web server and a MySQL database server running inside the data center. The company would like to move the application to the AWS Cloud, and needs to select a cost-effective database platform that will not require database modifications.Which solution will meet these requirements? A. Amazon DynamoDB B. Amazon RDS for MySQL C. MySQL-compatible Amazon Aurora Serverless D. MySQL deployed on Amazon EC2 in an Auto Scaling group

Answer: C. Amazon Aurora Serverless is an on-demand, auto scaling configuration for the MySQL-compatible and PostgreSQL-compatible editions of Amazon Aurora. An Aurora Serverless DB cluster automatically starts up, shuts down, and scales capacity up or down based on your application's needs. Aurora Serverless provides a relatively simple, cost-effective option for infrequent, intermittent, or unpredictable workloads. Read more in the Amazon Aurora User Guide.
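
A minimal sketch of option C, assuming Aurora Serverless v1 (MySQL-compatible); the cluster identifier, credentials, and capacity settings are hypothetical placeholders.

import boto3

rds = boto3.client("rds")

# MySQL-compatible Aurora Serverless cluster that scales capacity with load
# and can pause entirely during idle periods (nights and weekends)
rds.create_db_cluster(
    DBClusterIdentifier="webapp-db",            # hypothetical identifier
    Engine="aurora-mysql",
    EngineMode="serverless",
    MasterUsername="admin",
    MasterUserPassword="********",
    ScalingConfiguration={
        "MinCapacity": 1,
        "MaxCapacity": 16,
        "AutoPause": True,
        "SecondsUntilAutoPause": 1800,          # pause after 30 idle minutes
    },
)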

A company is planning to migrate a commercial off-the-shelf application from its on-premises data center to AWS. The software has a software licensing model using sockets and cores with predictable capacity and uptime requirements. The company wants to use its existing licenses, which were purchased earlier this year.Which Amazon EC2 pricing option is the MOST cost-effective? A. Dedicated Reserved Hosts B. Dedicated On-Demand Hosts C. Dedicated Reserved Instances D. Dedicated On-Demand Instances

Answer is A. Whenever a question ties software licensing to sockets and cores with predictable capacity and uptime requirements, Dedicated Hosts are needed (they give visibility into the physical sockets and cores for bring-your-own-license models), and the Reserved pricing option makes them the most cost-effective.

A business application is hosted on Amazon EC2 and uses Amazon S3 for encrypted object storage. The chief information security officer has directed that no application traffic between the two services should traverse the public internet.Which capability should the solutions architect use to meet the compliance requirements? A. AWS Key Management Service (AWS KMS) B. VPC endpoint C. Private subnet D. Virtual private gateway

B

A company is building an application on Amazon EC2 instances that generates temporary transactional data. The application requires access to data storage that can provide configurable and consistent IOPS.What should a solutions architect recommend? A. Provision an EC2 instance with a Throughput Optimized HDD (st1) root volume and a Cold HDD (sc1) data volume. B. Provision an EC2 instance with a Throughput Optimized HDD (st1) volume that will serve as the root and data volume. C. Provision an EC2 instance with a General Purpose SSD (gp2) root volume and Provisioned IOPS SSD (io1) data volume. D. Provision an EC2 instance with a General Purpose SSD (gp2) root volume. Configure the application to store its data in an Amazon S3 bucket.

Answer is C. Note that an HDD volume (st1 or sc1) cannot be used as a root volume; the root volume must be an SSD, with HDD usable only as a data volume, which rules out A and B. Of the options given, C is the best answer because the Provisioned IOPS SSD (io1) data volume provides the configurable and consistent IOPS the application requires.

An online photo application lets users upload photos and perform image editing operations. The application offers two classes of service: free and paid. Photos submitted by paid users are processed before those submitted by free users. Photos are uploaded to Amazon S3 and the job information is sent to Amazon SQS.Which configuration should a solutions architect recommend? A. Use one SQS FIFO queue. Assign a higher priority to the paid photos so they are processed first. B. Use two SQS FIFO queues: one for paid and one for free. Set the free queue to use short polling and the paid queue to use long polling. C. Use two SQS standard queues: one for paid and one for free. Configure Amazon EC2 instances to prioritize polling for the paid queue over the free queue. D. Use one SQS standard queue. Set the visibility timeout of the paid photos to zero. Configure Amazon EC2 instances to prioritize visibility settings so paid photos are processed first.

Answer is C. https://aws.amazon.com/sqs/features/
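
A minimal sketch of the prioritized polling in option C: the worker drains the paid queue first and only falls back to the free queue when the paid queue is empty. The queue URLs and the process_photo_job function are hypothetical placeholders.

import boto3

sqs = boto3.client("sqs")
PAID_QUEUE = "https://sqs.us-east-1.amazonaws.com/111122223333/paid-photos"   # hypothetical
FREE_QUEUE = "https://sqs.us-east-1.amazonaws.com/111122223333/free-photos"   # hypothetical

def process_photo_job(body):
    # hypothetical placeholder for the actual image-processing work
    print("processing", body)

def poll_once():
    # Always drain the paid queue first; only fall back to the free queue when it is empty
    for queue_url in (PAID_QUEUE, FREE_QUEUE):
        resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=2)
        messages = resp.get("Messages", [])
        if messages:
            for msg in messages:
                process_photo_job(msg["Body"])
                sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
            return   # re-check the paid queue before touching the free one again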

A company is developing a real-time multiplayer game that uses UDP for communications between clients and servers in an Auto Scaling group. Spikes in demand are anticipated during the day, so the game server platform must adapt accordingly. Developers want to store gamer scores and other non-relational data in a database solution that will scale without intervention. Which solution should a solutions architect recommend? A. Use Amazon Route 53 for traffic distribution and Amazon Aurora Serverless for data storage. B. Use a Network Load Balancer for traffic distribution and Amazon DynamoDB on demand for data storage. C. Use a Network Load Balancer for traffic distribution and Amazon Aurora Global Database for data storage. D. Use an Application Load Balancer for traffic distribution and Amazon DynamoDB global tables for data storage.

B

A company is hosting an election reporting website on AWS for users around the world. The website uses Amazon EC2 instances for the web and application tiers in an Auto Scaling group with Application Load Balancers. The database tier uses an Amazon RDS for MySQL database. The website is updated with election results once an hour and has historically observed hundreds of users accessing the reports.The company is expecting a significant increase in demand because of upcoming elections in different countries. A solutions architect must improve the website's ability to handle additional demand while minimizing the need for additional EC2 instances.Which solution will meet these requirements? A. Launch an Amazon ElastiCache cluster to cache common database queries. B. Launch an Amazon CloudFront web distribution to cache commonly requested website content. C. Enable disk-based caching on the EC2 instances to cache commonly requested website content. D. Deploy a reverse proxy into the design using an EC2 instance with caching enabled for commonly requested website content

B

A company operates an ecommerce website on Amazon EC2 instances behind an Application Load Balancer (ALB) in an Auto Scaling group. The site is experiencing performance issues related to a high request rate from illegitimate external systems with changing IP addresses. The security team is worried about potential DDoS attacks against the website. The company must block the illegitimate incoming requests in a way that has a minimal impact on legitimate users.What should a solutions architect recommend? A. Deploy Amazon Inspector and associate it with the ALB. B. Deploy AWS WAF, associate it with the ALB, and configure a rate-limiting rule. C. Deploy rules to the network ACLs associated with the ALB to block the incoming traffic. D. Deploy Amazon GuardDuty and enable rate-limiting protection when configuring GuardDuty.

B

A company provides an API to its users that automates inquiries for tax computations based on item prices. The company experiences a larger number of inquiries during the holiday season only that cause slower response times. A solutions architect needs to design a solution that is scalable and elastic.What should the solutions architect do to accomplish this? A. Provide an API hosted on an Amazon EC2 instance. The EC2 instance performs the required computations when the API request is made. B. Design a REST API using Amazon API Gateway that accepts the item names. API Gateway passes item names to AWS Lambda for tax computations. C. Create an Application Load Balancer that has two Amazon EC2 instances behind it. The EC2 instances will compute the tax on the received item names. D. Design a REST API using Amazon API Gateway that connects with an API hosted on an Amazon EC2 instance. API Gateway accepts and passes the item names to the EC2 instance for tax computations.

B

A company stores user data in AWS. The data is used continuously with peak usage during business hours. Access patterns vary, with some data not being used for months at a time. A solutions architect must choose a cost-effective solution that maintains the highest level of durability while maintaining high availability.Which storage solution meets these requirements? A. Amazon S3 Standard B. Amazon S3 Intelligent-Tiering C. Amazon S3 Glacier Deep Archive D. Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)

B

A company wants to migrate its MySQL database from on premises to AWS. The company recently experienced a database outage that significantly impacted the business. To ensure this does not happen again, the company wants a reliable database solution on AWS that minimizes data loss and stores every transaction on at least two nodes.Which solution meets these requirements? A. Create an Amazon RDS DB instance with synchronous replication to three nodes in three Availability Zones. B. Create an Amazon RDS MySQL DB instance with Multi-AZ functionality enabled to synchronously replicate the data. C. Create an Amazon RDS MySQL DB instance and then create a read replica in a separate AWS Region that synchronously replicates the data. D. Create an Amazon EC2 instance with a MySQL engine installed that triggers an AWS Lambda function to synchronously replicate the data to an Amazon RDS MySQL DB instance.

B

A solutions architect is designing a new API using Amazon API Gateway that will receive requests from users. The volume of requests is highly variable; several hours can pass without receiving a single request. The data processing will take place asynchronously, but should be completed within a few seconds after a request is made.Which compute service should the solutions architect have the API invoke to deliver the requirements at the lowest cost? A. An AWS Glue job B. An AWS Lambda function C. A containerized service hosted in Amazon Elastic Kubernetes Service (Amazon EKS) D. A containerized service hosted in Amazon ECS with Amazon EC2

B

A solutions architect needs to design a resilient solution for Windows users' home directories. The solution must provide fault tolerance, file-level backup and recovery, and access control, based upon the company's Active Directory. Which storage solution meets these requirements? A. Configure Amazon S3 to store the users' home directories. Join Amazon S3 to Active Directory. B. Configure a Multi-AZ file system with Amazon FSx for Windows File Server. Join Amazon FSx to Active Directory. C. Configure Amazon Elastic File System (Amazon EFS) for the users' home directories. Configure AWS Single Sign-On with Active Directory. D. Configure Amazon Elastic Block Store (Amazon EBS) to store the users' home directories. Configure AWS Single Sign-On with Active Directory.

B

A company receives structured and semi-structured data from various sources once every day. A solutions architect needs to design a solution that leverages big data processing frameworks. The data should be accessible using SQL queries and business intelligence tools. What should the solutions architect recommend to build the MOST high-performing solution? A. Use AWS Glue to process data and Amazon S3 to store data. B. Use Amazon EMR to process data and Amazon Redshift to store data. C. Use Amazon EC2 to process data and Amazon Elastic Block Store (Amazon EBS) to store data. D. Use Amazon Kinesis Data Analytics to process data and Amazon Elastic File System (Amazon EFS) to store data.

B. EMR provides the big data processing frameworks the question asks for, and Redshift serves SQL queries and BI tools. For comparison: Glue is ETL; Redshift is SQL over structured data; Redshift Spectrum queries unstructured data in S3 with no loading or ETL required.

A company is using a fleet of Amazon EC2 instances to ingest data from on-premises data sources. The data is in JSON format and ingestion rates can be as high as 1 MB/s. When an EC2 instance is rebooted, the data in-flight is lost. The company's data science team wants to query ingested data in near-real time. Which solution provides near-real-time data querying that is scalable with minimal data loss? A. Publish data to Amazon Kinesis Data Streams. Use Kinesis Data Analytics to query the data. B. Publish data to Amazon Kinesis Data Firehose with Amazon Redshift as the destination. Use Amazon Redshift to query the data. C. Store ingested data in an EC2 instance store. Publish data to Amazon Kinesis Data Firehose with Amazon S3 as the destination. Use Amazon Athena to query the data. D. Store ingested data in an Amazon Elastic Block Store (Amazon EBS) volume. Publish data to Amazon ElastiCache for Redis. Subscribe to the Redis channel to query the data

B. Amazon Kinesis Data Streams: publishing directly from the EC2 ingestion fleet means in-flight data no longer sits on an instance, so a reboot does not cause the data loss issue. Kinesis Data Analytics: fine for querying. Amazon Kinesis Data Firehose: the easiest way to load streaming data into data stores and analytics tools, here with Amazon Redshift as the destination. Redis is ElastiCache (one of the two popular in-memory data stores, Redis and Memcached). Answer is B. Kinesis Data Streams consists of shards: you add shards when you need more throughput and remove them when you need less, so it is scalable, and each shard can handle up to 1 MB/s of writes. However, Kinesis Data Streams retains ingested data for only 1 to 7 days, so there is still a chance of data loss, and Kinesis Data Analytics with Data Streams targets real-time rather than near-real-time analytics. Firehose, on the other hand, is also scalable, processes data in near-real time as the requirement asks, and delivers it into Redshift, a data warehouse, so the data is not lost; Redshift also provides a SQL interface for the data science team's queries. This information was sourced from the Ultimate AWS Certified Solutions Architect 2020 course by Stephane Maarek.

A company with facilities in North America, Europe, and Asia is designing a new distributed application to optimize its global supply chain and manufacturing process. The orders booked on one continent should be visible to all Regions in a second or less. The database should be able to support failover with a short Recovery Time Objective (RTO). The uptime of the application is important to ensure that manufacturing is not impacted. What should a solutions architect recommend? A. Use Amazon DynamoDB global tables. B. Use Amazon Aurora Global Database. C. Use Amazon RDS for MySQL with a cross-Region read replica. D. Use Amazon RDS for PostgreSQL with a cross-Region read replica.

B. Sub-second data access in any Region (replication latencies below 1 second) and a Recovery Time Objective (RTO) of less than 1 minute. https://aws.amazon.com/rds/aurora/global-database/ Two points matter in the question: 1) write propagation across Regions in under a second, and 2) very short recovery. That logically eliminates C and D. Aurora Global Database meets point 1 with sub-second replication; DynamoDB global tables replicate in milliseconds. The difference is recovery: DynamoDB global tables do not advertise a recovery time in seconds (DynamoDB's point-in-time recovery is a different feature), whereas an Aurora Global Database keeps a secondary cluster warm and quickly promotes it to primary if the primary fails. https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GlobalTables.html https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database.html Comparing both, I will go with B. A counter-argument for A: https://aws.amazon.com/blogs/database/how-to-use-amazon-dynamodb-global-tables-to-power-multiregion-architectures/ (see the active-active architecture), whereas for Aurora, https://aws.amazon.com/rds/aurora/global-database/ states: "If your primary region suffers a performance degradation or outage, you can promote one of the secondary regions to take read/write responsibilities. An Aurora cluster can recover in less than 1 minute even in the event of a complete regional outage. This provides your application with an effective Recovery Point Objective (RPO) of 1 second and a Recovery Time Objective (RTO) of less than 1 minute, providing a strong foundation for a global business continuity plan." Summary of the requirements: a global supply chain (transactional database) where writes must be visible in all Regions in under a second, plus a short RTO. DynamoDB global tables: fast recovery and writes replicated to replicas within a couple of seconds (eventual consistency, replica convergence). Aurora Global Database: RTO of under 1 minute and cross-Region replication latencies below 1 second.

A database is on an Amazon RDS MySQL 5.6 Multi-AZ DB instance that experiences highly dynamic reads. Application developers notice a significant slowdown when testing read performance from a secondary AWS Region. The developers want a solution that provides less than 1 second of read replication latency. What should the solutions architect recommend? A. Install MySQL on Amazon EC2 in the secondary Region. B. Migrate the database to Amazon Aurora with cross-Region replicas. C. Create another RDS for MySQL read replica in the secondary Region. D. Implement Amazon ElastiCache to improve database query performance.

B - Aurora is a global Database that uses storage-based replication with typical latency of less than 1 second.

As part of budget planning, management wants a report of AWS billed items listed by user. The data will be used to create department budgets. A solutions architect needs to determine the most efficient way to obtain this report information.Which solution meets these requirements? A. Run a query with Amazon Athena to generate the report. B. Create a report in Cost Explorer and download the report. C. Access the bill details from the billing dashboard and download the bill. D. Modify a cost budget in AWS Budgets to alert with Amazon Simple Email Service (Amazon SES).

B - https://aws.amazon.com/premiumsupport/knowledge-center/consolidated-linked-billing-report/

A company is building a media sharing application and decides to use Amazon S3 for storage. When a media file is uploaded, the company starts a multi-step process to create thumbnails, identify objects in the images, transcode videos into standard formats and resolutions, and extract and store the metadata to an Amazon DynamoDB table. The metadata is used for searching and navigation. The amount of traffic is variable. The solution must be able to scale to handle spikes in load without unnecessary expenses. What should a solutions architect recommend to support this workload? A. Build the processing into the website or mobile app used to upload the content to Amazon S3. Save the required data to the DynamoDB table when the objects are uploaded. B. Trigger AWS Step Functions when an object is stored in the S3 bucket. Have the Step Functions perform the steps needed to process the object and then write the metadata to the DynamoDB table. C. Trigger an AWS Lambda function when an object is stored in the S3 bucket. Have the Lambda function start AWS Batch to perform the steps to process the object. Place the object data in the DynamoDB table when complete. D. Trigger an AWS Lambda function to store an initial entry in the DynamoDB table when an object is uploaded to Amazon S3. Use a program running on an Amazon EC2 instance in an Auto Scaling group to poll the index for unprocessed items, and use the program to perform the processing.

B : https://aws.amazon.com/step-functions/use-cases/

A company is building applications in containers. The company wants to migrate its on-premises development and operations services from its on-premises data center to AWS. Management states that the production system must be cloud agnostic and use the same configuration and administrator tools across all production systems. A solutions architect needs to design a managed solution that aligns with open-source software. Which solution meets these requirements? A. Launch the containers on Amazon EC2 with EC2 instance worker nodes. B. Launch the containers on Amazon Elastic Kubernetes Service (Amazon EKS) and EKS worker nodes. C. Launch the containers on Amazon Elastic Container Service (Amazon ECS) with AWS Fargate instances. D. Launch the containers on Amazon Elastic Container Service (Amazon ECS) with Amazon EC2 instance worker nodes.

B. When talking about containerized applications, the two leading orchestration technologies are Kubernetes and Amazon ECS (Elastic Container Service). Kubernetes is an open-source container orchestration platform originally developed by Google, while Amazon ECS is AWS's proprietary, managed container orchestration service. Because the requirement is a managed, cloud-agnostic solution aligned with open-source software, Amazon EKS is the best fit.

A company serves a multilingual website from a fleet of Amazon EC2 instances behind an Application Load Balancer (ALB). This architecture is currently running in the us-west-1 Region but is exhibiting high request latency for users located in other parts of the world.The website needs to serve requests quickly and efficiently regardless of a user's location. However, the company does not want to recreate the existing architecture across multiple Regions.How should a solutions architect accomplish this? A. Replace the existing architecture with a website served from an Amazon S3 bucket. Configure an Amazon CloudFront distribution with the S3 bucket as the origin. B. Configure an Amazon CloudFront distribution with the ALB as the origin. Set the cache behavior settings to only cache based on the Accept-Language request header. C. Set up Amazon API Gateway with the ALB as an integration. Configure API Gateway to use an HTTP integration type. Set up an API Gateway stage to enable the API cache. D. Launch an EC2 instance in each additional Region and configure NGINX to act as a cache server for that Region. Put all the instances plus the ALB behind an Amazon Route 53 record set with a geolocation routing policy.

B https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/header-caching.html Configuring caching based on the language of the viewer: If you want CloudFront to cache different versions of your objects based on the language specified in the request, configure CloudFront to forward the Accept-Language header to your origin.

A company hosts historical weather records in Amazon S3. The records are downloaded from the company's website by way of a URL that resolves to a domain name. Users all over the world access this content through subscriptions. A third-party provider hosts the company's root domain name, but the company recently migrated some of its services to Amazon Route 53. The company wants to consolidate contracts, reduce latency for users, and reduce costs related to serving the application to subscribers. Which solution meets these requirements? A. Create a web distribution on Amazon CloudFront to serve the S3 content for the application. Create a CNAME record in a Route 53 hosted zone that points to the CloudFront distribution, resolving to the application's URL domain name. B. Create a web distribution on Amazon CloudFront to serve the S3 content for the application. Create an ALIAS record in the Amazon Route 53 hosted zone that points to the CloudFront distribution, resolving to the application's URL domain name. C. Create an A record in a Route 53 hosted zone for the application. Create a Route 53 traffic policy for the web application, and configure a geolocation rule. Configure health checks to check the health of the endpoint and route DNS queries to other endpoints if an endpoint is unhealthy. D. Create an A record in a Route 53 hosted zone for the application. Create a Route 53 traffic policy for the web application, and configure a geoproximity rule. Configure health checks to check the health of the endpoint and route DNS queries to other endpoints if an endpoint is unhealthy.

B https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-cloudfront-distribution.html
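As a rough boto3 sketch of option B (the hosted zone ID, record name, and distribution domain below are placeholder values), an alias-style A record pointing the application's domain at the CloudFront distribution might be created like this:

import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",  # the company's Route 53 hosted zone (placeholder)
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "reports.example.com",
                "Type": "A",
                "AliasTarget": {
                    # CloudFront distributions always use this fixed hosted zone ID.
                    "HostedZoneId": "Z2FDTNDATAQYW2",
                    "DNSName": "d1234abcd.cloudfront.net",
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)

Unlike a CNAME, an alias record can sit at the zone apex and incurs no Route 53 query charges when it targets a CloudFront distribution.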

A company has created a multi-tier application for its ecommerce website. The website uses an Application Load Balancer that resides in the public subnets, a web tier in the public subnets, and a MySQL cluster hosted on Amazon EC2 instances in the private subnets. The MySQL database needs to retrieve product catalog and pricing information that is hosted on the internet by a third-party provider. A solutions architect must devise a strategy that maximizes security without increasing operational overhead. What should the solutions architect do to meet these requirements? A. Deploy a NAT instance in the VPC. Route all the internet-based traffic through the NAT instance. B. Deploy a NAT gateway in the public subnets. Modify the private subnet route table to direct all internet-bound traffic to the NAT gateway. C. Configure an internet gateway and attach it to the VPC. Modify the private subnet route table to direct internet-bound traffic to the internet gateway. D. Configure a virtual private gateway and attach it to the VPC. Modify the private subnet route table to direct internet-bound traffic to the virtual private gateway.

B https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html NAT gateways: You can use a network address translation (NAT) gateway to enable instances in a private subnet to connect to the internet or other AWS services, but prevent the internet from initiating a connection with those instances.
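A minimal boto3 sketch of option B, assuming placeholder subnet and route table IDs: allocate an Elastic IP, create the NAT gateway in a public subnet, then point the private subnet's default route at it.

import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP and create the NAT gateway in a public subnet.
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0publicexample",
    AllocationId=eip["AllocationId"],
)
nat_id = nat["NatGateway"]["NatGatewayId"]

# Wait until the gateway is available, then route all internet-bound traffic
# from the private subnet's route table through it.
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])
ec2.create_route(
    RouteTableId="rtb-0privateexample",
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_id,
)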

A company is planning to deploy an Amazon RDS DB instance running Amazon Aurora. The company has a backup retention policy requirement of 90 days.Which solution should a solutions architect recommend? A. Set the backup retention period to 90 days when creating the RDS DB instance. B. Configure RDS to copy automated snapshots to a user-managed Amazon S3 bucket with a lifecycle policy set to delete after 90 days. C. Create an AWS Backup plan to perform a daily snapshot of the RDS database with the retention set to 90 days. Create an AWS Backup job to schedule the execution of the backup plan daily. D. Use a daily scheduled event with Amazon CloudWatch Events to execute a custom AWS Lambda function that makes a copy of the RDS automated snapshot. Purge snapshots older than 90 days

B is correct. Amazon RDS Backup and Restore: by default, Amazon RDS creates and saves automated backups of your DB instance securely in Amazon S3 for a user-specified retention period. In addition, you can create snapshots, which are user-initiated backups of your instance that are kept until you explicitly delete them. Ref: https://aws.amazon.com/rds/features/backup/#:~:text=By%20default%2C%20Amazon%20RDS%20creates,until%20you%20explicitly%20delete%20them. and https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ExportSnapshot.html Note that the answer is contested in the discussion: A will not work because the automated backup retention period ranges from a default of 7 days to a maximum of 35 days (https://aws.amazon.com/rds/faqs/#:~:text=Amazon%20RDS%20retains%20backups%20of,to%20the%20Latest%20Restorable%20Time.), and one commenter argues that B also fails because RDS cannot automatically move or export automated snapshots to a user-managed S3 bucket (https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CopySnapshot.html, https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ExportSnapshot.html), concluding that D is the only option that works as written.

A company hosts an application used to upload files to an Amazon S3 bucket. Once uploaded, the files are processed to extract metadata, which takes less than 5 seconds. The volume and frequency of the uploads vary from a few files each hour to hundreds of concurrent uploads. The company has asked a solutions architect to design a cost-effective architecture that will meet these requirements. What should the solutions architect recommend? A. Configure AWS CloudTrail trails to log S3 API calls. Use AWS AppSync to process the files. B. Configure an object-created event notification within the S3 bucket to invoke an AWS Lambda function to process the files. C. Configure Amazon Kinesis Data Streams to process and send data to Amazon S3. Invoke an AWS Lambda function to process the files. D. Configure an Amazon Simple Notification Service (Amazon SNS) topic to process the files uploaded to Amazon S3. Invoke an AWS Lambda function to process the files.

B is the correct answer https://docs.aws.amazon.com/AmazonS3/latest/user-guide/enable-event-notifications.html
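A minimal boto3 sketch of option B (bucket name and function ARN are placeholders, and the Lambda function must already permit invocation by s3.amazonaws.com via lambda add-permission):

import boto3

s3 = boto3.client("s3")

# Invoke the metadata-extraction Lambda function for every new object.
s3.put_bucket_notification_configuration(
    Bucket="upload-bucket-example",
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [{
            "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:extract-metadata",
            "Events": ["s3:ObjectCreated:*"],
        }]
    },
)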

An application is running on an Amazon EC2 instance and must have millisecond latency when running the workload. The application makes many small reads and writes to the file system, but the file system itself is small.Which Amazon Elastic Block Store (Amazon EBS) volume type should a solutions architect attach to their EC2 instance? A. Cold HDD (sc1) B. General Purpose SSD (gp2) C. Provisioned IOPS SSD (io1) D. Throughput Optimized HDD (st1)

B is the correct answer. Two keywords: the file system is small, and the requirement is millisecond latency (not sub-millisecond, which would point to Provisioned IOPS SSD). https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html#EBSVolumeTypes_gp2

A company has a live chat application running on its on-premises servers that use WebSockets. The company wants to migrate the application to AWS.Application traffic is inconsistent, and the company expects there to be more traffic with sharp spikes in the future.The company wants a highly scalable solution with no server maintenance nor advanced capacity planning.Which solution meets these requirements? A. Use Amazon API Gateway and AWS Lambda with an Amazon DynamoDB table as the data store. Configure the DynamoDB table for provisioned capacity. B. Use Amazon API Gateway and AWS Lambda with an Amazon DynamoDB table as the data store. Configure the DynamoDB table for on-demand capacity. C. Run Amazon EC2 instances behind an Application Load Balancer in an Auto Scaling group with an Amazon DynamoDB table as the data store. Configure the DynamoDB table for on-demand capacity. D. Run Amazon EC2 instances behind a Network Load Balancer in an Auto Scaling group with an Amazon DynamoDB table as the data store. Configure the DynamoDB table for provisioned capacity.

B sounds about right. https://aws.amazon.com/blogs/aws/amazon-dynamodb-on-demand-no-capacity-planning-and-pay-per-request-pricing/
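For illustration, an on-demand DynamoDB table as in option B could be created with boto3 roughly as follows (table and attribute names are made up):

import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="chat-sessions",
    AttributeDefinitions=[{"AttributeName": "connectionId", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "connectionId", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",  # on-demand capacity: no capacity planning needed
)

With PAY_PER_REQUEST there is no ProvisionedThroughput to size, which matches the "no advanced capacity planning" requirement.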

A solutions architect is designing a solution that involves orchestrating a series of Amazon Elastic Container Service (Amazon ECS) task types running on Amazon EC2 instances that are part of an ECS cluster. The output and state data for all tasks needs to be stored. The amount of data output by each task is approximately 10 MB, and there could be hundreds of tasks running at a time. The system should be optimized for high-frequency reading and writing. As old outputs are archived and deleted, the storage size is not expected to exceed 1 TB. Which storage solution should the solutions architect recommend? A. An Amazon DynamoDB table accessible by all ECS cluster instances. B. An Amazon Elastic File System (Amazon EFS) with Provisioned Throughput mode. C. An Amazon Elastic File System (Amazon EFS) file system with Bursting Throughput mode. D. An Amazon Elastic Block Store (Amazon EBS) volume mounted to the ECS cluster instances.

B. https://docs.aws.amazon.com/efs/latest/ug/performance.html Throughput modes There are two throughput modes to choose from for your file system, Bursting Throughput and Provisioned Throughput. With Bursting Throughput mode, throughput on Amazon EFS scales as the size of your file system in the standard storage class grows. For more information about EFS storage classes, see EFS storage classes. With Provisioned Throughput mode, you can instantly provision the throughput of your file system (in MiB/s) independent of the amount of data stored.
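A hedged boto3 sketch of option B; the creation token, throughput figure, and subnet/security group IDs are assumptions chosen only to illustrate the Provisioned Throughput mode:

import boto3

efs = boto3.client("efs")

# Provision throughput independently of stored data size, since the data set
# stays around 1 TB but reads/writes are high frequency.
fs = efs.create_file_system(
    CreationToken="ecs-task-output-fs",
    PerformanceMode="generalPurpose",
    ThroughputMode="provisioned",
    ProvisionedThroughputInMibps=256.0,  # assumed sizing for hundreds of concurrent tasks
)

# One mount target per subnet used by the ECS container instances (placeholder IDs).
efs.create_mount_target(
    FileSystemId=fs["FileSystemId"],
    SubnetId="subnet-0example",
    SecurityGroups=["sg-0example"],
)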

A company has an on-premises MySQL database used by the global sales team with infrequent access patterns. The sales team requires the database to have minimal downtime. A database administrator wants to migrate this database to AWS without selecting a particular instance type in anticipation of more users in the future.Which service should a solutions architect recommend? A. Amazon Aurora MySQL B. Amazon Aurora Serverless for MySQL C. Amazon Redshift Spectrum D. Amazon RDS for MySQL

B. "A database administrator wants to migrate this database to AWS without selecting a particular instance type in anticipation of more users in the future" Serverless sounds right, and it's compatible with MySQL and PostgreSQL.

A company has an image processing workload running on Amazon Elastic Container Service (Amazon ECS) in two private subnets. Each private subnet uses a NAT instance for internet access. All images are stored in Amazon S3 buckets. The company is concerned about the data transfer costs between Amazon ECS and Amazon S3. What should a solutions architect do to reduce costs? A. Configure a NAT gateway to replace the NAT instances. B. Configure a gateway endpoint for traffic destined to Amazon S3. C. Configure an interface endpoint for traffic destined to Amazon S3. D. Configure Amazon CloudFront for the S3 bucket storing the images.

B. Configure a gateway endpoint. There is no data processing or hourly charge for using gateway type VPC endpoints (https://aws.amazon.com/vpc/pricing/). See also https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints.html
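As a sketch of option B (VPC, Region, and route table IDs are placeholders), a gateway endpoint for S3 can be created with boto3 like this:

import boto3

ec2 = boto3.client("ec2")

# The gateway endpoint adds a prefix-list route so S3 traffic from the private
# subnets never traverses the NAT instances.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0example",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0privateA", "rtb-0privateB"],
)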

A solutions architect is designing a solution that requires frequent updates to a website that is hosted on Amazon S3 with versioning enabled. For compliance reasons, the older versions of the objects will not be accessed frequently and will need to be deleted after 2 years.What should the solutions architect recommend to meet these requirements at the LOWEST cost? A. Use S3 batch operations to replace object tags. Expire the objects based on the modified tags. B. Configure an S3 Lifecycle policy to transition older versions of objects to S3 Glacier. Expire the objects after 2 years. C. Enable S3 Event Notifications on the bucket that sends older objects to the Amazon Simple Queue Service (Amazon SQS) queue for further processing. D. Replicate older object versions to a new bucket. Use an S3 Lifecycle policy to expire the objects in the new bucket after 2 years.

B. https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html Object lifecycle management: 1. Transition actions define when objects transition to another storage class. 2. Expiration actions define when objects expire; Amazon S3 deletes expired objects on your behalf.
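A minimal boto3 sketch of the lifecycle rule in option B, assuming a placeholder bucket name; it archives noncurrent versions to Glacier after 30 days and expires them after roughly 2 years:

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="static-site-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-then-expire-old-versions",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to every object in the bucket
            "NoncurrentVersionTransitions": [
                {"NoncurrentDays": 30, "StorageClass": "GLACIER"}
            ],
            "NoncurrentVersionExpiration": {"NoncurrentDays": 730},  # ~2 years
        }]
    },
)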

A company runs a production application on a fleet of Amazon EC2 instances. The application reads the data from an Amazon SQS queue and processes the messages in parallel. The message volume is unpredictable and often has intermittent traffic. This application should continually process messages without any downtime. Which solution meets these requirements MOST cost-effectively? A. Use Spot Instances exclusively to handle the maximum capacity required. B. Use Reserved Instances exclusively to handle the maximum capacity required. C. Use Reserved Instances for the baseline capacity and use Spot Instances to handle additional capacity. D. Use Reserved Instances for the baseline capacity and use On-Demand Instances to handle additional capacity.

The big reason for D: the application is stateful and runs on EC2 instances, and the question says processing cannot be interrupted, so Spot Instances will not work. A Spot Instance can be shut down at any time, and even if another instance exists at that moment, the in-flight workload would be lost.

A company has been storing analytics data in an Amazon RDS instance for the past few years. The company asked a solutions architect to find a solution that allows users to access this data using an API. The expectation is that the application will experience periods of inactivity but could receive bursts of traffic within seconds.Which solution should the solutions architect suggest? A. Set up an Amazon API Gateway and use Amazon ECS. B. Set up an Amazon API Gateway and use AWS Elastic Beanstalk. C. Set up an Amazon API Gateway and use AWS Lambda functions. D. Set up an Amazon API Gateway and use Amazon EC2 with Auto Scaling.

C

A company hosts a training site on a fleet of Amazon EC2 instances. The company anticipates that its new course, which consists of dozens of training videos on the site, will be extremely popular when it is released in 1 week.What should a solutions architect do to minimize the anticipated server load? A. Store the videos in Amazon ElastiCache for Redis. Update the web servers to serve the videos using the ElastiCache API. B. Store the videos in Amazon Elastic File System (Amazon EFS). Create a user data script for the web servers to mount the EFS volume. C. Store the videos in an Amazon S3 bucket. Create an Amazon CloudFront distribution with an origin access identity (OAI) of that S3 bucket. Restrict Amazon S3 access to the OAI. D. Store the videos in an Amazon S3 bucket. Create an AWS Storage Gateway file gateway to access the S3 bucket. Create a user data script for the web servers to mount the file gateway.

C

A company is concerned that two NAT instances in use will no longer be able to support the traffic needed for the company's application. A solutions architect wants to implement a solution that is highly available, fault tolerant, and automatically scalable. What should the solutions architect recommend? A. Remove the two NAT instances and replace them with two NAT gateways in the same Availability Zone. B. Use Auto Scaling groups with Network Load Balancers for the NAT instances in different Availability Zones. C. Remove the two NAT instances and replace them with two NAT gateways in different Availability Zones. D. Replace the two NAT instances with Spot Instances in different Availability Zones and deploy a Network Load Balancer.

C

A company is hosting a web application on AWS using a single Amazon EC2 instance that stores user-uploaded documents in an Amazon EBS volume. For better scalability and availability, the company duplicated the architecture and created a second EC2 instance and EBS volume in another Availability Zone, placing both behind an Application Load Balancer. After completing this change, users reported that, each time they refreshed the website, they could see one subset of their documents or the other, but never all of the documents at the same time.What should a solutions architect propose to ensure users see all of their documents at once? A. Copy the data so both EBS volumes contain all the documents. B. Configure the Application Load Balancer to direct a user to the server with the documents. C. Copy the data from both EBS volumes to Amazon EFS. Modify the application to save new documents to Amazon EFS. D. Configure the Application Load Balancer to send the request to both servers. Return each document from the correct server.

C

A company wants to host a web application on AWS that will communicate to a database within a VPC. The application should be highly available.What should a solutions architect recommend? A. Create two Amazon EC2 instances to host the web servers behind a load balancer, and then deploy the database on a large instance. B. Deploy a load balancer in multiple Availability Zones with an Auto Scaling group for the web servers, and then deploy Amazon RDS in multiple Availability Zones. C. Deploy a load balancer in the public subnet with an Auto Scaling group for the web servers, and then deploy the database on an Amazon EC2 instance in the private subnet. D. Deploy two web servers with an Auto Scaling group, configure a domain that points to the two web servers, and then deploy a database architecture in multiple Availability Zones.

C

A company's web application is using multiple Linux Amazon EC2 instances and storing data on Amazon EBS volumes. The company is looking for a solution to increase the resiliency of the application in case of a failure and to provide storage that complies with atomicity, consistency, isolation, and durability (ACID).What should a solutions architect do to meet these requirements? A. Launch the application on EC2 instances in each Availability Zone. Attach EBS volumes to each EC2 instance. B. Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones. Mount an instance store on each EC2 instance. C. Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones. Store data on Amazon EFS and mount a target on each instance. D. Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones. Store data using Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA).

C

A solutions architect is creating a new VPC design. There are two public subnets for the load balancer, two private subnets for web servers, and two private subnets for MySQL. The web servers use only HTTPS. The solutions architect has already created a security group for the load balancer allowing port 443 from 0.0.0.0/0. Company policy requires that each resource has the least access required to still be able to perform its tasks. Which additional configuration strategy should the solutions architect use to meet these requirements? A. Create a security group for the web servers and allow port 443 from 0.0.0.0/0. Create a security group for the MySQL servers and allow port 3306 from the web servers security group. B. Create a network ACL for the web servers and allow port 443 from 0.0.0.0/0. Create a network ACL for the MySQL servers and allow port 3306 from the web servers security group. C. Create a security group for the web servers and allow port 443 from the load balancer. Create a security group for the MySQL servers and allow port 3306 from the web servers security group. D. Create a network ACL for the web servers and allow port 443 from the load balancer. Create a network ACL for the MySQL servers and allow port 3306 from the web servers security group.

C

A solutions architect must migrate a Windows Internet Information Services (IIS) web application to AWS. The application currently relies on a file share hosted in the user's on-premises network-attached storage (NAS). The solutions architect has proposed migrating the IIS web servers to Amazon EC2 instances in multiple Availability Zones that are connected to the storage solution, and configuring an Elastic Load Balancer attached to the instances. Which replacement to the on-premises file share is MOST resilient and durable? A. Migrate the file share to Amazon RDS. B. Migrate the file share to AWS Storage Gateway. C. Migrate the file share to Amazon FSx for Windows File Server. D. Migrate the file share to Amazon Elastic File System (Amazon EFS).

C

An application running on AWS uses an Amazon Aurora Multi-AZ deployment for its database. When evaluating performance metrics, a solutions architect discovered that the database reads are causing high I/O and adding latency to the write requests against the database.What should the solutions architect do to separate the read requests from the write requests? A. Enable read-through caching on the Amazon Aurora database. B. Update the application to read from the Multi-AZ standby instance. C. Create a read replica and modify the application to use the appropriate endpoint. D. Create a second Amazon Aurora database and link it to the primary database as a read replica.

C

A software vendor is deploying a new software-as-a-service (SaaS) solution that will be utilized by many AWS users. The service is hosted in a VPC behind a Network Load Balancer. The software vendor wants to provide access to this service to users with the least amount of administrative overhead and without exposing the service to the public internet. What should a solutions architect do to accomplish this goal? A. Create a peering VPC connection from each user's VPC to the software vendor's VPC. B. Deploy a transit VPC in the software vendor's AWS account. Create a VPN connection with each user account. C. Connect the service in the VPC with an AWS PrivateLink endpoint. Have users subscribe to the endpoint. D. Deploy a transit VPC in the software vendor's AWS account. Create an AWS Direct Connect connection with each user account.

C. https://docs.aws.amazon.com/vpc/latest/userguide/endpoint-service.html VPC endpoint services (AWS PrivateLink): 1. You can create your own application in your VPC and configure it as an AWS PrivateLink-powered service (referred to as an endpoint service). 2. Other AWS principals can create a connection from their VPC to your endpoint service using an interface VPC endpoint or a Gateway Load Balancer endpoint, depending on the type of service. 3. You are the service provider, and the AWS principals that create connections to your service are service consumers.
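A short boto3 sketch of option C from the provider side (the NLB ARN is a placeholder); consumers would then create interface endpoints against the service name returned:

import boto3

ec2 = boto3.client("ec2")

# Expose the NLB-fronted SaaS service as a PrivateLink endpoint service.
resp = ec2.create_vpc_endpoint_service_configuration(
    NetworkLoadBalancerArns=[
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/saas-nlb/abc123"
    ],
    AcceptanceRequired=True,  # vendor approves each subscribing account
)
print(resp["ServiceConfiguration"]["ServiceName"])  # share this with consumers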

A company uses Application Load Balancers (ALBs) in different AWS Regions. The ALBs receive inconsistent traffic that can spike and drop throughout the year.The company's networking team needs to allow the IP addresses of the ALBs in the on-premises firewall to enable connectivity.Which solution is the MOST scalable with minimal configuration changes? A. Write an AWS Lambda script to get the IP addresses of the ALBs in different Regions. Update the on-premises firewall's rule to allow the IP addresses of the ALBs. B. Migrate all ALBs in different Regions to the Network Load Balancer (NLBs). Update the on-premises firewall's rule to allow the Elastic IP addresses of all the NLBs. C. Launch AWS Global Accelerator. Register the ALBs in different Regions to the accelerator. Update the on-premises firewall's rule to allow static IP addresses associated with the accelerator. D. Launch a Network Load Balancer (NLB) in one Region. Register the private IP addresses of the ALBs in different Regions with the NLB. Update the on- premises firewall's rule to allow the Elastic IP address attached to the NLB.

C is correct. AWS Global Accelerator improves the availability and performance of applications designed to reach a global user base. A single accelerator can support multiple Application Load Balancers, Network Load Balancers, and Amazon EC2 instances running in multiple AWS Regions. Global Accelerator provides you with static IP addresses that are advertised globally, supports both TCP and UDP traffic, and routes your user traffic to the optimal AWS Region.

A company has a 10 Gbps AWS Direct Connect connection from its on-premises servers to AWS. The workloads using the connection are critical. The company requires a disaster recovery strategy with maximum resiliency that maintains the current connection bandwidth at a minimum.What should a solutions architect recommend? A. Set up a new Direct Connect connection in another AWS Region. B. Set up a new AWS managed VPN connection in another AWS Region. C. Set up two new Direct Connect connections: one in the current AWS Region and one in another Region. D. Set up two new AWS managed VPN connections: one in the current AWS Region and one in another Region.

C is correct. More resiliency is better when cost is not a constraint: two links to the current Region and another link to a second Region protect against device failure, link failure, and location failure. See the "Maximum Resiliency for Critical Workloads" section of https://aws.amazon.com/directconnect/resiliency-recommendation/. The VPN options are wrong: AWS does not recommend using AWS managed VPN as a backup for Direct Connect connections with speeds greater than 1 Gbps (same source).

A company is using a third-party vendor to manage its marketplace analytics. The vendor needs limited programmatic access to resources in the company's account. All the needed policies have been created to grant appropriate access.Which additional component will provide the vendor with the MOST secure access to the account? A. Create an IAM user. B. Implement a service control policy (SCP) C. Use a cross-account role with an external ID. D. Configure a single sign-on (SSO) identity provider.

C for sure https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user_externalid.html
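A hedged boto3 sketch of option C; the vendor account ID, external ID, and policy ARN are placeholders supplied only for illustration:

import boto3, json

iam = boto3.client("iam")

# Trust policy: only the vendor's account, and only when it presents the agreed external ID.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::999999999999:root"},
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": "vendor-analytics-7f3a"}},
    }],
}

iam.create_role(
    RoleName="MarketplaceAnalyticsVendor",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Attach the already-created permissions policy (ARN is a placeholder).
iam.attach_role_policy(
    RoleName="MarketplaceAnalyticsVendor",
    PolicyArn="arn:aws:iam::123456789012:policy/VendorAnalyticsReadOnly",
)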

The financial application at a company stores monthly reports in an Amazon S3 bucket. The vice president of finance has mandated that all access to these reports be logged and that any modifications to the log files be detected.Which actions can a solutions architect take to meet these requirements? A. Use S3 server access logging on the bucket that houses the reports with the read and write data events and log file validation options enabled. B. Use S3 server access logging on the bucket that houses the reports with the read and write management events and log file validation options enabled. C. Use AWS CloudTrail to create a new trail. Configure the trail to log read and write data events on the S3 bucket that houses the reports. Log these events to a new bucket, and enable log file validation. D. Use AWS CloudTrail to create a new trail. Configure the trail to log read and write management events on the S3 bucket that houses the reports. Log these events to a new bucket, and enable log file validation.

C. https://aws.amazon.com/premiumsupport/knowledge-center/cloudtrail-data-management-events/ https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html Data events provide visibility into the resource operations performed on or within a resource (also known as data plane operations) and are often high-volume activities. The following data types are recorded: 1. Amazon S3 object-level API activity (for example, GetObject, DeleteObject, and PutObject API operations) 2. AWS Lambda function execution activity (the Invoke API). Management events track management operations, are turned on by default, and cannot be turned off; data events track specific operations for specific AWS services (S3 and Lambda) and are turned off by default. Because the requirement is to log object-level access to the S3 bucket and detect log tampering, C is the answer.
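For illustration, option C could be set up with boto3 roughly as follows (trail, logging bucket, and reports bucket names are placeholders, and the logging bucket must already allow CloudTrail to write to it):

import boto3

cloudtrail = boto3.client("cloudtrail")

cloudtrail.create_trail(
    Name="finance-reports-trail",
    S3BucketName="finance-trail-logs",
    EnableLogFileValidation=True,  # digest files let you detect modified log files
)

cloudtrail.put_event_selectors(
    TrailName="finance-reports-trail",
    EventSelectors=[{
        "ReadWriteType": "All",
        "IncludeManagementEvents": False,
        "DataResources": [{
            "Type": "AWS::S3::Object",
            # Log object-level read and write events only for the reports bucket.
            "Values": ["arn:aws:s3:::monthly-reports-bucket/"],
        }],
    }],
)

cloudtrail.start_logging(Name="finance-reports-trail")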

A company has multiple AWS accounts for various departments. One of the departments wants to share an Amazon S3 bucket with all other departments. Which solution will require the LEAST amount of effort? A. Enable cross-account S3 replication for the bucket. B. Create a pre-signed URL for the bucket and share it with other departments. C. Set the S3 bucket policy to allow cross-account access to other departments. D. Create IAM users for each of the departments and configure a read-only IAM policy.

C is answer - https://aws.amazon.com/premiumsupport/knowledge-center/cross-account-access-s3/

A company needs to share an Amazon S3 bucket with an external vendor. The bucket owner must be able to access all objects.Which action should be taken to share the S3 bucket? A. Update the bucket to be a Requester Pays bucket. B. Update the bucket to enable cross-origin resource sharing (CORS). C. Create a bucket policy to require users to grant bucket-owner-full-control when uploading objects. D. Create an IAM policy to require users to grant bucket-owner-full-control when uploading objects.

C is correct. https://aws.amazon.com/it/premiumsupport/knowledge-center/s3-bucket-owner-access/ By default, an S3 object is owned by the AWS account that uploaded it. This is true even when the bucket is owned by another account. To get access to the object, the object owner must explicitly grant you (the bucket owner) access. The object owner can grant the bucket owner full control of the object by updating the access control list (ACL) of the object. The object owner can update the ACL either during a put or copy operation, or after the object is added to the bucket. Similar: https://aws.amazon.com/it/premiumsupport/knowledge-center/s3-require-object-ownership/ Resolution Add a bucket policy that grants users access to put objects in your bucket only when they grant you (the bucket owner) full control of the object.
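A minimal sketch of the bucket policy in option C, applied with boto3; the bucket name is a placeholder. Uploads that do not grant the bucket owner full control are denied:

import boto3, json

s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "RequireBucketOwnerFullControl",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::shared-vendor-bucket/*",
        "Condition": {
            # Deny any PutObject that does not include the bucket-owner-full-control ACL.
            "StringNotEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}
        },
    }],
}

s3.put_bucket_policy(Bucket="shared-vendor-bucket", Policy=json.dumps(policy))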

A company is deploying a web portal. The company wants to ensure that only the web portion of the application is publicly accessible. To accomplish this, the VPC was designed with two public subnets and two private subnets. The application will run on several Amazon EC2 instances in an Auto Scaling group. SSL termination must be offloaded from the EC2 instances. What should a solutions architect do to ensure these requirements are met? A. Configure the Network Load Balancer in the public subnets. Configure the Auto Scaling group in the private subnets and associate it with the Application Load Balancer. B. Configure the Network Load Balancer in the public subnets. Configure the Auto Scaling group in the public subnets and associate it with the Application Load Balancer. C. Configure the Application Load Balancer in the public subnets. Configure the Auto Scaling group in the private subnets and associate it with the Application Load Balancer. D. Configure the Application Load Balancer in the private subnets. Configure the Auto Scaling group in the private subnets and associate it with the Application Load Balancer.

C, https://aws.amazon.com/elasticloadbalancing/application-load-balancer/ (Read under header "TLS Offloading")

A solutions architect is designing the cloud architecture for a company that needs to host hundreds of machine learning models for its users. During startup, the models need to load up to 10 GB of data from Amazon S3 into memory, but they do not need disk access. Most of the models are used sporadically, but the users expect all of them to be highly available and accessible with low latency.Which solution meets the requirements and is MOST cost-effective? A. Deploy models as AWS Lambda functions behind an Amazon API Gateway for each model. B. Deploy models as Amazon Elastic Container Service (Amazon ECS) services behind an Application Load Balancer for each model. C. Deploy models as AWS Lambda functions behind a single Amazon API Gateway with path-based routing where one path corresponds to each model. D. Deploy models as Amazon Elastic Container Service (Amazon ECS) services behind a single Application Load Balancer with path-based routing where one path corresponds to each model.

C. AWS updated Lambda to support up to 10 GB of memory (and 6 vCPU cores), which helps compute-intensive applications like machine learning; the models need no disk access, and Lambda is the lowest-cost option for sporadically used models. Posted on: Dec 1, 2020 https://aws.amazon.com/about-aws/whats-new/2020/12/aws-lambda-supports-10gb-memory-6-vcpu-cores-lambda-functions/#:~:text=Customer%20Enablement-,AWS%20Lambda%20now%20supports%20up%20to%2010%20GB%20of%20memory,vCPU%20cores%20for%20Lambda%20Functions&text=AWS%20Lambda%20customers%20can%20now,previous%20limit%20of%203%2C008%20MB. P.S.: An Elastic Load Balancer has a limit of 100 target groups, which makes the ECS options impractical for hundreds of models.

A company has an ecommerce application that stores data in an on-premises SQL database. The company has decided to migrate this database to AWS.However, as part of the migration, the company wants to find a way to attain sub-millisecond responses to common read requests.A solutions architect knows that the increase in speed is paramount and that a small percentage of stale data returned in the database reads is acceptable.What should the solutions architect recommend? A. Build Amazon RDS read replicas. B. Build the database as a larger instance type. C. Build a database cache using Amazon ElastiCache. D. Build a database cache using Amazon Elasticsearch Service (Amazon ES).

C. Common read requests call for a cache, and ElastiCache delivers sub-millisecond responses. A is wrong: sub-millisecond means lasting less than a millisecond, whereas read replicas only offer minimal replica lag, usually much less than 100 milliseconds after the primary instance has written an update; the lag varies with the rate of database change, so during periods with a large amount of write operations you might see it increase. D is wrong: Amazon ES is intended for application monitoring, security information and event management (SIEM), search, and infrastructure monitoring.

A solutions architect is designing a security solution for a company that wants to provide developers with individual AWS accounts through AWS Organizations, while also maintaining standard security controls. Because the individual developers will have AWS account root user-level access to their own accounts, the solutions architect wants to ensure that the mandatory AWS CloudTrail configuration that is applied to new developer accounts is not modified. Which action meets these requirements? A. Create an IAM policy that prohibits changes to CloudTrail, and attach it to the root user. B. Create a new trail in CloudTrail from within the developer accounts with the organization trails option enabled. C. Create a service control policy (SCP) that prohibits changes to CloudTrail, and attach it to the developer accounts. D. Create a service-linked role for CloudTrail with a policy condition that allows changes only from an Amazon Resource Name (ARN) in the master account.

C. https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html
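A rough boto3 sketch of option C (the target OU ID and the exact set of denied CloudTrail actions are assumptions chosen for illustration):

import boto3, json

org = boto3.client("organizations")

# SCP that denies the CloudTrail write APIs, so even root users in member
# accounts cannot alter the mandatory trail configuration.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": [
            "cloudtrail:StopLogging",
            "cloudtrail:DeleteTrail",
            "cloudtrail:UpdateTrail",
            "cloudtrail:PutEventSelectors",
        ],
        "Resource": "*",
    }],
}

policy = org.create_policy(
    Name="ProtectCloudTrail",
    Description="Prevent developer accounts from altering CloudTrail",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)

# Attach to the developer OU (placeholder ID).
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-examp-12345678",
)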

A company is experiencing growth as demand for its product has increased. The company's existing purchasing application is slow when traffic spikes. The application is a monolithic three-tier application that uses synchronous transactions and sometimes sees bottlenecks in the application tier. A solutions architect needs to design a solution that can meet required application response times while accounting for traffic volume spikes. Which solution will meet these requirements? A. Vertically scale the application instance using a larger Amazon EC2 instance size. B. Scale the application's persistence layer horizontally by introducing Oracle RAC on AWS. C. Scale the web and application tiers horizontally using Auto Scaling groups and an Application Load Balancer. D. Decouple the application and data tiers using Amazon Simple Queue Service (Amazon SQS) with asynchronous AWS Lambda calls.

Correct answer: C. Justification: the key words are (a) "slow when traffic spikes" and (b) "sometimes sees bottlenecks in the application tier". The ask is a solution that meets required application response times while accounting for traffic volume spikes. This is purely a performance problem, so horizontally scaling the web and application tiers with Auto Scaling groups and an Application Load Balancer is the right choice.

A company that hosts its web application on AWS wants to ensure all Amazon EC2 instances, Amazon RDS DB instances, and Amazon Redshift clusters are configured with tags. The company wants to minimize the effort of configuring and operating this check.What should a solutions architect do to accomplish this? A. Use AWS Config rules to define and detect resources that are not properly tagged. B. Use Cost Explorer to display resources that are not properly tagged. Tag those resources manually. C. Write API calls to check all resources for proper tag allocation. Periodically run the code on an EC2 instance. D. Write API calls to check all resources for proper tag allocation. Schedule an AWS Lambda function through Amazon CloudWatch to periodically run the code.

Correct Answer: A. Reference: https://d1.awsstatic.com/whitepapers/aws-tagging-best-practices.pdf
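As a sketch of option A, the AWS managed REQUIRED_TAGS rule could be deployed with boto3 like this (the tag keys are assumptions):

import boto3

config = boto3.client("config")

config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "required-tags-check",
        "Scope": {
            # Evaluate only the resource types the company cares about.
            "ComplianceResourceTypes": [
                "AWS::EC2::Instance",
                "AWS::RDS::DBInstance",
                "AWS::Redshift::Cluster",
            ]
        },
        "Source": {"Owner": "AWS", "SourceIdentifier": "REQUIRED_TAGS"},
        # Required tag keys; values could also be constrained here.
        "InputParameters": '{"tag1Key": "CostCenter", "tag2Key": "Owner"}',
    }
)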

A company is designing a website that uses an Amazon S3 bucket to store static images. The company wants all future requests to have faster response times while reducing both latency and cost.Which service configuration should a solutions architect recommend? A. Deploy a NAT server in front of Amazon S3. B. Deploy Amazon CloudFront in front of Amazon S3. C. Deploy a Network Load Balancer in front of Amazon S3. D. Configure Auto Scaling to automatically adjust the capacity of the website.

Correct Answer: B. Reference: https://aws.amazon.com/getting-started/hands-on/deliver-content-faster/

A company operates a website on Amazon EC2 Linux instances. Some of the instances are failing. Troubleshooting points to insufficient swap space on the failed instances. The operations team lead needs a solution to monitor this.What should a solutions architect recommend? A. Configure an Amazon CloudWatch Swap Usage metric dimension. Monitor the Swap Usage dimension in the EC2 metrics in CloudWatch. B. Use EC2 metadata to collect information, then publish it to Amazon CloudWatch custom metrics. Monitor Swap Usage metrics in CloudWatch. C. Install an Amazon CloudWatch agent on the instances. Run an appropriate script on a set schedule. Monitor Swap Utilization metrics in CloudWatch. D. Enable detailed monitoring in the EC2 console. Create an Amazon CloudWatch Swap Utilization custom metric. Monitor Swap Utilization metrics in CloudWatch.

Correct is A. Configure an Amazon CloudWatch Swap Usage metric dimension. Monitor the Swap Usage dimension in the EC2 metrics in CloudWatch. https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/metrics-collected-by-CloudWatch-agent.html

A company has a 143 TB MySQL database that it wants to migrate to AWS. The plan is to use Amazon Aurora MySQL as the platform going forward. The company has a 100 Mbps AWS Direct Connect connection to Amazon VPC.Which solution meets the company's needs and takes the LEAST amount of time? A. Use a gateway endpoint for Amazon S3. Migrate the data to Amazon S3. Import the data into Aurora. B. Upgrade the Direct Connect link to 500 Mbps. Copy the data to Amazon S3. Import the data into Aurora. C. Order an AWS Snowmobile and copy the database backup to it. Have AWS import the data into Amazon S3. Import the backup into Aurora. D. Order four 50-TB AWS Snowball devices and copy the database backup onto them. Have AWS import the data into Amazon S3. Import the data into Aurora.

D

A company has a build server that is in an Auto Scaling group and often has multiple Linux instances running. The build server requires consistent and mountable shared NFS storage for jobs and configurations.Which storage option should a solutions architect recommend? A. Amazon S3 B. Amazon FSx C. Amazon Elastic Block Store (Amazon EBS) D. Amazon Elastic File System (Amazon EFS)

D

A company has a large Microsoft SharePoint deployment running on-premises that requires Microsoft Windows shared file storage. The company wants to migrate this workload to the AWS Cloud and is considering various storage options. The storage solution must be highly available and integrated with Active Directory for access control. Which solution will satisfy these requirements? A. Configure Amazon EFS storage and set the Active Directory domain for authentication. B. Create an SMB file share on an AWS Storage Gateway file gateway in two Availability Zones. C. Create an Amazon S3 bucket and configure Microsoft Windows Server to mount it as a volume. D. Create an Amazon FSx for Windows File Server file system on AWS and set the Active Directory domain for authentication.

D

A company has an application that posts messages to Amazon SQS. Another application polls the queue and processes the messages in an I/O-intensive operation. The company has a service level agreement (SLA) that specifies the maximum amount of time that can elapse between receiving the messages and responding to the users. Due to an increase in the number of messages, the company has difficulty meeting its SLA consistently. What should a solutions architect do to help improve the application's processing time and ensure it can handle the load at any level? A. Create an Amazon Machine Image (AMI) from the instance used for processing. Terminate the instance and replace it with a larger size. B. Create an Amazon Machine Image (AMI) from the instance used for processing. Terminate the instance and replace it with an Amazon EC2 Dedicated Instance. C. Create an Amazon Machine Image (AMI) from the instance used for processing. Create an Auto Scaling group using this image in its launch configuration. Configure the group with a target tracking policy to keep its aggregate CPU utilization below 70%. D. Create an Amazon Machine Image (AMI) from the instance used for processing. Create an Auto Scaling group using this image in its launch configuration. Configure the group with a target tracking policy based on the age of the oldest message in the SQS queue.

D

A company hosts its core network services, including directory services and DNS, in its on-premises data center. The data center is connected to the AWS Cloud using AWS Direct Connect (DX). Additional AWS accounts are planned that will require quick, cost-effective, and consistent access to these network services. What should a solutions architect implement to meet these requirements with the LEAST amount of operational overhead? A. Create a DX connection in each new account. Route the network traffic to the on-premises servers. B. Configure VPC endpoints in the DX VPC for all required services. Route the network traffic to the on-premises servers. C. Create a VPN connection between each new account and the DX VPC. Route the network traffic to the on-premises servers. D. Configure AWS Transit Gateway between the accounts. Assign DX to the transit gateway and route network traffic to the on-premises servers.

D

A company hosts its website on AWS. To address the highly variable demand, the company has implemented Amazon EC2 Auto Scaling. Management is concerned that the company is over-provisioning its infrastructure, especially at the front end of the three-tier application. A solutions architect needs to ensure costs are optimized without impacting performance.What should the solutions architect do to accomplish this? A. Use Auto Scaling with Reserved Instances. B. Use Auto Scaling with a scheduled scaling policy. C. Use Auto Scaling with the suspend-resume feature D. Use Auto Scaling with a target tracking scaling policy.

D

A company is running a three-tier web application to process credit card payments. The front-end user interface consists of static webpages. The application tier can have long-running processes. The database tier uses MySQL.The application is currently running on a single, general purpose large Amazon EC2 instance. A solutions architect needs to decouple the services to make the web application highly available.Which solution would provide the HIGHEST availability? A. Move static assets to Amazon CloudFront. Leave the application in EC2 in an Auto Scaling group. Move the database to Amazon RDS to deploy Multi-AZ. B. Move static assets and the application into a medium EC2 instance. Leave the database on the large instance. Place both instances in an Auto Scaling group. C. Move static assets to Amazon S3, Move the application to AWS Lambda with the concurrency limit set. Move the database to Amazon DynamoDB with on- demand enabled. D. Move static assets to Amazon S3. Move the application to Amazon Elastic Container Service (Amazon ECS) containers with Auto Scaling enabled, Move the database to Amazon RDS to deploy Multi-AZ.

D

A company must generate sales reports at the beginning of every month. The reporting process launches 20 Amazon EC2 instances on the first of the month. The process runs for 7 days and cannot be interrupted. The company wants to minimize costs.Which pricing model should the company choose? A. Reserved Instances B. Spot Block Instances C. On-Demand Instances D. Scheduled Reserved Instances

D

A leasing company generates and emails PDF statements every month for all its customers. Each statement is about 400 KB in size. Customers can download their statements from the website for up to 30 days from when the statements were generated. At the end of their 3-year lease, the customers are emailed a ZIP file that contains all the statements.What is the MOST cost-effective storage solution for this situation? A. Store the statements using the Amazon S3 Standard storage class. Create a lifecycle policy to move the statements to Amazon S3 Glacier storage after 1 day. B. Store the statements using the Amazon S3 Glacier storage class. Create a lifecycle policy to move the statements to Amazon S3 Glacier Deep Archive storage after 30 days. C. Store the statements using the Amazon S3 Standard storage class. Create a lifecycle policy to move the statements to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) storage after 30 days. D. Store the statements using the Amazon S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Create a lifecycle policy to move the statements to Amazon S3 Glacier storage after 30 days.

D

A solutions architect is tasked with transferring 750 TB of data from a network-attached file system located at a branch office to Amazon S3 Glacier. The solution must avoid saturating the branch office's low-bandwidth internet connection. What is the MOST cost-effective solution? A. Create a site-to-site VPN tunnel to an Amazon S3 bucket and transfer the files directly. Create a bucket VPC endpoint. B. Order 10 AWS Snowball appliances and select an S3 Glacier vault as the destination. Create a bucket policy to enforce a VPC endpoint. C. Mount the network-attached file system to Amazon S3 and copy the files directly. Create a lifecycle policy to transition the S3 objects to Amazon S3 Glacier. D. Order 10 AWS Snowball appliances and select an Amazon S3 bucket as the destination. Create a lifecycle policy to transition the S3 objects to Amazon S3 Glacier.

D. Use Snowball appliances to move the files into S3, then a lifecycle policy to transition them to Amazon S3 Glacier; Snowball cannot deliver files directly into an S3 Glacier vault.

A company needs a secure connection between its on-premises environment and AWS. This connection does not need high bandwidth and will handle a small amount of traffic. The connection should be set up quickly.What is the MOST cost-effective method to establish this type of connection? A. Implement a client VPN. B. Implement AWS Direct Connect. C. Implement a bastion host on Amazon EC2. D. Implement an AWS Site-to-Site VPN connection.

D. AWS Site-to-Site VPN enables you to securely connect your on-premises network or branch office site to your Amazon Virtual Private Cloud (Amazon VPC), while AWS Client VPN enables you to securely connect individual users to AWS or on-premises networks. The difference between them is simple: a client-to-site VPN is characterized by single-user connections, whereas a site-to-site VPN connects entire networks. A is incorrect: AWS Client VPN is an AWS managed, highly available and scalable service for secure remote access that creates a TLS connection between remote clients and your VPCs (https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/aws-client-vpn.html). D is correct here: a Site-to-Site VPN secures the line, allows access to the AWS and network resources, and can be set up quickly and cost-effectively for low-bandwidth traffic.

A solutions architect is designing the storage architecture for a new web application used for storing and viewing engineering drawings. All application components will be deployed on the AWS infrastructure.The application design must support caching to minimize the amount of time that users wait for the engineering drawings to load. The application must be able to store petabytes of data. Which combination of storage and caching should the solutions architect use? A. Amazon S3 with Amazon CloudFront B. Amazon S3 Glacier with Amazon ElastiCache C. Amazon Elastic Block Store (Amazon EBS) volumes with Amazon CloudFront D. AWS Storage Gateway with Amazon ElastiCache

A. D is wrong because all application components must be deployed on AWS infrastructure (Storage Gateway is a hybrid, on-premises-facing service). B is wrong because S3 Glacier has retrieval delays. C is wrong because an EBS volume can hold at most 16 TB and is not cost-optimized compared to S3 for petabytes of data. So the answer is A: Amazon S3 for petabyte-scale storage with Amazon CloudFront for caching.

A company has an application that ingests incoming messages. These messages are then quickly consumed by dozens of other applications and microservices.The number of messages varies drastically and sometimes spikes as high as 100,000 each second. The company wants to decouple the solution and increase scalability.Which solution meets these requirements? A. Persist the messages to Amazon Kinesis Data Analytics. All the applications will read and process the messages. B. Deploy the application on Amazon EC2 instances in an Auto Scaling group, which scales the number of EC2 instances based on CPU metrics. C. Write the messages to Amazon Kinesis Data Streams with a single shard. All applications will read from the stream and process the messages. D. Publish the messages to an Amazon Simple Notification Service (Amazon SNS) topic with one or more Amazon Simple Queue Service (Amazon SQS) subscriptions. All applications then process the messages from the queues.

D. SQS standard queues support a nearly unlimited number of transactions per second (TPS) per API action, whereas a single Kinesis shard can ingest only up to 1 MB of data per second (including partition keys) or 1,000 records per second for writes, so the single-shard stream in option C cannot absorb spikes of 100,000 messages per second.
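A hedged boto3 sketch of the SNS-to-SQS fan-out in option D, shown for a single consumer queue; topic and queue names are placeholders, and each consuming application would get its own queue and subscription:

import boto3, json

sns = boto3.client("sns")
sqs = boto3.client("sqs")

topic_arn = sns.create_topic(Name="ingest-messages")["TopicArn"]

# Create one queue for one of the consuming applications.
queue_url = sqs.create_queue(QueueName="app-a-worker-queue")["QueueUrl"]
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Allow the topic to deliver into the queue, then subscribe the queue to the topic.
sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={
        "Policy": json.dumps({
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Allow",
                "Principal": {"Service": "sns.amazonaws.com"},
                "Action": "sqs:SendMessage",
                "Resource": queue_arn,
                "Condition": {"ArnEquals": {"aws:SourceArn": topic_arn}},
            }],
        })
    },
)
sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)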

A company wants to share forensic accounting data that is stored in an Amazon RDS DB instance with an external auditor. The auditor has its own AWS account and requires its own copy of the database.How should the company securely share the database with the auditor? A. Create a read replica of the database and configure IAM standard database authentication to grant the auditor access. B. Copy a snapshot of the database to Amazon S3 and assign an IAM role to the auditor to grant access to the object in that bucket. C. Export the database contents to text files, store the files in Amazon S3, and create a new IAM user for the auditor with access to that bucket. D. Make an encrypted snapshot of the database, share the snapshot, and allow access to the AWS Key Management Service (AWS KMS) encryption key.

D. Per https://aws.amazon.com/premiumsupport/knowledge-center/rds-snapshots-share-account/ : share an encrypted manual snapshot with the auditor's account and grant that account access to the KMS key used to encrypt it.
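
As a rough boto3 sketch of the sharing steps, assuming a hypothetical manual snapshot identifier, auditor account ID, and customer managed KMS key ARN (all illustrative):

import boto3

rds = boto3.client("rds")
kms = boto3.client("kms")

# 1. Share the manual, encrypted snapshot with the auditor's AWS account.
rds.modify_db_snapshot_attribute(
    DBSnapshotIdentifier="forensic-db-snapshot",  # assumption
    AttributeName="restore",
    ValuesToAdd=["111122223333"],                 # assumption: auditor account ID
)

# 2. Allow the auditor's account to use the customer managed KMS key so the
#    shared snapshot can be copied and restored in that account.
kms.create_grant(
    KeyId="arn:aws:kms:us-east-1:999999999999:key/example-key-id",  # assumption
    GranteePrincipal="arn:aws:iam::111122223333:root",              # assumption
    Operations=["Decrypt", "DescribeKey", "CreateGrant"],
)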

A company is launching a new application deployed on an Amazon Elastic Container Service (Amazon ECS) cluster and is using the Fargate launch type for ECS tasks. The company is monitoring CPU and memory usage because it is expecting high traffic to the application upon its launch. However, the company wants to reduce costs when utilization decreases.What should a solutions architect recommend? A. Use Amazon EC2 Auto Scaling to scale at certain periods based on previous traffic patterns. B. Use an AWS Lambda function to scale Amazon ECS based on metric breaches that trigger an Amazon CloudWatch alarm. C. Use Amazon EC2 Auto Scaling with simple scaling policies to scale when ECS metric breaches trigger an Amazon CloudWatch alarm. D. Use AWS Application Auto Scaling with target tracking policies to scale when ECS metric breaches trigger an Amazon CloudWatch alarm

D. See https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-auto-scaling.html : Amazon ECS Service Auto Scaling, built on AWS Application Auto Scaling, supports target tracking scaling policies, step scaling policies, and scheduled scaling.
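
To illustrate option D, a boto3 sketch that registers the ECS service with Application Auto Scaling and attaches a CPU target tracking policy (the cluster name, service name, capacity range, and 60% target are assumptions):

import boto3

aas = boto3.client("application-autoscaling")

resource_id = "service/example-cluster/example-service"  # assumption

# Register the Fargate service's desired count as a scalable target.
aas.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=20,
)

# Target tracking: scale out on high CPU and scale back in when utilization drops.
aas.put_scaling_policy(
    PolicyName="cpu-target-tracking",
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
    },
)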

A company has three VPCs named Development, Testing, and Production in the us-east-1 Region. The three VPCs need to be connected to an on-premises data center and are designed to be separate to maintain security and prevent any resource sharing. A solutions architect needs to find a scalable and secure solution. What should the solutions architect recommend? A. Create an AWS Direct Connect connection and a VPN connection for each VPC to connect back to the data center. B. Create VPC peers from all the VPCs to the Production VPC. Use an AWS Direct Connect connection from the Production VPC back to the data center. C. Connect VPN connections from all the VPCs to a VPN in the Production VPC. Use a VPN connection from the Production VPC back to the data center. D. Create a new VPC called Network. Within the Network VPC, create an AWS Transit Gateway with an AWS Direct Connect connection back to the data center. Attach all the other VPCs to the Network VPC.

D is correct. A Transit Gateway combined with Direct Connect gives a scalable and secure solution. For "prevent any resource sharing", each VPC gets its own transit gateway attachment, and the attachment limits keep the VPCs isolated: resources in Availability Zones where there is no transit gateway attachment cannot reach the transit gateway, traffic is only forwarded when the transit gateway has an attachment in a subnet in the same Availability Zone, and resources in one attached VPC cannot access the security groups of a different VPC attached to the same transit gateway. See the "Limits" section at https://docs.aws.amazon.com/vpc/latest/tgw/tgw-vpc-attachments.html

What should a solutions architect do to ensure that all objects uploaded to an Amazon S3 bucket are encrypted? A. Update the bucket policy to deny if the PutObject does not have an s3:x-amz-acl header set. B. Update the bucket policy to deny if the PutObject does not have an s3:x-amz-acl header set to private. C. Update the bucket policy to deny if the PutObject does not have an aws:SecureTransport header set to true. D. Update the bucket policy to deny if the PutObject does not have an x-amz-server-side-encryption header set.

D is correct: deny any PutObject request that does not include the x-amz-server-side-encryption header. https://aws.amazon.com/blogs/security/how-to-prevent-uploads-of-unencrypted-objects-to-amazon-s3/
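
A minimal sketch of such a bucket policy applied with boto3, assuming a hypothetical bucket name and using a Null condition on the encryption header (the blog post linked above shows an equivalent variant):

import json
import boto3

bucket = "example-secure-bucket"  # assumption: illustrative name

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnencryptedPuts",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
            # Deny any upload that omits the server-side encryption header.
            "Condition": {"Null": {"s3:x-amz-server-side-encryption": "true"}},
        }
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))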

A company has developed a microservices application. It uses a client-facing API with Amazon API Gateway and multiple internal services hosted on Amazon EC2 instances to process user requests. The API is designed to support unpredictable surges in traffic, but internal services may become overwhelmed and unresponsive for a period of time during surges. A solutions architect needs to design a more reliable solution that reduces errors when internal services become unresponsive or unavailable. Which solution meets these requirements? A. Use AWS Auto Scaling to scale up internal services when there is a surge in traffic. B. Use different Availability Zones to host internal services. Send a notification to a system administrator when an internal service becomes unresponsive. C. Use an Elastic Load Balancer to distribute the traffic between internal services. Configure Amazon CloudWatch metrics to monitor traffic to internal services. D. Use Amazon Simple Queue Service (Amazon SQS) to store user requests as they arrive. Change the internal services to retrieve the requests from the queue for processing.

D. SQS decouples API Gateway from the underlying compute. An Auto Scaling group on its own (A) cannot solve this; it would have to sit behind a load balancer, and even then D is the better answer because the queue holds requests during the brief window when all the EC2 instances are down or still scaling out.

A company has an automobile sales website that stores its listings in a database on Amazon RDS. When an automobile is sold, the listing needs to be removed from the website and the data must be sent to multiple target systems.Which design should a solutions architect recommend? A. Create an AWS Lambda function triggered when the database on Amazon RDS is updated to send the information to an Amazon Simple Queue Service (Amazon SQS) queue for the targets to consume. B. Create an AWS Lambda function triggered when the database on Amazon RDS is updated to send the information to an Amazon Simple Queue Service (Amazon SQS) FIFO queue for the targets to consume. C. Subscribe to an RDS event notification and send an Amazon Simple Queue Service (Amazon SQS) queue fanned out to multiple Amazon Simple Notification Service (Amazon SNS) topics. Use AWS Lambda functions to update the targets. D. Subscribe to an RDS event notification and send an Amazon Simple Notification Service (Amazon SNS) topic fanned out to multiple Amazon Simple Queue Service (Amazon SQS) queues. Use AWS Lambda functions to update the targets.

D. Classic fan-out pattern: an SNS topic with one SQS queue subscribed per target system. https://aws.amazon.com/blogs/compute/messaging-fanout-pattern-for-serverless-architectures-using-amazon-sns/
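
A condensed boto3 sketch of the fan-out wiring in option D (topic and queue names are assumptions; the SQS access policy that allows SNS to deliver messages is omitted for brevity):

import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

topic_arn = sns.create_topic(Name="listing-sold")["TopicArn"]  # assumption

# One queue per target system; each receives its own copy of every event.
for name in ["crm-updates", "billing-updates", "analytics-updates"]:  # assumptions
    queue_url = sqs.create_queue(QueueName=name)["QueueUrl"]
    queue_arn = sqs.get_queue_attributes(
        QueueUrl=queue_url, AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]
    sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)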

A company owns an asynchronous API that is used to ingest user requests and, based on the request type, dispatch requests to the appropriate microservice for processing. The company is using Amazon API Gateway to deploy the API front end, and an AWS Lambda function that invokes Amazon DynamoDB to store user requests before dispatching them to the processing microservices.The company provisioned as much DynamoDB throughput as its budget allows, but the company is still experiencing availability issues and is losing user requests.What should a solutions architect do to address this issue without impacting existing users? A. Add throttling on the API Gateway with server-side throttling limits. B. Use DynamoDB Accelerator (DAX) and Lambda to buffer writes to DynamoDB. C. Create a secondary index in DynamoDB for the table with the user requests. D. Use the Amazon Simple Queue Service (Amazon SQS) queue and Lambda to buffer writes to DynamoDB.

D. Server-side throttling (A) would impact existing users, because API Gateway returns errors once the limit is exceeded. An SQS queue in front of DynamoDB buffers the writes so requests are not lost when provisioned throughput is briefly exceeded.
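
A minimal sketch of the buffering consumer in option D, assuming the Lambda function is triggered by an SQS event source mapping and each message body is a JSON user request destined for a hypothetical DynamoDB table:

import json
import boto3

table = boto3.resource("dynamodb").Table("UserRequests")  # assumption: table name

def handler(event, context):
    # SQS delivers a batch of records; write them to DynamoDB in batches so the
    # queue, not the client, absorbs any throughput spikes.
    with table.batch_writer() as batch:
        for record in event["Records"]:
            batch.put_item(Item=json.loads(record["body"]))
    return {"written": len(event["Records"])}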

A company has an on-premises volume backup solution that has reached its end of life. The company wants to use AWS as part of a new backup solution and wants to maintain local access to all the data while it is backed up on AWS. The company wants to ensure that the data backed up on AWS is automatically and securely transferred.Which solution meets these requirements? A. Use AWS Snowball to migrate data out of the on-premises solution to Amazon S3. Configure on-premises systems to mount the Snowball S3 endpoint to provide local access to the data. B. Use AWS Snowball Edge to migrate data out of the on-premises solution to Amazon S3. Use the Snowball Edge file interface to provide on-premises systems with local access to the data. C. Use AWS Storage Gateway and configure a cached volume gateway. Run the Storage Gateway software appliance on premises and configure a percentage of data to cache locally. Mount the gateway storage volumes to provide local access to the data. D. Use AWS Storage Gateway and configure a stored volume gateway. Run the Storage Gateway software appliance on premises and map the gateway storage volumes to on-premises storage. Mount the gateway storage volumes to provide local access to the data.

D. Use AWS Storage Gateway configured as a stored volume gateway. Snowball is ruled out because the company must keep local access to all of the data. The Volume Gateway supports two configurations. Cached volumes: data is stored in Amazon S3 and only a cache of frequently accessed data is kept locally, which saves on primary storage but does not keep the full dataset on premises. Stored volumes: the gateway stores all data locally for low-latency access and asynchronously backs up point-in-time snapshots to Amazon S3, giving durable, inexpensive offsite backups that can be recovered to the local data center or to Amazon EC2. Stored volumes match the requirement exactly. https://docs.aws.amazon.com/storagegateway/latest/userguide/WhatIsStorageGateway.html

A company is hosting 60 TB of production-level data in an Amazon S3 bucket. A solutions architect needs to bring that data on premises for quarterly audit requirements. This export of data must be encrypted while in transit. The company has low network bandwidth in place between AWS and its on-premises data center. What should the solutions architect do to meet these requirements? A. Deploy AWS Migration Hub with 90-day replication windows for data transfer. B. Deploy an AWS Storage Gateway volume gateway on AWS. Enable a 90-day replication window to transfer the data. C. Deploy Amazon Elastic File System (Amazon EFS), with lifecycle policies enabled, on AWS. Use it to transfer the data. D. Deploy an AWS Snowball device in the on-premises data center after completing an export job request in the AWS Snowball console.

D. Snowball also supports export jobs from Amazon S3 (https://docs.aws.amazon.com/snowball/latest/ug/create-export-job-steps.html), data on the device is encrypted, and shipping a physical device is the right fit for 60 TB over a low-bandwidth connection. AWS Migration Hub is for tracking application portfolio migrations, not bulk data export.

A company is hosting its static website in an Amazon S3 bucket, which is the origin for Amazon CloudFront. The company has users in the United States, Canada, and Europe and wants to reduce costs.What should a solutions architect recommend? A. Adjust the CloudFront caching time to live (TTL) from the default to a longer timeframe. B. Implement CloudFront events with Lambda@Edge to run the website's data processing. C. Modify the CloudFront price class to include only the locations of the countries that are served. D. Implement a CloudFront Secure Sockets Layer (SSL) certificate to push security closer to the locations of the countries that are served.

C. https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PriceClass.html CloudFront has edge locations all over the world, and the cost per edge location varies, so the price charged depends on the edge location that serves the request. Edge locations are grouped into geographic regions, and regions are grouped into price classes. The default price class includes all regions; another includes most regions (the United States; Canada; Europe; Hong Kong, Philippines, South Korea, Taiwan, and Singapore; Japan; India; South Africa; and Middle East) but excludes the most expensive ones; a third includes only the least expensive regions (the United States, Canada, and Europe), which matches this company's user base.
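
Changing the price class is a one-field update to the distribution configuration. A boto3 sketch, assuming a hypothetical distribution ID and PriceClass_100 (United States, Canada, and Europe):

import boto3

cf = boto3.client("cloudfront")
dist_id = "E1EXAMPLE12345"  # assumption: illustrative distribution ID

# Fetch the current config and ETag, change only the price class, write it back.
resp = cf.get_distribution_config(Id=dist_id)
config = resp["DistributionConfig"]
config["PriceClass"] = "PriceClass_100"  # US, Canada, and Europe edge locations only

cf.update_distribution(Id=dist_id, DistributionConfig=config, IfMatch=resp["ETag"])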

A company has copied 1 PB of data from a colocation facility to an Amazon S3 bucket in the us-east-1 Region using an AWS Direct Connect link. The company now wants to copy the data to another S3 bucket in the us-west-2 Region. The colocation facility does not allow the use of AWS Snowball.What should a solutions architect recommend to accomplish this? A. Order a Snowball Edge device to copy the data from one Region to another Region. B. Transfer contents from the source S3 bucket to a target S3 bucket using the S3 console. C. Use the aws S3 sync command to copy data from the source bucket to the destination bucket. D. Add a cross-Region replication configuration to copy objects across S3 buckets in different Regions.

Final answer is option C: use the aws s3 sync command to copy data from the source bucket to the destination bucket. See https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-migrate-region/ and https://aws.amazon.com/premiumsupport/knowledge-center/s3-improve-transfer-sync-command/ for example invocations, e.g. aws s3 sync s3://source-AWSDOC-EXAMPLE-BUCKET/folder1 s3://destination-AWSDOC-EXAMPLE-BUCKET/folder1 and aws s3 sync s3://source-AWSDOC-EXAMPLE-BUCKET/folder2 s3://destination-AWSDOC-EXAMPLE-BUCKET/folder2. It cannot be D because cross-Region replication copies only NEW or MODIFIED objects; the data is already in S3, so adding CRR to the bucket would only sync objects written after replication is enabled.

A company recently released a new type of internet-connected sensor. The company is expecting to sell thousands of sensors, which are designed to stream high volumes of data each second to a central location. A solutions architect must design a solution that ingests and stores data so that engineering teams can analyze it in near-real time with millisecond responsiveness. Which solution should the solutions architect recommend? A. Use an Amazon SQS queue to ingest the data. Consume the data with an AWS Lambda function, which then stores the data in Amazon Redshift. B. Use an Amazon SQS queue to ingest the data. Consume the data with an AWS Lambda function, which then stores the data in Amazon DynamoDB. C. Use Amazon Kinesis Data Streams to ingest the data. Consume the data with an AWS Lambda function, which then stores the data in Amazon Redshift. D. Use Amazon Kinesis Data Streams to ingest the data. Consume the data with an AWS Lambda function, which then stores the data in Amazon DynamoDB.

My take: the choice is between C and D. C is wrong because Redshift is loaded in bulk from S3 rather than written record by record from Lambda in this pattern, and Redshift returns results in seconds or minutes, not milliseconds. D is better because the Lambda function can write each record into a DynamoDB table, and DynamoDB serves reads with millisecond latency, meeting the near-real-time, millisecond-responsiveness requirement. References: https://docs.amazonaws.cn/en_us/redshift/latest/dg/t_Loading_data.html https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/using-lambda-ddb-setup.html and page 7 of https://d12vzecr6ihe4p.cloudfront.net/media/965922/wp-an-introduction-to-amazon-redshirt.pdf (Redshift is fast, but never as fast as a NoSQL store designed for millisecond responses).
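
A minimal sketch of the consumer in option D: Kinesis delivers base64-encoded record payloads, which the Lambda function decodes and writes to a hypothetical DynamoDB table (the table name and JSON payload shape are assumptions):

import base64
import json
import boto3

table = boto3.resource("dynamodb").Table("SensorReadings")  # assumption: table name

def handler(event, context):
    with table.batch_writer() as batch:
        for record in event["Records"]:
            # Kinesis record data arrives base64-encoded.
            payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
            batch.put_item(Item=payload)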

A company uses Amazon Redshift for its data warehouse. The company wants to ensure high durability for its data in case of any component failure.What should a solutions architect recommend? A. Enable concurrency scaling. B. Enable cross-Region snapshots. C. Increase the data retention period. D. Deploy Amazon Redshift in Multi-AZ.

B. Concurrency scaling (A) adds capacity for concurrent user queries, which is a performance feature, not durability. Cross-Region snapshots (B) give a backup copy that survives a component or even Region failure. There is a snapshot retention period, but a "data retention period" (C) is not a durability control. Redshift is not deployed as a classic Multi-AZ cluster (D); per the FAQ, if the cluster's Availability Zone becomes unavailable, Redshift can move the cluster to another AZ without data loss, but only if the relocation capability is enabled in the cluster configuration, so D as written does not apply. https://aws.amazon.com/redshift/faqs/
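
Enabling cross-Region snapshot copy is a single API call. A boto3 sketch, assuming a hypothetical cluster identifier, source Region, destination Region, and retention period:

import boto3

redshift = boto3.client("redshift", region_name="us-east-1")  # assumption: source Region

# Automatically copy automated and manual snapshots to a second Region.
redshift.enable_snapshot_copy(
    ClusterIdentifier="analytics-cluster",  # assumption
    DestinationRegion="us-west-2",          # assumption
    RetentionPeriod=7,                      # keep copied automated snapshots 7 days
)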

A company wants to move a multi-tiered application from on premises to the AWS Cloud to improve the application's performance. The application consists of application tiers that communicate with each other by way of RESTful services. Transactions are dropped when one tier becomes overloaded. A solutions architect must design a solution that resolves these issues and modernizes the application.Which solution meets these requirements and is the MOST operationally efficient? A. Use Amazon API Gateway and direct transactions to the AWS Lambda functions as the application layer. Use Amazon Simple Queue Service (Amazon SQS) as the communication layer between application services. B. Use Amazon CloudWatch metrics to analyze the application performance history to determine the server's peak utilization during the performance failures. Increase the size of the application server's Amazon EC2 instances to meet the peak requirements. C. Use Amazon Simple Notification Service (Amazon SNS) to handle the messaging between application servers running on Amazon EC2 in an Auto Scaling group. Use Amazon CloudWatch to monitor the SNS queue length and scale up and down as required. D. Use Amazon Simple Queue Service (Amazon SQS) to handle the messaging between application servers running on Amazon EC2 in an Auto Scaling group. Use Amazon CloudWatch to monitor the SQS queue length and scale up when communication failures are detected.

I believe D is correct. Note that the number of messages in the SQS queue does not by itself define the number of instances needed; the instance count can be driven by multiple factors, including how long it takes to process a message and the acceptable amount of latency (queue delay), which is why CloudWatch is needed to publish and track the scaling metric. See https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-using-sqs-queue.html
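
The guide linked above scales on a "backlog per instance" custom metric rather than on raw queue length. A rough sketch of publishing that metric to CloudWatch (the queue URL, Auto Scaling group name, namespace, and metric name are all assumptions):

import boto3

sqs = boto3.client("sqs")
autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/work-queue"  # assumption
asg_name = "worker-asg"                                                    # assumption

backlog = int(
    sqs.get_queue_attributes(
        QueueUrl=queue_url, AttributeNames=["ApproximateNumberOfMessages"]
    )["Attributes"]["ApproximateNumberOfMessages"]
)
instances = autoscaling.describe_auto_scaling_groups(
    AutoScalingGroupNames=[asg_name]
)["AutoScalingGroups"][0]["DesiredCapacity"]

# Messages waiting per running instance; a target tracking policy keeps this near
# a value derived from per-message processing time and acceptable latency.
cloudwatch.put_metric_data(
    Namespace="Custom/SQS",                  # assumption
    MetricData=[{
        "MetricName": "BacklogPerInstance",  # assumption
        "Value": backlog / max(instances, 1),
        "Unit": "Count",
    }],
)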

A company has a custom application running on an Amazon EC2 instance that: • Reads a large amount of data from Amazon S3 • Performs a multi-stage analysis • Writes the results to Amazon DynamoDB. The application writes a significant number of large temporary files during the multi-stage analysis. The process performance depends on the temporary storage performance. What would be the fastest storage option for holding the temporary files? A. Multiple Amazon S3 buckets with Transfer Acceleration for storage. B. Multiple Amazon EBS drives with Provisioned IOPS and EBS optimization. C. Multiple Amazon EFS volumes using the Network File System version 4.1 (NFSv4.1) protocol. D. Multiple instance store volumes with software RAID 0.

D. A is wrong because S3 Transfer Acceleration speeds up uploads from outside AWS, whereas here the data is processed inside AWS. B needs high throughput to the volumes, but EBS is network-attached and limited by provisioned IOPS and the instance's EBS bandwidth. C (EFS over NFSv4.1) is likewise network-attached and shared, so it cannot match local disks. D is the fastest: instance store volumes are physically attached to the host, and striping them with software RAID 0 maximizes throughput for scratch data that does not need to survive the instance.

A company hosts its static website content from an Amazon S3 bucket in the us-east-1 Region. Content is made available through an Amazon CloudFront origin pointing to that bucket. Cross-Region replication is set to create a second copy of the bucket in the ap-southeast-1 Region. Management wants a solution that provides greater availability for the website.Which combination of actions should a solutions architect take to increase availability? (Choose two.) A. Add both buckets to the CloudFront origin. B. Configure failover routing in Amazon Route 53. C. Create a record in Amazon Route 53 pointing to the replica bucket. D. Create an additional CloudFront origin pointing to the ap-southeast-1 bucket. E. Set up a CloudFront origin group with the us-east-1 bucket as the primary and the ap-southeast-1 bucket as the secondary.

D & E: create an additional CloudFront origin pointing to the ap-southeast-1 bucket, then set up a CloudFront origin group with the us-east-1 bucket as primary and the ap-southeast-1 bucket as secondary so CloudFront fails over automatically. https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/high_availability_origin_failover.html

A solutions architect is performing a security review of a recently migrated workload. The workload is a web application that consists of Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer. The solutions architect must improve the security posture and minimize the impact of a DDoS attack on resources. Which solution is MOST effective? A. Configure an AWS WAF ACL with rate-based rules. Create an Amazon CloudFront distribution that points to the Application Load Balancer. Enable the WAF ACL on the CloudFront distribution. B. Create a custom AWS Lambda function that adds identified attacks into a common vulnerability pool to capture a potential DDoS attack. Use the identified information to modify a network ACL to block access. C. Enable VPC Flow Logs and store them in Amazon S3. Create a custom AWS Lambda function that parses the logs looking for a DDoS attack. Modify a network ACL to block identified source IP addresses. D. Enable Amazon GuardDuty and configure findings to be written to Amazon CloudWatch. Create a CloudWatch Events rule for DDoS alerts that triggers Amazon Simple Notification Service (Amazon SNS). Have Amazon SNS invoke a custom AWS Lambda function that parses the logs looking for a DDoS attack. Modify a network ACL to block identified source IP addresses.

It should be A: a CloudFront distribution in front of the ALB with an AWS WAF web ACL using rate-based rules absorbs and blocks DDoS traffic at the edge. https://aws.amazon.com/blogs/security/how-to-protect-dynamic-web-applications-against-ddos-attacks-by-using-amazon-cloudfront-and-amazon-route-53/
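
A trimmed boto3 sketch of option A's rate-based rule on a CloudFront-scoped web ACL (the ACL name, metric names, and the 2,000-requests-per-5-minutes limit are assumptions):

import boto3

# Web ACLs with Scope=CLOUDFRONT are managed through us-east-1.
wafv2 = boto3.client("wafv2", region_name="us-east-1")

wafv2.create_web_acl(
    Name="ddos-rate-limit",  # assumption
    Scope="CLOUDFRONT",
    DefaultAction={"Allow": {}},
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "ddosRateLimitAcl",
    },
    Rules=[{
        "Name": "rate-limit-per-ip",
        "Priority": 1,
        "Statement": {
            "RateBasedStatement": {"Limit": 2000, "AggregateKeyType": "IP"}  # assumption
        },
        "Action": {"Block": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "rateLimitPerIp",
        },
    }],
)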

A solutions architect must design a solution that uses Amazon CloudFront with an Amazon S3 origin to store a static website. The company's security policy requires that all website traffic be inspected by AWS WAF.How should the solutions architect comply with these requirements? A. Configure an S3 bucket policy to accept requests coming from the AWS WAF Amazon Resource Name (ARN) only. B. Configure Amazon CloudFront to forward all incoming requests to AWS WAF before requesting content from the S3 origin. C. Configure a security group that allows Amazon CloudFront IP addresses to access Amazon S3 only. Associate AWS WAF to CloudFront. D. Configure Amazon CloudFront and Amazon S3 to use an origin access identity (OAI) to restrict access to the S3 bucket. Enable AWS WAF on the distribution.

It's D: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-web-awswaf.htm The origin access identity (OAI) ensures that traffic must flow through CloudFront rather than reaching the S3 bucket directly, and enabling AWS WAF on the CloudFront distribution means every request to the website is inspected, which satisfies the security policy.

A solutions architect is designing a multi-Region disaster recovery solution for an application that will provide public API access. The application will use Amazon EC2 instances with a userdata script to load application code and an Amazon RDS for MySQL database. The Recovery Time Objective (RTO) is 3 hours and the Recovery Point Objective (RPO) is 24 hours. Which architecture would meet these requirements at the LOWEST cost? A. Use an Application Load Balancer for Region failover. Deploy new EC2 instances with the userdata script. Deploy separate RDS instances in each Region. B. Use Amazon Route 53 for Region failover. Deploy new EC2 instances with the userdata script. Create a read replica of the RDS instance in a backup Region. C. Use Amazon API Gateway for the public APIs and Region failover. Deploy new EC2 instances with the userdata script. Create a MySQL read replica of the RDS instance in a backup Region. D. Use Amazon Route 53 for Region failover. Deploy new EC2 instances with the userdata script for APIs, and create a snapshot of the RDS instance daily for a backup. Replicate the snapshot to a backup Region.

It's D. A daily snapshot replicated to the backup Region satisfies the 24-hour RPO at the lowest cost, whereas a cross-Region read replica runs (and is billed) continuously and requires automated backups to be enabled. https://brandonavant.com/ec2/rds_bkup_multiaz_readreplicas/
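
A boto3 sketch of option D's cross-Region snapshot copy, run from the backup Region (the snapshot ARN, identifiers, and Regions are assumptions; scheduling the daily snapshot and copy is left out):

import boto3

# Issue the copy from the destination (backup) Region.
rds = boto3.client("rds", region_name="us-west-2")  # assumption: backup Region

rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier=(
        "arn:aws:rds:us-east-1:123456789012:snapshot:rds:appdb-2021-01-01"  # assumption
    ),
    TargetDBSnapshotIdentifier="appdb-dr-copy-2021-01-01",  # assumption
    SourceRegion="us-east-1",  # boto3 generates the required pre-signed URL
)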

A solutions architect is creating an application that will handle batch processing of large amounts of data. The input data will be held in Amazon S3 and the output data will be stored in a different S3 bucket. For processing, the application will transfer the data over the network between multiple Amazon EC2 instances.What should the solutions architect do to reduce the overall data transfer costs? A. Place all the EC2 instances in an Auto Scaling group. B. Place all the EC2 instances in the same AWS Region. C. Place all the EC2 instances in the same Availability Zone. D. Place all the EC2 instances in private subnets in multiple Availability Zones.

C. It is batch processing, so high availability and fault tolerance are not critical. All the EC2 instances can reside in the same Availability Zone, so data transfer between the instances costs nothing. Costs can be reduced further because the S3 buckets are in the same Region.

A solutions architect needs to ensure that all Amazon Elastic Block Store (Amazon EBS) volumes restored from unencrypted EBS snapshots are encrypted. What should the solutions architect do to accomplish this? A. Enable EBS encryption by default for the AWS Region. B. Enable EBS encryption by default for the specific volumes. C. Create a new volume and specify the symmetric customer master key (CMK) to use for encryption. D. Create a new volume and specify the asymmetric customer master key (CMK) to use for encryption.

Looking at the information in the link below, it must be A: once EBS encryption by default is enabled for the Region, volumes restored from unencrypted snapshots are automatically encrypted. https://aws.amazon.com/premiumsupport/knowledge-center/ebs-automatic-encryption/
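
Option A is a single Region-level setting. A boto3 sketch (the Region is an assumption; the call has to be repeated in every Region the company uses):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumption: target Region

# Once enabled, newly created volumes, including volumes restored from
# unencrypted snapshots, are encrypted with the default KMS key for EBS.
ec2.enable_ebs_encryption_by_default()
print(ec2.get_ebs_encryption_by_default()["EbsEncryptionByDefault"])  # -> True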

A company needs to comply with a regulatory requirement that states all emails must be stored and archived externally for 7 years. An administrator has created compressed email files on premises and wants a managed service to transfer the files to AWS storage.Which managed service should a solutions architect recommend? A. Amazon Elastic File System (Amazon EFS) B. Amazon S3 Glacier C. AWS Backup D. AWS Storage Gateway

My take is D: AWS Storage Gateway is a managed service that can transfer the compressed email files from on premises into AWS storage for archiving. https://aws.amazon.com/storagegateway/?whats-new-cards.sort-by=item.additionalFields.postDateTime&whats-new-cards.sort-order=desc

A company is migrating a NoSQL database cluster to Amazon EC2. The database automatically replicates data to maintain at least three copies of the data. I/O throughput of the servers is the highest priority. Which instance type should a solutions architect recommend for the migration? A. Storage optimized instances with instance store B. Burstable general purpose instances with an Amazon Elastic Block Store (Amazon EBS) volume C. Memory optimized instances with Amazon Elastic Block Store (Amazon EBS) optimization enabled D. Compute optimized instances with Amazon Elastic Block Store (Amazon EBS) optimization enabled

Storage optimized instances fit the I/O requirement, but are argued not to be suitable for this database, so not A. EBS-optimized instances also boost I/O performance, so C and D are candidates: EBS-optimized instances let EC2 fully use the IOPS provisioned on an EBS volume and deliver dedicated throughput between EC2 and EBS, from 500 to 4,000 Mbps depending on the instance type (https://aws.amazon.com/ec2/instance-types/). Memory optimized vs. compute optimized: NoSQL databases are largely memory-bound, so memory optimized is the better fit. C: memory optimized instances are suited for high-performance relational (MySQL) and NoSQL (MongoDB, Cassandra) databases. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/memory-optimized-instances.html

A company's near-real-time streaming application is running on AWS. As the data is ingested, a job runs on the data and takes 30 minutes to complete. The workload frequently experiences high latency due to large amounts of incoming data. A solutions architect needs to design a scalable and serverless solution to enhance performance.Which combination of steps should the solutions architect take? (Choose two.) A. Use Amazon Kinesis Data Firehose to ingest the data. B. Use AWS Lambda with AWS Step Functions to process the data. C. Use AWS Database Migration Service (AWS DMS) to ingest the data. D. Use Amazon EC2 instances in an Auto Scaling group to process the data. E. Use AWS Fargate with Amazon Elastic Container Service (Amazon ECS) to process the data.

The solution is A and E. Two of the options are ingestion services and three are processing options. Because the pipeline is near real time, choose Amazon Kinesis Data Firehose (A) for ingestion. For processing, Lambda (B) is out because its maximum execution time is 15 minutes and the job takes 30 minutes (https://aws.amazon.com/lambda/faqs/). That leaves D and E; both would work, but the question asks for a serverless solution, so AWS Fargate with Amazon ECS (E). https://aws.amazon.com/fargate/?whats-new-cards.sort-by=item.additionalFields.postDateTime&whats-new-cards.sort-order=desc&fargate-blogs.sort-by=item.additionalFields.createdDate&fargate-blogs.sort-order=desc So A and E is the solution.

A company currently has 250 TB of backup files stored in Amazon S3 in a vendor's proprietary format. Using a Linux-based software application provided by the vendor, the company wants to retrieve files from Amazon S3, transform the files to an industry-standard format, and re-upload them to Amazon S3. The company wants to minimize the data transfer charges associated with this conversion. What should a solutions architect do to accomplish this? A. Install the conversion software as an Amazon S3 batch operation so the data is transformed without leaving Amazon S3. B. Install the conversion software onto an on-premises virtual machine. Perform the transformation and re-upload the files to Amazon S3 from the virtual machine. C. Use an AWS Snowball Edge device to export the data and install the conversion software onto the device. Perform the data transformation and re-upload the files to Amazon S3 from the Snowball Edge device. D. Launch an Amazon EC2 instance in the same Region as Amazon S3 and install the conversion software onto the instance. Perform the transformation and re-upload the files to Amazon S3 from the EC2 instance.

Yes, D is correct. There is no data transfer charge between EC2 and S3 within the same Region, per https://aws.amazon.com/ec2/pricing/on-demand/.

An operations team has a standard that states IAM policies should not be applied directly to users. Some new members have not been following this standard. The operations manager needs a way to easily identify the users with attached policies. What should a solutions architect do to accomplish this? A. Monitor using AWS CloudTrail. B. Create an AWS Config rule to run daily. C. Publish IAM user changes to Amazon SNS. D. Run AWS Lambda when a user is modified.

Answer: B. By using AWS Config to evaluate your resource configurations, you can assess how well they comply with internal practices, industry guidelines, and regulations; the managed rule iam-user-no-policies-check reports IAM users that have policies attached directly.
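
A boto3 sketch of option B using the AWS managed Config rule that flags IAM users with attached policies (the ConfigRuleName is an assumption; the SourceIdentifier is the managed rule's identifier):

import boto3

config = boto3.client("config")

config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "iam-users-no-attached-policies",  # assumption
        "Source": {
            "Owner": "AWS",
            # Managed rule: NON_COMPLIANT when an IAM user has policies attached.
            "SourceIdentifier": "IAM_USER_NO_POLICIES_CHECK",
        },
    }
)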

Management has decided to deploy all AWS VPCs with IPv6 enabled. After some time, a solutions architect tries to launch a new instance and receives an error stating that there is not enough IP address space available in the subnet.What should the solutions architect do to fix this? A. Check to make sure that only IPv6 was used during the VPC creation. B. Create a new IPv4 subnet with a larger range, and then launch the instance. C. Create a new IPv6-only subnet with a large range, and then launch the instance. D. Disable the IPv4 subnet and migrate all instances to IPv6 only. Once that is complete, launch the instance.

It cannot be A, C, or D: "You cannot disable IPv4 support for your VPC and subnets; this is the default IP addressing system for Amazon VPC and Amazon EC2." There is no way to run IPv6 only, so the answer is B: create a new, larger IPv4 subnet and launch the instance there.

A company's packaged application dynamically creates and returns single-use text files in response to user requests. The company is using Amazon CloudFront for distribution, but wants to further reduce data transfer costs. The company cannot modify the application's source code. What should a solutions architect do to reduce costs? A. Use Lambda@Edge to compress the files as they are sent to users. B. Enable Amazon S3 Transfer Acceleration to reduce the response times. C. Enable caching on the CloudFront distribution to store generated files at the edge. D. Use Amazon S3 multipart uploads to move the files to Amazon S3 before returning them to users.

Will go with A: Lambda@Edge can compress the files as they are returned to users, reducing the bytes transferred out of CloudFront without any change to the application's source code. The files are single-use, so caching (C) would not help. https://aws.amazon.com/lambda/edge/

