AWS SAA C02

A business is in the process of transferring its apps to AWS. At the moment, on-premises apps create hundreds of terabytes of data, which is kept on a shared file system. The organization is using a cloud-based analytics solution to derive insights from this data on an hourly basis. The business requires a solution to manage continuous data transfer between its on-premises shared file system and Amazon S3. Additionally, the solution must be capable of coping with brief gaps in internet access. Which data transmission options should the business utilize to achieve these requirements? A. AWS DataSync B. AWS Migration Hub C. AWS Snowball Edge Storage Optimized D. AWS Transfer for SFTP

A. AWS DataSync. From the DataSync FAQ (https://aws.amazon.com/datasync/faqs/): "What happens if an AWS DataSync task is interrupted? A: If a task is interrupted, for instance, if the network connection goes down or the AWS DataSync agent is restarted, the next run of the task will transfer missing files, and the data will be complete and consistent at the end of this run. Each time a task is started it performs an incremental copy, transferring only the changes from the source to the destination." Also from the FAQ: "When do I use AWS DataSync and when do I use AWS Snowball Edge? A: AWS DataSync is ideal for online data transfers. You can use DataSync to migrate active data to AWS, transfer data to the cloud for analysis and processing, archive data to free up on-premises storage capacity, or replicate data to AWS for business continuity. AWS Snowball Edge is ideal for offline data transfers, for customers who are bandwidth constrained, or transferring data from remote, disconnected, or austere environments." DataSync supports S3, EFS, and FSx for Windows File Server as targets; AWS Transfer for SFTP supports S3 and EFS as endpoints. C - Snowball Edge Storage/Compute Optimized devices are for physically moving data into or out of S3 (only Snowcone has DataSync built in). B - "AWS Migration Hub provides a single location to track the progress of application migrations across multiple AWS and partner solutions," so it does not move data itself. If torn between A and D, the FAQ entry above about interrupted tasks is what makes DataSync the fit for continuous transfers that must tolerate brief losses of internet connectivity.

A business has decided to move its three-tier web application from on premises to the AWS Cloud. The new database must be able to scale storage capacity dynamically and perform table joins. Which AWS service satisfies these criteria? A. Amazon Aurora B. Amazon RDS for SQL Server C. Amazon DynamoDB Streams D. Amazon DynamoDB on-demand

A. Amazon Aurora https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Managing.Performance.html Aurora storage automatically scales with the data in your cluster volume. As your data grows, your cluster volume storage expands up to a maximum of 128 tebibytes (TiB). See the documentation for what kinds of data are included in the cluster volume.

A solutions architect is improving a website in preparation for a forthcoming musical performance. Real-time streaming of the performances will be available, as well as on-demand viewing. The event is anticipated to draw a large internet audience from across the world. Which service will optimize both real-time and on-demand streaming performance? A. Amazon CloudFront B. AWS Global Accelerator C. Amazon Route 53 D. Amazon S3 Transfer Acceleration

A. Amazon CloudFront Answer: A You can use CloudFront to deliver video on demand (VOD) or live streaming video using any HTTP origin. One way you can set up video workflows in the cloud is by using CloudFront together with AWS Media Services. Global Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over IP, as well as for HTTP use cases that specifically require static IP addresses https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/on-demand-streaming-video.html

Users can retrieve past performance reports from a company's website. The website needs a solution that can scale to meet the company's worldwide demand. The solution should be cost-effective, minimize infrastructure resource provisioning, and deliver the fastest response time possible. Which combination of technologies might a solutions architect propose to satisfy these requirements? A. Amazon CloudFront and Amazon S3 B. AWS Lambda and Amazon DynamoDB C. Application Load Balancer with Amazon EC2 Auto Scaling D. Amazon Route 53 with internal Application Load Balancers

A. Amazon CloudFront and Amazon S3. CloudFront for the fast response time and S3 to minimize infrastructure provisioning.

A business keeps a searchable catalog of items on its website. The data is stored in a table with over ten million rows in an Amazon RDS for MySQL database. The database is stored on a 2 TB General Purpose SSD (gp2) volume. Every day, the company's website receives millions of updates to this data. The organization found that some operations were taking ten seconds or longer and concluded that the bottleneck was the database storage performance. Which option satisfies the performance requirement? A. Change the storage type to Provisioned IOPS SSD (io1). B. Change the instance to a memory-optimized instance class. C. Change the instance to a burstable performance DB instance class. D. Enable Multi-AZ RDS read replicas with MySQL native asynchronous replication.

A. Change the storage type to Provisioned IOPS SSD (io1). A. This is a case for I/O intensive operation: Getting the best performance from Amazon RDS Provisioned IOPS SSD storage If your workload is I/O constrained, using Provisioned IOPS SSD storage can increase the number of I/O requests that the system can process concurrently. Increased concurrency allows for decreased latency because I/O requests spend less time in a queue. Decreased latency allows for faster database commits, which improves response time and allows for higher database throughput. https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html
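To make option A concrete, the conversion to Provisioned IOPS is a single ModifyDBInstance call on the existing instance. A minimal boto3 sketch, assuming a hypothetical instance identifier and IOPS figure (not taken from the question):

    import boto3

    rds = boto3.client("rds")

    # Convert the existing gp2 volume to Provisioned IOPS SSD (io1).
    # "catalog-mysql" and the Iops value are hypothetical placeholders.
    rds.modify_db_instance(
        DBInstanceIdentifier="catalog-mysql",
        StorageType="io1",
        Iops=20000,                # provisioned IOPS for the write-heavy workload
        AllocatedStorage=2048,     # keep the existing 2 TB allocation
        ApplyImmediately=True,
    )

The storage modification runs in the background, so the instance typically remains available while the volume is converted.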

A meteorological start-up company has created a custom web application for selling weather data to its members online. The company currently uses Amazon DynamoDB to store its data and wishes to establish a new service that alerts the managers of four internal teams whenever a new weather event is recorded. The business does not want this new service to impair the operation of the present application. What steps should a solutions architect take to guarantee that these objectives are satisfied with the MINIMUM feasible operational overhead? A. Create a DynamoDB table in on-demand capacity mode. B. Create a DynamoDB table with a global secondary index. C. Create a DynamoDB table with provisioned capacity and auto scaling. D. Create a DynamoDB table in provisioned capacity mode, and configure it as a global table.

A. Create a DynamoDB table in on-demand capacity mode.

The web application of a business stores its data on an Amazon RDS PostgreSQL database instance. Accountants run massive queries at the start of each month during the financial closing period, which hurts the database's performance owing to excessive utilization. The business wants to reduce the effect of reporting on the web application. What should a solutions architect do to minimize the impact on the database with the LEAST amount of effort possible? A. Create a read replica and direct reporting traffic to the replica. B. Create a Multi-AZ database and direct reporting traffic to the standby. C. Create a cross-Region read replica and direct reporting traffic to the replica. D. Create an Amazon Redshift database and direct reporting traffic to the Amazon Redshift database.

A. Create a read replica and direct reporting traffic to the replica. Amazon RDS uses the MariaDB, MySQL, Oracle, PostgreSQL, and Microsoft SQL Server DB engines' built-in replication functionality to create a special type of DB instance called a read replica from a source DB instance. Updates made to the source DB instance are asynchronously copied to the read replica. You can reduce the load on your source DB instance by routing read queries from your applications to the read replica. When you create a read replica, you first specify an existing DB instance as the source. Then Amazon RDS takes a snapshot of the source instance and creates a read-only instance from the snapshot. Amazon RDS then uses the asynchronous replication method for the DB engine to update the read replica whenever there is a change to the source DB instance. The read replica operates as a DB instance that allows only read-only connections. Applications connect to a read replica the same way they do to any DB instance. Amazon RDS replicates all databases in the source DB instance. Reference: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html
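As a rough illustration of option A, creating the replica and finding the endpoint to hand to the reporting jobs takes only a few calls. A minimal boto3 sketch with hypothetical instance identifiers:

    import boto3

    rds = boto3.client("rds")

    # Create a read replica of the production PostgreSQL instance.
    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="webapp-postgres-reporting",   # hypothetical name
        SourceDBInstanceIdentifier="webapp-postgres",       # hypothetical source
    )

    # Wait for the replica, then point the reporting tools at its endpoint.
    rds.get_waiter("db_instance_available").wait(
        DBInstanceIdentifier="webapp-postgres-reporting"
    )
    replica = rds.describe_db_instances(
        DBInstanceIdentifier="webapp-postgres-reporting"
    )["DBInstances"][0]
    print(replica["Endpoint"]["Address"])   # reporting traffic goes to this host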

A firm seeks to migrate its accounting system from an on-premises data center to an Amazon Web Services (AWS) Region. Data security and an unalterable audit log should be prioritized. All AWS activities must be subject to compliance audits. Although the business has already enabled AWS CloudTrail, it wants to guarantee that it meets these requirements. What precautions and security procedures should a solutions architect include to protect and secure CloudTrail? (Choose two.) A. Create a second S3 bucket in us-east-1. Enable S3 Cross-Region Replication from the existing S3 bucket to the second S3 bucket. B. Create a cross-origin resource sharing (CORS) configuration of the existing S3 bucket. Specify us-east-1 in the CORS rule's AllowedOrigin element. C. Create a second S3 bucket in us-east-1 across multiple Availability Zones. Create an S3 Lifecycle management rule to save photos into the second S3 bucket. D. Create a second S3 bucket in us-east-1 to store the replicated photos. Configure S3 event notifications on object creation and update events that invoke an AWS Lambda function to copy photos from the existing S3 bucket to the second S3 bucket.

A. Create a second S3 bucket in us-east-1. Enable S3 Cross-Region Replication from the existing S3 bucket to the second S3 bucket. B. Create a cross-origin resource sharing (CORS) configuration of the existing S3 bucket. Specify us-east-1 in the CORS rule's AllowedOrigin element.

Each day, a company's hundreds of edge devices create 1 TB of status alerts. Each alert has a file size of roughly 2 KB. A solutions architect must provide a system for ingesting and storing the alerts for further analysis. The business needs a solution that is highly available. However, the business must keep costs low and does not want to manage additional infrastructure. Additionally, the corporation intends to keep 14 days of data available for immediate analysis and archive any older data. Which option BEST satisfies these requirements? A. Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the alerts to an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14 days. B. Launch Amazon EC2 instances across two Availability Zones and place them behind an Elastic Load Balancer to ingest the alerts. Create a script on the EC2 instances that will store the alerts in an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14 days. C. Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the alerts to an Amazon Elasticsearch Service (Amazon ES) cluster. Set up the Amazon ES cluster to take manual snapshots every day and delete data from the cluster that is older than 14 days. D. Create an Amazon Simple Queue Service (Amazon SQS) standard queue to ingest the alerts, and set the message retention period to 14 days. Configure consumers to poll the SQS queue, check the age of the message, and analyze the message data as needed. If the message is 14 days old, the consumer should copy the message to an Amazon S3 bucket and delete the message from the SQS queue.

A. Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the alerts to an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14 days. The requirement is a "solution to ingest and store the alerts for future analysis," so B and D are out. The answer is A because any data older than 14 days must be archived, not deleted.
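The archival half of option A boils down to one bucket-level lifecycle rule; the Kinesis Data Firehose delivery into the bucket is configured separately. A minimal boto3 sketch, with the bucket name, prefix, and rule ID as hypothetical placeholders:

    import boto3

    s3 = boto3.client("s3")

    # Move alert objects to S3 Glacier 14 days after they are written.
    s3.put_bucket_lifecycle_configuration(
        Bucket="edge-device-alerts",                  # hypothetical bucket name
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "archive-after-14-days",
                    "Status": "Enabled",
                    "Filter": {"Prefix": "alerts/"},   # Firehose delivery prefix
                    "Transitions": [{"Days": 14, "StorageClass": "GLACIER"}],
                }
            ]
        },
    )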

An Amazon EC2 instance-based application requires access to an Amazon DynamoDB database. The EC2 instance and DynamoDB table are both managed by the same AWS account. Permissions must be configured by a solutions architect. Which approach will provide the EC2 instance least privilege access to the DynamoDB table? A. Create an IAM role with the appropriate policy to allow access to the DynamoDB table. Create an instance profile to assign this IAM role to the EC2 instance. B. Create an IAM role with the appropriate policy to allow access to the DynamoDB table. Add the EC2 instance to the trust relationship policy document to allow it to assume the role. C. Create an IAM user with the appropriate policy to allow access to the DynamoDB table. Store the credentials in an Amazon S3 bucket and read them from within the application code directly. D. Create an IAM user with the appropriate policy to allow access to the DynamoDB table. Ensure that the application stores the IAM credentials securely on local storage and uses them to make the DynamoDB calls.

A. Create an IAM role with the appropriate policy to allow access to the DynamoDB table. Create an instance profile to assign this IAM role to the EC2 instance. A is correct. Roles are designed to be "assumed" by other principals which do define "who am I?", such as users, Amazon services, and EC2 instances. An instance profile, on the other hand, defines "who am I?" Just like an IAM user represents a person, an instance profile represents EC2 instances. The only permission an EC2 instance profile has is the power to assume a role. So the EC2 instance runs under the EC2 instance profile, defining "who" the instance is. It then "assumes" the IAM role, which ultimately gives it any real power. https://medium.com/devops-dudes/the-difference-between-an-aws-role-and-an-instance-profile-ae81abd700d#:~:text=Roles%20are%20designed%20to%20be,instance%20profile%20represents%20EC2%20instances.
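A minimal boto3 sketch of option A, using hypothetical names and ARNs: create a role that EC2 can assume, attach a policy scoped to one table, wrap the role in an instance profile, and associate the profile with the instance:

    import json
    import boto3

    iam = boto3.client("iam")
    ec2 = boto3.client("ec2")

    # Trust policy: only the EC2 service can assume this role.
    trust = {
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Allow",
                       "Principal": {"Service": "ec2.amazonaws.com"},
                       "Action": "sts:AssumeRole"}],
    }
    iam.create_role(RoleName="app-dynamodb-role",
                    AssumeRolePolicyDocument=json.dumps(trust))

    # Least privilege: only the needed actions, only on the one table (hypothetical ARN).
    policy = {
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Allow",
                       "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
                       "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/AppTable"}],
    }
    iam.put_role_policy(RoleName="app-dynamodb-role",
                        PolicyName="app-table-access",
                        PolicyDocument=json.dumps(policy))

    # The instance profile is the container that carries the role onto the instance.
    iam.create_instance_profile(InstanceProfileName="app-dynamodb-profile")
    iam.add_role_to_instance_profile(InstanceProfileName="app-dynamodb-profile",
                                     RoleName="app-dynamodb-role")
    ec2.associate_iam_instance_profile(
        IamInstanceProfile={"Name": "app-dynamodb-profile"},
        InstanceId="i-0123456789abcdef0",   # hypothetical instance ID
    )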

A business's production workload is hosted on an Amazon Aurora MySQL DB cluster comprised of six Aurora Replicas. The corporation wishes to automate the distribution of near-real-time reporting requests from one of its departments among three of the Aurora Replicas. These three replicas are configured with different compute and memory settings from the rest of the DB cluster. Which solution satisfies these criteria? A. Create and use a custom endpoint for the workload. B. Create a three-node cluster clone and use the reader endpoint. C. Use any of the instance endpoints for the selected three nodes. D. Use the reader endpoint to automatically distribute the read-only workload.

A. Create and use a custom endpoint for the workload.
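A custom endpoint is a named, load-balanced DNS entry for a chosen subset of instances in the cluster. A minimal boto3 sketch of option A, with the cluster and replica identifiers as hypothetical placeholders:

    import boto3

    rds = boto3.client("rds")

    # READER custom endpoint containing only the three reporting replicas.
    resp = rds.create_db_cluster_endpoint(
        DBClusterIdentifier="prod-aurora-mysql",
        DBClusterEndpointIdentifier="reporting",
        EndpointType="READER",
        StaticMembers=[
            "prod-aurora-replica-4",
            "prod-aurora-replica-5",
            "prod-aurora-replica-6",
        ],
    )
    print(resp["Endpoint"])   # point the near-real-time reporting jobs at this DNS name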

A firm just launched a two-tier application across two Availability Zones in the us-east-1 Region. Databases are located on a private subnet, whereas web servers are located on a public subnet. The VPC is connected to the internet through an internet gateway. Amazon EC2 instances are used to host the application and database. The database servers cannot currently reach the internet to download patches. A solutions architect must create a solution that preserves database security while incurring the least operational overhead. Which solution satisfies these criteria? A. Deploy a NAT gateway inside the public subnet for each Availability Zone and associate it with an Elastic IP address. Update the routing table of the private subnet to use it as the default route. B. Deploy a NAT gateway inside the private subnet for each Availability Zone and associate it with an Elastic IP address. Update the routing table of the private subnet to use it as the default route. C. Deploy two NAT instances inside the public subnet for each Availability Zone and associate them with Elastic IP addresses. Update the routing table of the private subnet to use it as the default route. D. Deploy two NAT instances inside the private subnet for each Availability Zone and associate them with Elastic IP addresses. Update the routing table of the private subnet to use it as the default route.

A. Deploy a NAT gateway inside the public subnet for each Availability Zone and associate it with an Elastic IP address. Update the routing table of the private subnet to use it as the default route. Ans: A Piece of cake. "least operational overhead". You can use a network address translation (NAT) gateway to enable instances in a private subnet to connect to the internet or other AWS services, but prevent the internet from initiating a connection with those instances
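A hedged boto3 sketch of option A for a single Availability Zone (repeat per AZ); the subnet and route table IDs are hypothetical placeholders:

    import boto3

    ec2 = boto3.client("ec2")

    # Elastic IP for the NAT gateway.
    eip = ec2.allocate_address(Domain="vpc")

    # The NAT gateway itself lives in the PUBLIC subnet of this AZ.
    nat = ec2.create_nat_gateway(
        SubnetId="subnet-0aaa1111bbb22222c",        # hypothetical public subnet
        AllocationId=eip["AllocationId"],
    )
    nat_id = nat["NatGateway"]["NatGatewayId"]
    ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

    # The PRIVATE subnet's route table sends its default route through the NAT gateway.
    ec2.create_route(
        RouteTableId="rtb-0ddd3333eee44444f",       # hypothetical private route table
        DestinationCidrBlock="0.0.0.0/0",
        NatGatewayId=nat_id,
    )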

A business intends to transfer a TCP-based application onto the company's virtual private cloud (VPC). The program is available to the public over an unsupported TCP port via a physical device located in the company's data center. This public endpoint has a latency of less than 3 milliseconds and can handle up to 3 million requests per second. The organization needs the new public endpoint in AWS to function at the same level of performance. What solution architecture approach should be recommended to satisfy this requirement? A. Deploy a Network Load Balancer (NLB). Configure the NLB to be publicly accessible over the TCP port that the application requires. B. Deploy an Application Load Balancer (ALB). Configure the ALB to be publicly accessible over the TCP port that the application requires. C. Deploy an Amazon CloudFront distribution that listens on the TCP port that the application requires. Use an Application Load Balancer as the origin. D. Deploy an Amazon API Gateway API that is configured with the TCP port that the application requires. Configure AWS Lambda functions with provisioned concurrency to process the requests.

A. Deploy a Network Load Balancer (NLB). Configure the NLB to be publicly accessible over the TCP port that the application requires. The answer should be (A), since we are required to handle 3 million requests per second. An NLB is able to handle tens of millions of requests per second while providing high performance and low latency. https://aws.amazon.com/blogs/aws/new-network-load-balancer-effortless-scaling-to-millions-of-requests-per-second/
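To illustrate option A, a minimal boto3 sketch of an internet-facing NLB with a TCP listener on the application's port; the name, subnets, VPC ID, and port number are hypothetical placeholders:

    import boto3

    elbv2 = boto3.client("elbv2")

    nlb = elbv2.create_load_balancer(
        Name="tcp-app-nlb",
        Type="network",
        Scheme="internet-facing",
        Subnets=["subnet-0aaa1111bbb22222c", "subnet-0ddd3333eee44444f"],
    )
    nlb_arn = nlb["LoadBalancers"][0]["LoadBalancerArn"]

    tg = elbv2.create_target_group(
        Name="tcp-app-targets",
        Protocol="TCP",
        Port=8081,                         # the TCP port the application listens on
        VpcId="vpc-0123456789abcdef0",
        TargetType="instance",
    )

    elbv2.create_listener(
        LoadBalancerArn=nlb_arn,
        Protocol="TCP",
        Port=8081,
        DefaultActions=[{"Type": "forward",
                         "TargetGroupArn": tg["TargetGroups"][0]["TargetGroupArn"]}],
    )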

A business runs a fleet of web servers backed by an Amazon RDS for PostgreSQL database instance. Following a routine compliance review, the corporation establishes a standard requiring all production databases to have a recovery point objective (RPO) of less than one second. Which solution satisfies these criteria? A. Enable a Multi-AZ deployment for the DB instance. B. Enable auto scaling for the DB instance in one Availability Zone. C. Configure the DB instance in one Availability Zone, and create multiple read replicas in a separate Availability Zone. D. Configure the DB instance in one Availability Zone, and configure AWS Database Migration Service (AWS DMS) change data capture (CDC) tasks.

A. Enable a Multi-AZ deployment for the DB instance. The RDS Multi-AZ configuration is the recommended approach for production environments due to its ability to support low RTO (recovery time objective) and RPO (recovery point objective) requirements.

Amazon S3 is used by a business to store private audit documents. Following the principle of least privilege, the S3 bucket uses bucket policies to restrict access to the audit team's IAM user credentials. Company executives are concerned about inadvertent document deletion in the S3 bucket and need a more secure solution. What steps should a solutions architect take to ensure the security of the audit documents? A. Enable the versioning and MFA Delete features on the S3 bucket. B. Enable multi-factor authentication (MFA) on the IAM user credentials for each audit team IAM user account. C. Add an S3 Lifecycle policy to the audit team's IAM user accounts to deny the s3:DeleteObject action during audit dates. D. Use AWS Key Management Service (AWS KMS) to encrypt the S3 bucket and restrict audit team IAM user accounts from accessing the KMS key.

A. Enable the versioning and MFA Delete features on the S3 bucket. "accidental deletion of documents in the S3 bucket and want a more secure solution" 101% is A
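Option A is a single bucket-level call, with two caveats worth remembering: MFA Delete can only be enabled by the bucket owner's root user, and the request must carry the MFA device serial plus a current token code. A hedged boto3 sketch with hypothetical values:

    import boto3

    s3 = boto3.client("s3")

    # Run with root credentials. The MFA string is
    # "<device serial or ARN> <current 6-digit code>" (values below are hypothetical).
    s3.put_bucket_versioning(
        Bucket="audit-documents",
        MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",
        VersioningConfiguration={
            "Status": "Enabled",
            "MFADelete": "Enabled",
        },
    )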

AWS is used by a business to run an online transaction processing (OLTP) workload. This workload is deployed in a Multi-AZ environment using an unencrypted Amazon RDS database instance. This instance's database is backed up daily. What should a solutions architect do to guarantee that the database and snapshots are always encrypted going forward? A. Encrypt a copy of the latest DB snapshot. Replace the existing DB instance by restoring the encrypted snapshot. B. Create a new encrypted Amazon Elastic Block Store (Amazon EBS) volume and copy the snapshots to it. Enable encryption on the DB instance. C. Copy the snapshots and enable encryption using AWS Key Management Service (AWS KMS). Restore encrypted snapshot to an existing DB instance. D. Copy the snapshots to an Amazon S3 bucket that is encrypted using server-side encryption with AWS Key Management Service (AWS KMS) managed keys (SSE-KMS).

A. Encrypt a copy of the latest DB snapshot. Replace the existing DB instance by restoring the encrypted snapshot. You can't restore from a DB snapshot to an existing DB instance; a new DB instance is created when you restore. https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_RestoreFromSnapshot.html#USER_RestoreFromSnapshot.CON So answer A is right: when you restore from a snapshot, a new DB instance is provisioned.
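A hedged boto3 sketch of option A (snapshot names, instance identifiers, and the KMS key alias are hypothetical): copy the latest snapshot with a KMS key to produce an encrypted copy, then restore a new, encrypted instance from it and cut the application over:

    import boto3

    rds = boto3.client("rds")

    # Supplying a KMS key on the copy is what encrypts it.
    rds.copy_db_snapshot(
        SourceDBSnapshotIdentifier="rds:oltp-db-2021-06-01-00-05",
        TargetDBSnapshotIdentifier="oltp-db-encrypted-copy",
        KmsKeyId="alias/rds-encryption-key",
    )
    rds.get_waiter("db_snapshot_available").wait(
        DBSnapshotIdentifier="oltp-db-encrypted-copy"
    )

    # Restoring always creates a NEW instance; it inherits the snapshot's encryption.
    rds.restore_db_instance_from_db_snapshot(
        DBInstanceIdentifier="oltp-db-encrypted",
        DBSnapshotIdentifier="oltp-db-encrypted-copy",
        MultiAZ=True,
    )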

On a single Amazon EC2 instance, a business runs an ASP.NET MVC application. Due to a recent spike in application usage, users are experiencing slow response times during lunch hours. The firm must address this issue with the least amount of configuration possible. What recommendation should a solutions architect make to satisfy these requirements? A. Move the application to AWS Elastic Beanstalk. Configure load-based auto scaling and time-based scaling to handle scaling during lunch hours. B. Move the application to Amazon Elastic Container Service (Amazon ECS). Create an AWS Lambda function to handle scaling during lunch hours. C. Move the application to Amazon Elastic Container Service (Amazon ECS). Configure scheduled scaling for AWS Application Auto Scaling during lunch hours. D. Move the application to AWS Elastic Beanstalk. Configure load-based auto scaling, and create an AWS Lambda function to handle scaling during lunch hours.

A. Move the application to AWS Elastic Beanstalk. Configure load-based auto scaling and time-based scaling to handle scaling during lunch hours. https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environments-cfg-autoscaling-scheduledactions.html

A solutions architect wants all new IAM users to meet specific password complexity requirements and rotate their IAM user passwords on a regular basis. What should the solutions architect do to accomplish this? A. Set an overall password policy for the entire AWS account B. Set a password policy for each IAM user in the AWS account. C. Use third-party vendor software to set password requirements. D. Attach an Amazon CloudWatch rule to the Create_newuser event to set the password with the appropriate requirements.

A. Set an overall password policy for the entire AWS account A Ref: You can set a custom password policy on your AWS account to specify complexity requirements and mandatory rotation periods for your IAM users' passwords. If you don't set a custom password policy, IAM user passwords must meet the default AWS password policy. https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_passwords_account-policy.html
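Option A maps to one account-level API call. A minimal boto3 sketch with hypothetical complexity and rotation values:

    import boto3

    iam = boto3.client("iam")

    # One policy governs every IAM user password in the account.
    iam.update_account_password_policy(
        MinimumPasswordLength=14,
        RequireUppercaseCharacters=True,
        RequireLowercaseCharacters=True,
        RequireNumbers=True,
        RequireSymbols=True,
        MaxPasswordAge=90,             # forces rotation every 90 days (hypothetical)
        PasswordReusePrevention=5,
        AllowUsersToChangePassword=True,
    )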

A business's data layer is powered by Amazon RDS for PostgreSQL databases. The organization must adopt database password rotation. Which option satisfies this criterion with the LEAST amount of operational overhead? A. Store the password in AWS Secrets Manager. Enable automatic rotation on the secret. B. Store the password in AWS Systems Manager Parameter Store. Enable automatic rotation on the parameter. C. Store the password in AWS Systems Manager Parameter Store. Write an AWS Lambda function that rotates the password. D. Store the password in AWS Key Management Service (AWS KMS). Enable automatic rotation on the customer master key (CMK).

A. Store the password in AWS Secrets Manager. Enable automatic rotation on the secret. Agreed, the answer is (A); the only service listed that rotates credentials automatically is Secrets Manager. https://aws.amazon.com/secrets-manager/ https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html (reference note)
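A hedged boto3 sketch of option A; the secret name, rotation Lambda ARN, and schedule are hypothetical. For RDS databases, Secrets Manager provides managed rotation function templates, so the Lambda does not need to be written from scratch:

    import boto3

    sm = boto3.client("secretsmanager")

    # Turn on automatic rotation for an existing database secret.
    sm.rotate_secret(
        SecretId="prod/postgres/app-user",
        RotationLambdaARN=("arn:aws:lambda:us-east-1:123456789012:"
                           "function:SecretsManagerRDSPostgreSQLRotation"),
        RotationRules={"AutomaticallyAfterDays": 30},
    )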

For the database layer of its ecommerce website, a firm uses Amazon DynamoDB with provisioned throughput. During flash sales, clients may encounter periods of delay when the database is unable to manage the volume of transactions. As a result, the business loses transactions. The database operates normally during regular periods. Which approach resolves the company's performance issue? A. Switch DynamoDB to on-demand mode during flash sales. B. Implement DynamoDB Accelerator for fast in-memory performance. C. Use Amazon Kinesis to queue transactions for processing to DynamoDB. D. Use Amazon Simple Queue Service (Amazon SQS) to queue transactions to DynamoDB.

A. Switch DynamoDB to on-demand mode during flash sales.

Amazon Redshift is being used by a business to do analytics and produce customer reports. The corporation just obtained an extra 50 terabytes of demographic data on its customers. The data is saved in .csv files in Amazon S3. The organization needs a system that efficiently merges the data and visualizes the findings. What recommendation should a solutions architect make to satisfy these requirements? A. Use Amazon Redshift Spectrum to query the data in Amazon S3 directly and join that data with the existing data in Amazon Redshift. Use Amazon QuickSight to build the visualizations. B. Use Amazon Athena to query the data in Amazon S3. Use Amazon QuickSight to join the data from Athena with the existing data in Amazon Redshift and to build the visualizations. C. Increase the size of the Amazon Redshift cluster, and load the data from Amazon S3. Use Amazon EMR Notebooks to query the data and build the visualizations in Amazon Redshift. D. Export the data from the Amazon Redshift cluster into Apache Parquet files in Amazon S3. Use Amazon Elasticsearch Service (Amazon ES) to query the data. Use Kibana to visualize the results.

A. Use Amazon Redshift Spectrum to query the data in Amazon S3 directly and join that data with the existing data in Amazon Redshift. Use Amazon QuickSight to build the visualizations. AWS Redshift Spectrum is a feature that comes automatically with Redshift. It can execute SQL queries on CSV files that are stored in S3 using AWS Redshift Spectrum and the EXTERNAL command. And then add Amazon QuickSight for visualization. Use Amazon Redshift Spectrum to query data in Amazon S3 files without having to load the data into Amazon Redshift tables. Amazon Redshift provides SQL capability designed for fast online analytical processing (OLAP) of very large datasets that are stored in both Amazon Redshift clusters and Amazon S3 data lakes. https://docs.aws.amazon.com/redshift/latest/gsg/concepts-diagrams.html

A business uses a tape backup system to store critical application data offsite. Daily data volume is in the neighborhood of 50 TB. For regulatory requirements, the firm must maintain backups for seven years. Backups are infrequently accessed, and a week's notice is normally given before a backup needs to be restored. The organization is now investigating a cloud-based solution in order to cut storage expenses and the operational load associated with tape management. Additionally, the organization wants to ensure that the move from tape backups to the cloud is as seamless as possible. Which storage option is the CHEAPEST? A. Use Amazon Storage Gateway to back up to Amazon Glacier Deep Archive. B. Use AWS Snowball Edge to directly integrate the backups with Amazon S3 Glacier. C. Copy the backup data to Amazon S3 and create a lifecycle policy to move the data to Amazon S3 Glacier. D. Use Amazon Storage Gateway to back up to Amazon S3 and create a lifecycle policy to move the backup to Amazon S3 Glacier.

A. Use Amazon Storage Gateway to back up to Amazon Glacier Deep Archive.

Amazon EC2 instances on private subnets are used to execute an application. The application requires access to a table in Amazon DynamoDB. What is the MOST SECURE method of accessing the table without allowing traffic to exit the AWS network? A. Use a VPC endpoint for DynamoDB. B. Use a NAT gateway in a public subnet. C. Use a NAT instance in a private subnet. D. Use the internet gateway attached to the VPC.

A. Use a VPC endpoint for DynamoDB. Explanation: a VPC endpoint for DynamoDB is a gateway endpoint. It adds a route in the subnet's route table so that requests to DynamoDB stay on the AWS network and never traverse the internet, which is exactly what the question asks for. (Interface endpoints, which use AWS PrivateLink and an elastic network interface with a private IP address, provide the same private access for other supported AWS services, services hosted by other AWS accounts, and supported AWS Marketplace partner services.)
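A minimal boto3 sketch of option A (VPC ID, Region, and route table IDs are hypothetical): the gateway endpoint adds a prefix-list route so DynamoDB calls from the private subnets never leave the AWS network:

    import boto3

    ec2 = boto3.client("ec2")

    ec2.create_vpc_endpoint(
        VpcEndpointType="Gateway",
        VpcId="vpc-0123456789abcdef0",
        ServiceName="com.amazonaws.us-east-1.dynamodb",
        RouteTableIds=["rtb-0aaa1111bbb22222c", "rtb-0ddd3333eee44444f"],
    )

An endpoint policy can also be attached to restrict which tables and actions are reachable through it.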

To allow near-real-time processing, a web application must persist order data to Amazon S3. A solutions architect must design a scalable and fault-tolerant architecture. Which solutions satisfy these criteria? (Select two.) A. Write the order event to an Amazon DynamoDB table. Use DynamoDB Streams to trigger an AWS Lambda function that parses the payload and writes the data to Amazon S3. B. Write the order event to an Amazon Simple Queue Service (Amazon SQS) queue. Use the queue to trigger an AWS Lambda function that parses the payload and writes the data to Amazon S3. C. Write the order event to an Amazon Simple Notification Service (Amazon SNS) topic. Use the SNS topic to trigger an AWS Lambda function that parses the payload and writes the data to Amazon S3. D. Write the order event to an Amazon Simple Queue Service (Amazon SQS) queue. Use an Amazon EventBridge (Amazon CloudWatch Events) rule to trigger an AWS Lambda function that parses the payload and writes the data to Amazon S3. E. Write the order event to an Amazon Simple Notification Service (Amazon SNS) topic. Use an Amazon EventBridge (Amazon CloudWatch Events) rule to trigger an AWS Lambda function that parses the payload and writes the data to Amazon S3.

A. Write the order event to an Amazon DynamoDB table. Use DynamoDB Streams to trigger an AWS Lambda function that parses the payload and writes the data to Amazon S3. C. Write the order event to an Amazon Simple Notification Service (Amazon SNS) topic. Use the SNS topic to trigger an AWS Lambda function that parses the payload and writes the data to Amazon S3. A: Changes in DynamoDB (create, update, delete) can end up in a DynamoDB Stream. This stream can be read by AWS Lambda, a KCL application, or Kinesis Data Streams, so we can react to changes in near real time. C: SNS works in real time, and Lambda is a valid subscriber (the list includes SQS, Lambda, HTTPS, Kinesis Data Firehose, email, and mobile). B and D are not suitable since SQS (polling mode) is not real time. E is unsuitable because EventBridge is not a valid SNS destination (subscriber).
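To make choice A concrete, a rough sketch of the Lambda handler a DynamoDB Stream could trigger; the bucket name and the orderId attribute are hypothetical, and the record layout follows the standard DynamoDB Streams event shape:

    import json
    import boto3

    s3 = boto3.client("s3")
    BUCKET = "order-events-archive"   # hypothetical bucket

    def handler(event, context):
        # Each invocation receives a batch of stream records.
        for record in event["Records"]:
            if record["eventName"] != "INSERT":
                continue
            new_image = record["dynamodb"]["NewImage"]   # DynamoDB-typed attributes
            order_id = new_image["orderId"]["S"]         # hypothetical attribute name
            s3.put_object(
                Bucket=BUCKET,
                Key=f"orders/{order_id}.json",
                Body=json.dumps(new_image),
            )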

A new employee has been hired as a deployment engineer by a corporation. The deployment engineer will create several AWS resources using AWS CloudFormation templates. A solutions architect wants the deployment engineer to perform job functions with the least amount of privilege possible. Which steps should the solutions architect take in combination to reach this goal? (Select two.) A. Have the deployment engineer use the AWS account root user credentials for performing AWS CloudFormation stack operations. B. Create a new IAM user for the deployment engineer and add the IAM user to a group that has the PowerUsers IAM policy attached. C. Create a new IAM user for the deployment engineer and add the IAM user to a group that has the AdministratorAccess IAM policy attached. D. Create a new IAM user for the deployment engineer and add the IAM user to a group that has an IAM policy that allows AWS CloudFormation actions only. E. Create an IAM role for the deployment engineer to explicitly define the permissions specific to the AWS CloudFormation stack and launch stacks using that IAM role.

Answer is D and E. https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users.html

On Amazon EC2 instances, a business runs an application. The volume of traffic to the webpage grows significantly during business hours and then falls. The CPU usage of an Amazon EC2 instance is a good measure of the application's end-user demand. The organization has specified a minimum group size of two EC2 instances and a maximum group size of ten EC2 instances for an Auto Scaling group. The firm is worried that the Auto Scaling group's existing scaling policy may be incorrect. The organization must prevent excessive EC2 instance provisioning and paying unneeded fees. What recommendations should a solutions architect make to satisfy these requirements? A. Configure Amazon EC2 Auto Scaling to use a scheduled scaling plan and launch an additional 8 EC2 instances during business hours. B. Configure AWS Auto Scaling to use a scaling plan that enables predictive scaling. Configure predictive scaling with a scaling mode of forecast and scale, and to enforce the maximum capacity setting during scaling. C. Configure a step scaling policy to add 4 EC2 instances at 50% CPU utilization and add another 4 EC2 instances at 90% CPU utilization. Configure scale-in policies to perform the reverse and remove EC2 instances based on the two values. D. Configure AWS Auto Scaling to have a desired capacity of 5 EC2 instances, and disable any existing scaling policies. Monitor the CPU utilization metric for 1 week. Then create dynamic scaling policies that are based on the observed values.

B. Configure AWS Auto Scaling to use a scaling plan that enables predictive scaling. Configure predictive scaling with a scaling mode of forecast and scale, and to enforce the maximum capacity setting during scaling.

A solutions architect is developing a hybrid application on the Amazon Web Services (AWS) Cloud. AWS Direct Connect (DX) will be used to connect the on-premises data center to AWS. The application connectivity between AWS and the on-premises data center must be highly resilient. Which DX setup should be used to satisfy these criteria? A. Configure a DX connection with a VPN on top of it. B. Configure DX connections at multiple DX locations. C. Configure a DX connection using the most reliable DX partner. D. Configure multiple virtual interfaces on top of a DX connection.

B. Configure DX connections at multiple DX locations. Highly resilient, fault-tolerant network connections are key to a well-architected system. AWS recommends connecting from multiple data centers for physical location redundancy. https://aws.amazon.com/directconnect/resiliency-recommendation/

A business uses Amazon Elastic Container Service (Amazon ECS) to perform an image processing workload on two private subnets. Each private subnet connects to the internet through a NAT instance. Amazon S3 buckets are used to store all photos. The business is worried about the expenses associated with data transfers between Amazon ECS and Amazon S3. What actions should a solutions architect do to save money? A. Configure a NAT gateway to replace the NAT instances. B. Configure a gateway endpoint for traffic destined to Amazon S3. C. Configure an interface endpoint for traffic destined to Amazon S3. D. Configure Amazon CloudFront for the S3 bucket storing the images.

B. Configure a gateway endpoint for traffic destined to Amazon S3. Data transfer between Amazon ECS and Amazon S3 can stay inside the AWS network, so an S3 gateway endpoint is enough. S3 supports both gateway and interface endpoints, but a gateway endpoint to S3 is not billed, and the question asks to reduce cost. For more information, see the comparison of the S3 gateway endpoint and the S3 interface endpoint in https://docs.aws.amazon.com/AmazonS3/latest/userguide/privatelink-interface-endpoints.html

A company's ecommerce site is seeing a rise in visitor traffic. The company's store is implemented as a two-tier application on Amazon EC2 instances, with a web tier and a separate database tier. As traffic rises, the organization detects severe delays in delivering timely marketing and purchase confirmation emails to consumers due to the design. The organization wishes to decrease the amount of time spent resolving complex email delivery problems and to cut operating costs. What action should a solutions architect take to ensure that these criteria are met? A. Create a separate application tier using EC2 instances dedicated to email processing. B. Configure the web instance to send email through Amazon Simple Email Service (Amazon SES). C. Configure the web instance to send email through Amazon Simple Notification Service (Amazon SNS). D. Create a separate application tier using EC2 instances dedicated to email processing. Place the instances in an Auto Scaling group.

B. Configure the web instance to send email through Amazon Simple Email Service (Amazon SES). The answer has to be B according to https://aws.amazon.com/ses/ - Amazon Simple Email Service (SES) is a cost-effective, flexible, and scalable email service that enables developers to send mail from within any application. You can configure Amazon SES quickly to support several email use cases, including transactional, marketing, or mass email communications. Use cases: Transactional emails - send immediate, trigger-based communications from your application to customers, such as purchase confirmations or password resets. Marketing emails - promote your products and services, such as special offers and newsletters, with customized content and email templates. Bulk email communication - send bulk communications, including notifications and announcements, to large communities, and track results using configuration sets. Amazon SES's flexible IP deployment and email authentication options help drive higher deliverability and protect sender reputation, while sending analytics measure the impact of each email.

A financial institution uses AWS to host a web application. The program retrieves current stock prices using an Amazon API Gateway Regional API endpoint. The security staff at the organization has detected an upsurge in API queries. The security team is worried that HTTP flood attacks may result in the application being rendered inoperable. A solutions architect must create a defense against this form of assault. Which method satisfies these criteria with the LEAST amount of operational overhead? A. Create an Amazon CloudFront distribution in front of the API Gateway Regional API endpoint with a maximum TTL of 24 hours. B. Create a Regional AWS WAF web ACL with a rate-based rule. Associate the web ACL with the API Gateway stage. C. Use Amazon CloudWatch metrics to monitor the Count metric and alert the security team when the predefined rate is reached. D. Create an Amazon CloudFront distribution with Lambda@Edge in front of the API Gateway Regional API endpoint. Create an AWS Lambda function to block requests from IP addresses that exceed the predefined rate.

B. Create a Regional AWS WAF web ACL with a rate-based rule. Associate the web ACL with the API Gateway stage. This is a form of DDOS protection. So AWS WAF does the best with least efforts. https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-control-access-aws-waf.html
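A hedged boto3 sketch of option B: a Regional web ACL whose single rule blocks any source IP that exceeds a request-rate threshold, associated with the API Gateway stage. The names, the 2,000-request limit, and the stage ARN are hypothetical placeholders:

    import boto3

    waf = boto3.client("wafv2")

    acl = waf.create_web_acl(
        Name="api-flood-protection",
        Scope="REGIONAL",
        DefaultAction={"Allow": {}},
        VisibilityConfig={"SampledRequestsEnabled": True,
                          "CloudWatchMetricsEnabled": True,
                          "MetricName": "apiFloodProtection"},
        Rules=[{
            "Name": "rate-limit-per-ip",
            "Priority": 0,
            "Action": {"Block": {}},
            "Statement": {"RateBasedStatement": {"Limit": 2000,
                                                 "AggregateKeyType": "IP"}},
            "VisibilityConfig": {"SampledRequestsEnabled": True,
                                 "CloudWatchMetricsEnabled": True,
                                 "MetricName": "rateLimitPerIp"},
        }],
    )

    # Attach the web ACL to the Regional API Gateway stage (hypothetical ARN).
    waf.associate_web_acl(
        WebACLArn=acl["Summary"]["ARN"],
        ResourceArn="arn:aws:apigateway:us-east-1::/restapis/a1b2c3/stages/prod",
    )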

Management needs a summary of AWS billed items, broken down by user, as part of the budget planning process. The data will be used to create budgets for departments. A solutions architect must determine the most effective method of obtaining this report data. Which solution satisfies these criteria? A. Run a query with Amazon Athena to generate the report. B. Create a report in Cost Explorer and download the report. C. Access the bill details from the billing dashboard and download the bill. D. Modify a cost budget in AWS Budgets to alert with Amazon Simple Email Service (Amazon SES).

B. Create a report in Cost Explorer and download the report. Cost Explorer generates the AWS Cost and Usage Reports and the detailed billing reports. https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/ce-what-is.html

Each month, a business stores 200 GB of data on Amazon S3. At the end of each month, the corporation must analyze this data to calculate the number of items sold in each sales region during the preceding month. Which analytics approach is the MOST cost-effective option for the business? A. Create an Amazon Elasticsearch Service (Amazon ES) cluster. Query the data in Amazon ES. Visualize the data by using Kibana. B. Create a table in the AWS Glue Data Catalog. Query the data in Amazon S3 by using Amazon Athena. Visualize the data in Amazon QuickSight. C. Create an Amazon EMR cluster. Query the data by using Amazon EMR, and store the results in Amazon S3. Visualize the data in Amazon QuickSight. D. Create an Amazon Redshift cluster. Query the data in Amazon Redshift, and upload the results to Amazon S3. Visualize the data in Amazon QuickSight.

B. Create a table in the AWS Glue Data Catalog. Query the data in Amazon S3 by using Amazon Athena. Visualize the data in Amazon QuickSight.

A development team must have a website that is accessible to other development teams. HTML, CSS, client-side JavaScript, and graphics comprise the website's content. Which form of website hosting is the MOST cost-effective? A. Containerize the website and host it in AWS Fargate. B. Create an Amazon S3 bucket and host the website there . C. Deploy a web server on an Amazon EC2 instance to host the website. D. Configure an Application Load Balancer with an AWS Lambda target that uses the Express.js framework.

B. Create an Amazon S3 bucket and host the website there Static vs Dynamic Website : In Static Websites, Web pages are returned by the server which are prebuilt. They use simple languages such as HTML, CSS, or JavaScript. There is no processing of content on the server (according to the user) in Static Websites. Web pages are returned by the server with no change therefore, static Websites are fast. There is no interaction with databases. Also, they are less costly as the host does not need to support server-side processing with different languages. ============ In Dynamic Websites, Web pages are returned by the server which are processed during runtime means they are not prebuilt web pages but they are built during runtime according to the user's demand. These use server-side scripting languages such as PHP, Node.js, ASP.NET and many more supported by the server. So, they are slower than static websites but updates and interaction with databases are possible.

Each entrance to a company's facility is equipped with badge readers. When badges are scanned, the readers transmit an HTTPS message indicating who tried to enter through that specific entrance. A solutions architect must develop a system that will handle these sensor messages. The solution must be highly available, with the results made available for analysis by the company's security staff. Which system design should the solutions architect recommend? A. Launch an Amazon EC2 instance to serve as the HTTPS endpoint and to process the messages. Configure the EC2 instance to save the results to an Amazon S3 bucket. B. Create an HTTPS endpoint in Amazon API Gateway. Configure the API Gateway endpoint to invoke an AWS Lambda function to process the messages and save the results to an Amazon DynamoDB table. C. Use Amazon Route 53 to direct incoming sensor messages to an AWS Lambda function. Configure the Lambda function to process the messages and save the results to an Amazon DynamoDB table. D. Create a gateway VPC endpoint for Amazon S3. Configure a Site-to-Site VPN connection from the facility network to the VPC so that sensor data can be written directly to an S3 bucket by way of the VPC endpoint.

B. Create an HTTPS endpoint in Amazon API Gateway. Configure the API Gateway endpoint to invoke an AWS Lambda function to process the messages and save the results to an Amazon DynamoDB table. B is correct. A is based on servers; B is serverless (Lambda). C - not very sure; can Route 53 route directly to Lambda without API Gateway in front? D - overkill, and we still need some processing before writing the data to S3, which is missing.

A business's on-premises data center has reached its storage limit. The organization wishes to shift its storage system to AWS while keeping bandwidth costs as low as possible. The solution must enable rapid and cost-free data retrieval. How are these stipulations to be met? A. Deploy Amazon S3 Glacier Vault and enable expedited retrieval. Enable provisioned retrieval capacity for the workload. B. Deploy AWS Storage Gateway using cached volumes. Use Storage Gateway to store data in Amazon S3 while retaining copies of frequently accessed data subsets locally. C. Deploy AWS Storage Gateway using stored volumes to store data locally. Use Storage Gateway to asynchronously back up point-in-time snapshots of the data to Amazon S3. D. Deploy AWS Direct Connect to connect with the on-premises data center. Configure AWS Storage Gateway to store data locally. Use Storage Gateway to asynchronously back up point-in-time snapshots of the data to Amazon S3.

B. Deploy AWS Storage Gateway using cached volumes. Use Storage Gateway to store data in Amazon S3 while retaining copies of frequently accessed data subsets locally. Answer is B. C is wrong because they are already running out of space on premises, so why would they store the data locally again?

A business is creating a website that will store static photos in an Amazon S3 bucket. The company's goal is to reduce both latency and cost for all future requests. How should a solutions architect propose a service configuration? A. Deploy a NAT server in front of Amazon S3. B. Deploy Amazon CloudFront in front of Amazon S3. C. Deploy a Network Load Balancer in front of Amazon S3. D. Configure Auto Scaling to automatically adjust the capacity of the website.

B. Deploy Amazon CloudFront in front of Amazon S3. Keywords are static content on S3 and Faster response

On Amazon EC2 instances, a business runs an application. The application is deployed on private subnets inside the us-east-1 Region's three Availability Zones. The instances must have internet access in order to download files. The organization is looking for a design that is highly available within the Region. Which solution should be implemented to guarantee that internet access is not disrupted? A. Deploy a NAT instance in a private subnet of each Availability Zone. B. Deploy a NAT gateway in a public subnet of each Availability Zone. C. Deploy a transit gateway in a private subnet of each Availability Zone. D. Deploy an internet gateway in a public subnet of each Availability Zone.

B. Deploy a NAT gateway in a public subnet of each Availability Zone. NAT instance/GW is used to give internet access to EC2 in private subnets. NAT instance/GW is always in Public Subnet. RT of private subnet contains a route to NAT GW/NAT instance. Choose NAT GW (AWS Managed)over NAT instance if above is satisfied. Answer=B

A business uses the SMB protocol to back up on-premises databases to local file server shares. To meet its recovery objectives, the organization needs immediate access to one week's worth of backup data. After a week, recovery is less likely, and the business can tolerate a delay in retrieving that older backup data. What should a solutions architect do to ensure that these criteria are met with the LEAST amount of operational work possible? A. Deploy Amazon FSx for Windows File Server to create a file system with exposed file shares with sufficient storage to hold all the desired backups. B. Deploy an AWS Storage Gateway file gateway with sufficient storage to hold 1 week of backups. Point the backups to SMB shares from the file gateway. C. Deploy Amazon Elastic File System (Amazon EFS) to create a file system with exposed NFS shares with sufficient storage to hold all the desired backups. D. Continue to back up to the existing file shares. Deploy AWS Database Migration Service (AWS DMS) and define a copy task to copy backup files older than 1 week to Amazon S3, and delete the backup files from the local file store.

B. Deploy an AWS Storage Gateway file gateway with sufficient storage to hold 1 week of backups. Point the backups to SMB shares from the file gateway. B is correct because the backups are taken on premises. Q: How do I access an Amazon EFS file system from servers in my on-premises datacenter? To access Amazon EFS file systems from on-premises, you must have an AWS Direct Connect or AWS VPN connection between your on-premises datacenter and your Amazon VPC. You mount an Amazon EFS file system on your on-premises Linux server using the standard Linux mount command for mounting a file system via the NFSv4.1 protocol. FSx for Windows File Server supports the use of AWS Direct Connect or AWS VPN to access your file systems from your on-premises compute instances, and also supports the use of Amazon FSx File Gateway to provide low-latency, seamless access to your in-cloud FSx for Windows File Server file shares from on-premises compute instances. FSx for Lustre likewise requires AWS Direct Connect or VPN. Note that EBS volumes CANNOT be accessed on premises.

A business offers its customers an API that automates tax calculations based on item prices. During the Christmas season, the firm receives an increased volume of requests, resulting in slower response times. A solutions architect must design a scalable and elastic system. What should the solutions architect do to accomplish this? A. Provide an API hosted on an Amazon EC2 instance. The EC2 instance performs the required computations when the API request is made. B. Design a REST API using Amazon API Gateway that accepts the item names. API Gateway passes item names to AWS Lambda for tax computations. C. Create an Application Load Balancer that has two Amazon EC2 instances behind it. The EC2 instances will compute the tax on the received item names. D. Design a REST API using Amazon API Gateway that connects with an API hosted on an Amazon EC2 instance. API Gateway accepts and passes the item names to the EC2 instance for tax computations.

B. Design a REST API using Amazon API Gateway that accepts the item names. API Gateway passes item names to AWS Lambda for tax computations A. This isn't a scalable and elastic option. B. Sounds about right, Api Gateway is scalable, and elastic, same as Lambda. C. How is this elastic? We need an ASG. D. It doesn't have elasticity or scalability.

A business's data warehouse is powered by Amazon Redshift. The firm wants to ensure that its data is durable in the event of component failure. What should a solutions architect recommend? A. Enable concurrency scaling. B. Enable cross-Region snapshots. C. Increase the data retention period. D. Deploy Amazon Redshift in Multi-AZ.

B. Enable cross-Region snapshots. Ans B, enable cross region snapshots. That will improve durability. Multi-AZ is not supported with RedShift. https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-redshift-improves-performance-of-inter-region-snapshot-transfers/ Performance enhancements have been made that allow Amazon Redshift to copy snapshots across regions much faster, allowing customers to support much more aggressive Recovery Time Objective (RTO) and Recovery Point Objective (RPO) Disaster Recovery (DR) policies
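Option B is a single call against the cluster. A minimal boto3 sketch with a hypothetical cluster name, destination Region, and retention period:

    import boto3

    redshift = boto3.client("redshift", region_name="us-east-1")

    # Automatically copy every new snapshot of the cluster to a second Region.
    redshift.enable_snapshot_copy(
        ClusterIdentifier="analytics-cluster",
        DestinationRegion="us-west-2",
        RetentionPeriod=7,     # days to keep the copies in the destination Region
    )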

A business is in the process of migrating its on-premises application to AWS. Application servers and a Microsoft SQL Server database comprise the application. The database cannot be moved to another engine because the application's .NET code uses SQL Server functionality. The company's goal is to maximize availability while decreasing operational and administration costs. What should a solutions architect do to achieve this? A. Install SQL Server on Amazon EC2 in a Multi-AZ deployment. B. Migrate the data to Amazon RDS for SQL Server in a Multi-AZ deployment. C. Deploy the database on Amazon RDS for SQL Server with Multi-AZ Replicas. D. Migrate the data to Amazon RDS for SQL Server in a cross-Region Multi-AZ deployment.

B. Migrate the data to Amazon RDS for SQL Server in a Multi-AZ deployment. Answer is B and not D because: Microsoft SQL Server Multi-AZ deployment notes and recommendations The following are some restrictions when working with Multi-AZ deployments for Microsoft SQL Server DB instances: Cross-Region Multi-AZ isn't supported. You can't configure the secondary DB instance to accept database read activity. Multi-AZ with Always On Availability Groups (AGs) supports in-memory optimization. Multi-AZ with Always On Availability Groups (AGs) doesn't support Kerberos authentication for the availability group listener. This is because the listener has no Service Principal Name (SPN). You can't rename a database on a SQL Server DB instance that is in a SQL Server Multi-AZ deployment. If you need to rename a database on such an instance, first turn off Multi-AZ for the DB instance, then rename the database. Finally, turn Multi-AZ back on for the DB instance. You can only restore Multi-AZ DB instances that are backed up using the full recovery model.

A database is hosted on an Amazon RDS MySQL 5.6 Multi-AZ DB instance that is subjected to high-volume reads. When evaluating read performance from a secondary AWS Region, application developers detect a considerable lag. The developers need a solution that has a read replication latency of less than one second. What recommendations should the solutions architect make? A. Install MySQL on Amazon EC2 in the secondary Region. B. Migrate the database to Amazon Aurora with cross-Region replicas. C. Create another RDS for MySQL read replica in the secondary Region. D. Implement Amazon ElastiCache to improve database query performance.

B. Migrate the database to Amazon Aurora with cross-Region replicas. Q: What happens when I convert my RDS instance from Single-AZ to Multi-AZ? For the RDS for MySQL, MariaDB, PostgreSQL and Oracle database engines, when you elect to convert your RDS instance from Single-AZ to Multi-AZ, the following happens: a snapshot of your primary instance is taken; a new standby instance is created in a different Availability Zone from the snapshot; and synchronous replication is configured between the primary and standby instances. As such, there should be no downtime incurred when an instance is converted from Single-AZ to Multi-AZ. However, you may see increased latency while the data on the standby is caught up to match the primary. https://aws.amazon.com/rds/faqs/?nc1=h_ls With Aurora you can get very low replication latency even with a multi-Region deployment.

A business must give secure access to confidential and sensitive data to its employees. The firm wants to guarantee that only authorized individuals have access to the data. The data must be safely downloaded to employees' devices. The files are kept on a Windows file server on-premises. However, as remote traffic increases, the file server's capacity is being depleted. Which solution will satisfy these criteria? A. Migrate the file server to an Amazon EC2 instance in a public subnet. Configure the security group to limit inbound traffic to the employees' IP addresses. B. Migrate the files to an Amazon FSx for Windows File Server file system. Integrate the Amazon FSx file system with the on-premises Active Directory. Configure AWS Client VPN. C. Migrate the files to Amazon S3, and create a private VPC endpoint. Create a signed URL to allow download. D. Migrate the files to Amazon S3, and create a public VPC endpoint. Allow employees to sign on with AWS Single Sign-On.

B. Migrate the files to an Amazon FSx for Windows File Server file system. Integrate the Amazon FSx file system with the on-premises Active Directory. Configure AWS Client VPN. Since the Windows file server is on-premises and the data needs to move to the cloud, the option that fits is Amazon FSx for Windows File Server. Also, because the information is confidential and sensitive, only the appropriate users should have access to it, in a secure manner. B) CORRECT -> Amazon FSx for Windows File Server integrates with Active Directory; none of the other options provide Active Directory authentication. A) moving the file server to an EC2 instance in a public subnet exposes confidential data and relies only on IP filtering. C) signed URLs belong to CloudFront (S3 uses presigned URLs), and neither authenticates against Active Directory. D) there is no such thing as a public VPC endpoint, and S3 does not fit this file-share use case. https://docs.aws.amazon.com/fsx/latest/WindowsGuide/what-is.html

A firm runs a two-tier image processing program. The application is divided into two Availability Zones, each with its own public and private subnets. The web tier's Application Load Balancer (ALB) makes use of public subnets. Private subnets are used by Amazon EC2 instances at the application layer. The program is functioning more slowly than planned, according to users. According to a security audit of the web server log files, the application receives millions of unauthorized requests from a tiny number of IP addresses. While the organization finds a more permanent solution, a solutions architect must tackle the urgent performance issue. What solution architecture approach should be recommended to satisfy this requirement? A. Modify the inbound security group for the web tier. Add a deny rule for the IP addresses that are consuming resources. B. Modify the network ACL for the web tier subnets. Add an inbound deny rule for the IP addresses that are consuming resources. C. Modify the inbound security group for the application tier. Add a deny rule for the IP addresses that are consuming resources. D. Modify the network ACL for the application tier subnets. Add an inbound deny rule for the IP addresses that are consuming resources.

B. Modify the network ACL for the web tier subnets. Add an inbound deny rule for the IP addresses that are consuming resources. Let's rule out any deny rule involving an SG. Because there's no such thing. Out go A & C. So why B over D? Simple - nip the problem in the bud. A robber is trying to steal something from your house --- take him down before he enters your house, not after he breaks in.
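For the immediate mitigation in option B, a deny entry can be added to the network ACL of the web-tier subnets. This is a minimal boto3 sketch; the NACL ID and the offending CIDR are placeholders, and the rule number just needs to be lower than the existing allow rules so it is evaluated first.

```python
import boto3

ec2 = boto3.client("ec2")

# "acl-0123456789abcdef0" and the CIDR below are placeholders.
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",   # NACL attached to the web-tier public subnets
    RuleNumber=90,                          # evaluated before higher-numbered allow rules
    Protocol="-1",                          # all protocols
    RuleAction="deny",
    Egress=False,                           # inbound rule
    CidrBlock="203.0.113.25/32",            # one of the abusive IP addresses
)
```

Repeat with a distinct rule number for each abusive IP address; security groups cannot express deny rules, which is why A and C are ruled out.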

A business's application makes use of AWS Lambda functions. A code examination reveals that database credentials are stored in the source code of a Lambda function, violating the company's security policy. To comply with security policy requirements, credentials must be safely maintained and automatically cycled on a regular basis. What should a solutions architect propose as the MOST SECURE method of meeting these requirements? A. Store the password in AWS CloudHSM. Associate the Lambda function with a role that can use the key ID to retrieve the password from CloudHSM. Use CloudHSM to automatically rotate the password. B. Store the password in AWS Secrets Manager. Associate the Lambda function with a role that can use the secret ID to retrieve the password from Secrets Manager. Use Secrets Manager to automatically rotate the password. C. Store the password in AWS Key Management Service (AWS KMS). Associate the Lambda function with a role that can use the key ID to retrieve the password from AWS KMS. Use AWS KMS to automatically rotate the uploaded password. D. Move the database password to an environment variable that is associated with the Lambda function. Retrieve the password from the environment variable by invoking the function. Create a deployment script to automatically rotate the password.

B. Store the password in AWS Secrets Manager. Associate the Lambda function with a role that can use the secret ID to retrieve the password from Secrets Manager. Use Secrets Manager to automatically rotate the password. Secrets Manager was designed specifically for confidential information (like database credentials and API keys) that needs to be encrypted, so a secret is encrypted by default, and it adds functionality such as automatic rotation of keys. It is well integrated with RDS. Systems Manager Parameter Store was designed for a wider use case: not just secrets or passwords, but also application configuration values like URLs, custom settings, AMI IDs, and license keys. ================== KMS is an altogether different concept. KMS is a service that manages encryption keys (customer master keys, not data keys). A data key is used to encrypt the actual data; the CMK protects the data key. To decrypt the data, one calls KMS and uses the CMK to decrypt the data key; once we have the decrypted (plaintext) data key, we use it to decrypt the actual data. When thinking KMS/CMK: think customer-managed vs. AWS-managed keys as options, think encryption at rest, think of the CMK encrypting the data key rather than the data itself. ======================= CloudHSM is an alternative to KMS for protecting the same CMKs: AWS provisions the encryption hardware, not the software.
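A minimal sketch of how the Lambda function might read the credentials at runtime instead of hard-coding them; the secret name is a placeholder and the execution role is assumed to have secretsmanager:GetSecretValue on it.

```python
import json
import boto3

secrets = boto3.client("secretsmanager")

def get_db_credentials(secret_id="prod/app/mysql"):   # placeholder secret name
    # Secrets Manager returns the current (AWSCURRENT) version by default,
    # so a rotated password is picked up automatically on the next call.
    response = secrets.get_secret_value(SecretId=secret_id)
    return json.loads(response["SecretString"])

def handler(event, context):
    creds = get_db_credentials()
    # ... connect to the database with creds["username"] / creds["password"] ...
    return {"statusCode": 200}
```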

Amazon Elastic Block Store (Amazon EBS) volumes are used by a media organization to store video material. A certain video file has gained popularity, and a significant number of individuals from all over the globe are now viewing it. As a consequence, costs have increased. Which step will result in a cost reduction without jeopardizing user accessibility? A. Change the EBS volume to Provisioned IOPS (PIOPS). B. Store the video in an Amazon S3 bucket and create an Amazon CloudFront distribution. C. Split the video into multiple, smaller segments so users are routed to the requested video segments only. D. Create an Amazon S3 bucket in each Region and upload the videos so users are routed to the nearest S3 bucket.

B. Store the video in an Amazon S3 bucket and create an Amazon CloudFront distribution. Store the video in S3 and use CloudFront for distribution, so option B is correct.

A corporation with an on-premises application is transitioning to AWS to boost the flexibility and availability of the application. The present design makes considerable use of a Microsoft SQL Server database. The firm wants to investigate other database solutions and, if necessary, migrate database engines. The development team does a complete copy of the production database every four hours in order to create a test database. Users will encounter delay during this time period. What database should a solutions architect propose as a replacement? A. Use Amazon Aurora with Multi-AZ Aurora Replicas and restore from mysqldump for the test database. B. Use Amazon Aurora with Multi-AZ Aurora Replicas and restore snapshots from Amazon RDS for the test database. C. Use Amazon RDS for MySQL with a Multi-AZ deployment and read replicas, and use the standby instance for the test database. D. Use Amazon RDS for SQL Server with a Multi-AZ deployment and read replicas, and restore snapshots from RDS for the test database.

B. Use Amazon Aurora with Multi-AZ Aurora Replicas and restore snapshots from Amazon RDS for the test database. B is the correct answer. Points to be noted in Q: 1. Question itself states " What should a solution architect recommend as replacement database?" 2. " users experience latency" when backup is taken from SQL Server. This means an alternate DB needs to be considered. Migrating to Aurora will eliminate this latency. For SQL Server, I/O activity is suspended briefly during backup - https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CreateSnapshot.html 3. Elasticity, availability, replicas - everything is provided by Aurora

On AWS, a business is developing a document storage solution. The application is deployed across different Amazon EC2 Availability Zones. The firm demands highly available document storage. When requested, documents must be returned quickly. The lead engineer has set up the application to store documents in Amazon Elastic Block Store (Amazon EBS), but is open to examining additional solutions to fulfill the availability requirement. What recommendations should a solutions architect make? A. Snapshot the EBS volumes regularly and build new volumes using those snapshots in additional Availability Zones. B. Use Amazon Elastic Block Store (Amazon EBS) for the EC2 instance root volumes. Configure the application to build the document store on Amazon S3. C. Use Amazon Elastic Block Store (Amazon EBS) for the EC2 instance root volumes. Configure the application to build the document store on Amazon S3 Glacier. D. Use at least three Provisioned IOPS EBS volumes for EC2 instances. Mount the volumes to the EC2 instances in a RAID 5 configuration.

B. Use Amazon Elastic Block Store (Amazon EBS) for the EC2 instance root volumes. Configure the application to build the document store on Amazon S3. EFS would have worked well if we wanted to keep a file-system structure, but it is not offered here. Documents can go to S3 as objects. EBS does support attachment to multiple instances (EBS Multi-Attach, configured as io1/io2/io2 Block Express), but EBS does not work across AZs, so A and D are eliminated. C is out because documents must be returned quickly and S3 Glacier has a retrieval process (Expedited: 1-5 minutes; Standard: 3-5 hours; Bulk: 5-12 hours). Quick comparison from my notes (EBS vs EFS vs S3): https://1drv.ms/x/s!Al2WmWQmp2xZtmeDzRWwXycUnh_o

A business uses Site-to-Site VPN connections to provide safe access to AWS Cloud services from on-premises. Users are experiencing slower VPN connectivity as a result of increased traffic through the VPN connections to the Amazon EC2 instances. Which approach will result in an increase in VPN throughput? A. Implement multiple customer gateways for the same network to scale the throughput. B. Use a transit gateway with equal cost multipath routing and add additional VPN tunnels. C. Configure a virtual private gateway with equal cost multipath routing and multiple channels. D. Increase the number of tunnels in the VPN configuration to scale the throughput beyond the default limit.

B. Use a transit gateway with equal cost multipath routing and add additional VPN tunnels. https://aws.amazon.com/blogs/networking-and-content-delivery/scaling-vpn-throughput-using-aws-transit-gateway/ With AWS Transit Gateway, you can simplify the connectivity between multiple VPCs and also connect to any VPC attached to AWS Transit Gateway with a single VPN connection. AWS Transit Gateway also enables you to scale the IPsec VPN throughput with equal cost multi-path (ECMP) routing support over multiple VPN tunnels. A single VPN tunnel still has a maximum throughput of 1.25 Gbps. If you establish multiple VPN tunnels to an ECMP-enabled transit gateway, it can scale beyond the default limit of 1.25 Gbps.
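A boto3 sketch of creating a transit gateway with ECMP enabled for VPN, which is the prerequisite for spreading traffic across additional VPN tunnels; the ASN and other option values are examples, and the Site-to-Site VPN attachments would be created separately against this transit gateway.

```python
import boto3

ec2 = boto3.client("ec2")

# Create a transit gateway with ECMP enabled so that multiple Site-to-Site
# VPN tunnels attached to it can be used in parallel (values are examples).
tgw = ec2.create_transit_gateway(
    Description="Scale VPN throughput with ECMP",
    Options={
        "AmazonSideAsn": 64512,
        "VpnEcmpSupport": "enable",              # required for multipath over VPN
        "DefaultRouteTableAssociation": "enable",
        "DefaultRouteTablePropagation": "enable",
    },
)
print(tgw["TransitGateway"]["TransitGatewayId"])
```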

Within a month of being bought, a newly acquired firm is required to establish its own infrastructure on AWS and transfer various apps to the cloud. Each application requires the transfer of around 50 TB of data. Following the transfer, this firm and its parent company will need secure network connectivity with consistent throughput between its data centers and apps. A solutions architect must guarantee that data transfer occurs just once and that network connectivity is maintained. Which solution will satisfy these criteria? A. AWS Direct Connect for both the initial transfer and ongoing connectivity. B. AWS Site-to-Site VPN for both the initial transfer and ongoing connectivity. C. AWS Snowball for the initial transfer and AWS Direct Connect for ongoing connectivity. D. AWS Snowball for the initial transfer and AWS Site-to-Site VPN for ongoing connectivity.

C. AWS Snowball for the initial transfer and AWS Direct Connect for ongoing connectivity. Definitely Snowball for the data transfer. As for the connectivity, there are four requirements here: complete WITHIN a month, secure, consistent, and BOTH companies need access. Direct Connect provides consistency, but it is not encrypted (it is private, which is different), it is a 1-to-1 direct connection, and it can take up to 90 days to set up (on average it takes more than a month, or at least that is what they said in the course). Site-to-Site VPN is secure and can be set up immediately in multiple sites. It is not consistent by itself, but it can achieve consistency with Accelerated Site-to-Site VPN ("Accelerated Site-to-Site VPN makes user experience more consistent by using the highly available and congestion-free AWS global network.") So I'd go for Snowball and VPN (possibly Accelerated).

A business has two virtual private clouds (VPCs) labeled Management and Production. The Management VPC connects to a single device in the data center using VPNs via a customer gateway. The Production VPC is connected to AWS through two AWS Direct Connect connections via a virtual private gateway. Both the Management and Production VPCs communicate with one another through a single VPC peering connection. What should a solutions architect do to minimize the architecture's single point of failure? A. Add a set of VPNs between the Management and Production VPCs. B. Add a second virtual private gateway and attach it to the Management VPC. C. Add a second set of VPNs to the Management VPC from a second customer gateway device. D. Add a second VPC peering connection between the Management VPC and the Production VPC.

C. Add a second set of VPNs to the Management VPC from a second customer gateway device. A is out - Regarding the VPC Peering "There is no single point of failure for communication or a bandwidth bottleneck". So there is no need to create a redundancy mechanism when you already have a VPC Peering in place. https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html B is out - "You can attach one virtual private gateway to a VPC at a time." https://docs.aws.amazon.com/vpn/latest/s2svpn/vpn-limits.html D is out - You can only have one VPC Peering per VPC pair. "A VPC peering connection is a one to one relationship between two VPCs." https://docs.aws.amazon.com/vpc/latest/peering/vpc-peering-basics.html C is correct. "To protect against a loss of connectivity in case your customer gateway device becomes unavailable, you can set up a second Site-to-Site VPN connection to your VPC and virtual private gateway by using a second customer gateway device." https://docs.aws.amazon.com/vpn/latest/s2svpn/vpn-redundant-connection.html

A marketing firm uses an Amazon S3 bucket to store CSV data for statistical research. Permission is required for an application running on an Amazon EC2 instance to properly handle the CSV data stored in the S3 bucket. Which step will provide the MOST SECURE access to the S3 bucket for the EC2 instance? A. Attach a resource-based policy to the S3 bucket. B. Create an IAM user for the application with specific permissions to the S3 bucket. C. Associate an IAM role with least privilege permissions to the EC2 instance profile. D. Store AWS credentials directly on the EC2 instance for applications on the instance to use for API calls.

C. Associate an IAM role with least privilege permissions to the EC2 instance profile. Keywords: least privilege permissions + IAM role. AWS Identity and Access Management (IAM) enables you to manage access to AWS services and resources securely. Using IAM, you can create and manage AWS users and groups, and use permissions to allow and deny their access to AWS resources. IAM is a feature of your AWS account offered at no additional charge. You will be charged only for use of other AWS services by your users.
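A hedged sketch of option C with boto3: create a least-privilege role trusted by EC2, wrap it in an instance profile, and attach that profile to the instance at launch. The role name, bucket name, and actions are placeholders for illustration.

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy that lets EC2 assume the role.
trust = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# Least-privilege policy: read-only access to the one bucket (name is a placeholder).
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": ["arn:aws:s3:::marketing-csv-bucket",
                     "arn:aws:s3:::marketing-csv-bucket/*"],
    }],
}

iam.create_role(RoleName="csv-reader", AssumeRolePolicyDocument=json.dumps(trust))
iam.put_role_policy(RoleName="csv-reader", PolicyName="s3-read-only",
                    PolicyDocument=json.dumps(policy))
iam.create_instance_profile(InstanceProfileName="csv-reader")
iam.add_role_to_instance_profile(InstanceProfileName="csv-reader", RoleName="csv-reader")
# Attach the instance profile at launch, or afterwards with
# ec2.associate_iam_instance_profile(); no long-lived credentials land on the instance.
```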

For each of its developer accounts, a corporation has configured AWS CloudTrail to deliver log files to an Amazon S3 bucket. The organization has established a centralized AWS account for the purpose of facilitating administration and auditing. Internal auditors need access to the CloudTrail logs; however, access must be restricted for all users in the developer accounts. The solution should be both secure and efficient. How should a solutions architect address these considerations? A. Configure an AWS Lambda function in each developer account to copy the log files to the central account. Create an IAM role in the central account for the auditor. Attach an IAM policy providing read-only permissions to the bucket. B. Configure CloudTrail from each developer account to deliver the log files to an S3 bucket in the central account. Create an IAM user in the central account for the auditor. Attach an IAM policy providing full permissions to the bucket. C. Configure CloudTrail from each developer account to deliver the log files to an S3 bucket in the central account. Create an IAM role in the central account for the auditor. Attach an IAM policy providing read-only permissions to the bucket. D. Configure an AWS Lambda function in the central account to copy the log files from the S3 bucket in each developer account. Create an IAM user in the central account for the auditor. Attach an IAM policy providing full permissions to the bucket.

C. Configure CloudTrail from each developer account to deliver the log files to an S3 bucket in the central account. Create an IAM role in the central account for the auditor. Attach an IAM policy providing read-only permissions to the bucket. C is the better option because CloudTrail can deliver logs directly to an S3 bucket in another account. https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-receive-logs-from-multiple-accounts.html
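The central bucket needs a policy that lets CloudTrail write each developer account's logs. The sketch below applies a policy modeled on the documented multi-account CloudTrail pattern; the bucket name and account IDs are placeholders.

```python
import json
import boto3

s3 = boto3.client("s3")

bucket = "central-cloudtrail-logs"            # placeholder bucket in the central account
accounts = ["111111111111", "222222222222"]   # placeholder developer account IDs

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # CloudTrail must be able to check the bucket ACL
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:GetBucketAcl",
            "Resource": f"arn:aws:s3:::{bucket}",
        },
        {   # CloudTrail writes each account's logs under its own AWSLogs/<account>/ prefix
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:PutObject",
            "Resource": [f"arn:aws:s3:::{bucket}/AWSLogs/{acct}/*" for acct in accounts],
            "Condition": {"StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}},
        },
    ],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```

The auditor's read-only access is then granted through an IAM role in the central account, keeping developer-account users out entirely.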

A business is implementing a web gateway. The firm wants to expose only the web portion of the application to the public. To achieve this, the VPC was created with two public subnets and two private subnets. The application will be hosted on multiple Amazon EC2 instances managed through an Auto Scaling group. SSL termination must be delegated to a separate instance on Amazon EC2. What actions should a solutions architect take to guarantee compliance with these requirements? A. Configure the Network Load Balancer in the public subnets. Configure the Auto Scaling group in the private subnets and associate it with the Application Load Balancer. B. Configure the Network Load Balancer in the public subnets. Configure the Auto Scaling group in the public subnets and associate it with the Application Load Balancer. C. Configure the Application Load Balancer in the public subnets. Configure the Auto Scaling group in the private subnets and associate it with the Application Load Balancer. D. Configure the Application Load Balancer in the private subnets. Configure the Auto Scaling group in the private subnets and associate it with the Application Load Balancer.

C. Configure the Application Load Balancer in the public subnets. Configure the Auto Scaling group in the private subnets and associate it with the Application Load Balancer. C since Internet-facing Application Load Balancers (ALB) and Classic ELBs must be provisioned exclusively in public subnets.

A solutions architect is designing a VPC architecture with various subnets. Six subnets will be used in two Availability Zones. Subnets are classified as public, private, and database-specific. Access to a database should be restricted to Amazon EC2 instances operating on private subnets. Which solution satisfies these criteria? A. Create a new route table that excludes the route to the public subnets' CIDR blocks. Associate the route table to the database subnets. B. Create a security group that denies ingress from the security group used by instances in the public subnets. Attach the security group to an Amazon RDS DB instance. C. Create a security group that allows ingress from the security group used by instances in the private subnets. Attach the security group to an Amazon RDS DB instance. D. Create a new peering connection between the public subnets and the private subnets. Create a different peering connection between the private subnets and the database subnets.

C. Create a security group that allows ingress from the security group used by instances in the private subnets. Attach the security group to an Amazon RDS DB instance. Security groups are stateful. All inbound traffic is blocked by default. If you create an inbound rule allowing traffic in, that traffic is automatically allowed back out again. You cannot block specific IP addresses using security groups (instead use network access control lists).
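Option C in boto3 terms: allow ingress on the database port where the source is the private-subnet instances' security group rather than a CIDR block. Both group IDs and the port are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholders: sg-0db... protects the RDS instance, sg-0app... is attached
# to the EC2 instances in the private subnets.
ec2.authorize_security_group_ingress(
    GroupId="sg-0db0000000000000",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,   # MySQL/Aurora port; adjust for the engine in use
        "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": "sg-0app000000000000"}],  # source = private-subnet SG
    }],
)
```

Referencing a security group as the source keeps the rule valid even as Auto Scaling replaces instances and their private IPs change.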

Currently, a business runs a web application that is backed by an Amazon RDS MySQL database. It features daily automated backups that are not encrypted. A security audit requires future backups to be encrypted and the unencrypted backups to be destroyed. The firm will create at least one encrypted backup before deleting the previous backups. What should be done to enable encrypted backups in the future? A. Enable default encryption for the Amazon S3 bucket where backups are stored. B. Modify the backup section of the database configuration to toggle the Enable encryption check box. C. Create a snapshot of the database. Copy it to an encrypted snapshot. Restore the database from the encrypted snapshot. D. Enable an encrypted read replica on RDS for MySQL. Promote the encrypted read replica to primary. Remove the original database instance.

C. Create a snapshot of the database. Copy it to an encrypted snapshot. Restore the database from the encrypted snapshot. However, because you can encrypt a copy of an unencrypted DB snapshot, you can effectively add encryption to an unencrypted DB instance. That is, you can create a snapshot of your DB instance, and then create an encrypted copy of that snapshot. You can then restore a DB instance from the encrypted snapshot, and thus you have an encrypted copy of your original DB instance. DB instances that are encrypted can't be modified to disable encryption. You can't have an encrypted read replica of an unencrypted DB instance or an unencrypted read replica of an encrypted DB instance. Encrypted read replicas must be encrypted with the same key as the source DB instance when both are in the same AWS Region. You can't restore an unencrypted backup or snapshot to an encrypted DB instance. To copy an encrypted snapshot from one AWS Region to another, you must specify the KMS key identifier of the destination AWS Region. This is because KMS encryption keys are specific to the AWS Region that they are created in. Reference: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html
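The snapshot-copy-restore flow from the answer, sketched with boto3; all identifiers are placeholders and the default aws/rds KMS key is used as an example.

```python
import boto3

rds = boto3.client("rds")

# 1. Snapshot the existing (unencrypted) instance. Identifiers are placeholders.
rds.create_db_snapshot(
    DBInstanceIdentifier="legacy-mysql",
    DBSnapshotIdentifier="legacy-mysql-unencrypted",
)
rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="legacy-mysql-unencrypted")

# 2. Copy the snapshot with a KMS key to produce an encrypted snapshot.
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="legacy-mysql-unencrypted",
    TargetDBSnapshotIdentifier="legacy-mysql-encrypted",
    KmsKeyId="alias/aws/rds",            # example: default AWS-managed RDS key
)
rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="legacy-mysql-encrypted")

# 3. Restore a new, encrypted DB instance from the encrypted snapshot.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="legacy-mysql-encrypted-instance",
    DBSnapshotIdentifier="legacy-mysql-encrypted",
)
```

Automated backups taken from the restored, encrypted instance are themselves encrypted, which satisfies the audit requirement going forward.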

A business is re-architecting a tightly coupled application in order to make it loosely coupled. Previously, the program communicated across layers through a request/response pattern. The organization intends to do this via the usage of Amazon Simple Queue Service (Amazon SQS). The first architecture includes a request queue and a response queue. However, as the application scales, this approach will not keep up with the volume of messages. What is the best course of action for a solutions architect to take in order to tackle this issue? A. Configure a dead-letter queue on the ReceiveMessage API action of the SQS queue. B. Configure a FIFO queue, and use the message deduplication ID and message group ID. C. Create a temporary queue, with the Temporary Queue Client to receive each response message. D. Create a queue for each request and response on startup for each producer, and use a correlation ID message attribute.

C. Create a temporary queue, with the Temporary Queue Client to receive each response message. "we discussed the Request-Response Messaging Pattern. In this pattern, each requester creates a temporary destination to receive each response message. The simplest approach is to create a new queue for each response, but this is like building a road just so a single car can drive on it before tearing it down. Technically, this can work (and SQS can create and delete queues quickly), but we can definitely make it faster and cheaper. To better support short-lived, lightweight messaging destinations, we are pleased to present the Amazon SQS Temporary Queue Client. This client makes it easy to create and delete many temporary messaging destinations without inflating your AWS bill." https://aws.amazon.com/blogs/compute/simple-two-way-messaging-using-the-amazon-sqs-temporary-queue-client/ C

A shopping cart application connects to an Amazon RDS Multi-AZ database instance. The database performance is causing the application to slow down. There was no significant performance improvement after upgrading to the next-generation instance type. According to the analysis, around 700 IOPS are maintained, typical queries execute for extended periods of time, and memory use is significant. Which application modification might a solutions architect propose to address these concerns? A. Migrate the RDS instance to an Amazon Redshift cluster and enable weekly garbage collection. B. Separate the long-running queries into a new Multi-AZ RDS database and modify the application to query whichever database is needed. C. Deploy a two-node Amazon ElastiCache cluster and modify the application to query the cluster first and query the database only if needed. D. Create an Amazon Simple Queue Service (Amazon SQS) FIFO queue for common queries and query it first and query the database only if needed.

C. Deploy a two-node Amazon ElastiCache cluster and modify the application to query the cluster first and query the database only if needed. The answer is C (ElastiCache). ElastiCache helps take load off of databases for read-intensive workloads. NOTE: Using ElastiCache involves significant application code changes. A) Redshift is for big data analytics/data warehousing. B) How would the application know which database to query? C) Fits well. D) SQS is not used for querying; it is used to decouple applications.
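A minimal cache-aside sketch for option C, assuming a Redis-mode ElastiCache cluster and a MySQL driver; the endpoints, credentials, and table are placeholders.

```python
import json
import redis       # redis-py client
import pymysql     # example MySQL driver; any DB client works

cache = redis.Redis(host="my-cache.xxxxxx.0001.use1.cache.amazonaws.com", port=6379)
db = pymysql.connect(host="shopping-db.xxxxxx.us-east-1.rds.amazonaws.com",
                     user="app", password="REPLACE_ME", database="cart")

def get_product(product_id: int) -> dict:
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached:                        # cache hit: skip the database entirely
        return json.loads(cached)

    with db.cursor() as cur:          # cache miss: query RDS once
        cur.execute("SELECT id, name, price FROM products WHERE id = %s", (product_id,))
        row = cur.fetchone()

    product = {"id": row[0], "name": row[1], "price": float(row[2])}
    cache.setex(key, 300, json.dumps(product))   # cache for 5 minutes
    return product
```

This is the "query the cluster first, then the database only if needed" pattern the answer describes; repeat reads are served from memory, which relieves the IOPS and memory pressure on RDS.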

A business requires data storage on Amazon S3. A compliance requirement stipulates that when objects are modified, their original state must be retained. Additionally, data older than five years should be kept for auditing purposes. What should a solutions architect recommend as the MOST cost-effective solution? A. Enable object-level versioning and S3 Object Lock in governance mode B. Enable object-level versioning and S3 Object Lock in compliance mode C. Enable object-level versioning. Enable a lifecycle policy to move data older than 5 years to S3 Glacier Deep Archive D. Enable object-level versioning. Enable a lifecycle policy to move data older than 5 years to S3 Standard-Infrequent Access (S3 Standard-IA)

C. Enable object-level versioning. Enable a lifecycle policy to move data older than 5 years to S3 Glacier Deep Archive. Versioning preserves the original state of an object when it is modified, and a lifecycle transition to Glacier Deep Archive is the cheapest way to retain the older data for auditing. Object Lock (A, B) only prevents versions from being deleted or overwritten - it is used together with versioning - and does nothing to reduce the cost of the older data. https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-overview.html
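Option C expressed with boto3: turn on versioning and add a lifecycle rule that transitions current and noncurrent versions older than five years to Glacier Deep Archive. The bucket name is a placeholder.

```python
import boto3

s3 = boto3.client("s3")
bucket = "compliance-data-bucket"   # placeholder name

# Versioning preserves the original state of an object whenever it is modified.
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# Move objects older than 5 years (1825 days) to Glacier Deep Archive.
s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-after-5-years",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},       # apply to all objects
            "Transitions": [{"Days": 1825, "StorageClass": "DEEP_ARCHIVE"}],
            "NoncurrentVersionTransitions": [
                {"NoncurrentDays": 1825, "StorageClass": "DEEP_ARCHIVE"}
            ],
        }],
    },
)
```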

On Amazon EC2 Linux instances, a business hosts a website. Several of the instances are failing. Troubleshooting indicates that the failed instances are running out of swap space. The operations team lead needs a monitoring solution for this. What recommendations should a solutions architect make? A. Configure an Amazon CloudWatch SwapUsage metric dimension. Monitor the SwapUsage dimension in the EC2 metrics in CloudWatch. B. Use EC2 metadata to collect information, then publish it to Amazon CloudWatch custom metrics. Monitor SwapUsage metrics in CloudWatch . C. Install an Amazon CloudWatch agent on the instances. Run an appropriate script on a set schedule. Monitor SwapUtilization metrics in CloudWatch. D. Enable detailed monitoring in the EC2 console. Create an Amazon CloudWatch SwapUtilization custom metric. Monitor SwapUtilization metrics in CloudWatch.

C. Install an Amazon CloudWatch agent on the instances. Run an appropriate script on a set schedule. Monitor SwapUtilization metrics in CloudWatch. The CloudWatch agent is required for swap and memory utilization monitoring; these are not default EC2 metrics, so they must be published as custom metrics. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/mon-scripts.html

Using seven Amazon EC2 instances, a business runs its web application on AWS. The organization requires that DNS queries return the IP addresses of all healthy EC2 instances. Which routing policy should be employed to comply with this stipulation? A. Simple routing policy B. Latency routing policy C. Multi-value routing policy D. Geolocation routing policy

C. Multi-value routing policy https://aws.amazon.com/premiumsupport/knowledge-center/multivalue-versus-simple-policies/: "Use a multivalue answer routing policy to help distribute DNS responses across multiple resources. For example, use multivalue answer routing when you want to associate your routing records with a Route 53 health check."

On Amazon EC2 instances, a business is developing an application that creates transitory transactional data. Access to data storage that can deliver adjustable and consistent IOPS is required by the application. What recommendations should a solutions architect make? A. Provision an EC2 instance with a Throughput Optimized HDD (st1) root volume and a Cold HDD (sc1) data volume. B. Provision an EC2 instance with a Throughput Optimized HDD (st1) volume that will serve as the root and data volume. C. Provision an EC2 instance with a General Purpose SSD (gp2) root volume and Provisioned IOPS SSD (io1) data volume. D. Provision an EC2 instance with a General Purpose SSD (gp2) root volume. Configure the application to store its data in an Amazon S3 bucket.

C. Provision an EC2 instance with a General Purpose SSD (gp2) root volume and Provisioned IOPS SSD (io1) data volume. Provisioned IOPS SSD (io1) is the only option listed that delivers adjustable, consistent IOPS for the transactional data volume; the HDD volume types (st1, sc1) do not, and Amazon S3 is object storage rather than block storage.

Amazon Elastic Container Service (Amazon ECS) container instances are used to install an ecommerce website's web application behind an Application Load Balancer (ALB). The website slows down and availability is decreased during moments of heavy usage. A solutions architect utilizes Amazon CloudWatch alarms to be notified when an availability problem occurs, allowing them to scale out resources. The management of the business wants a system that automatically reacts to such circumstances. Which solution satisfies these criteria? A. Set up AWS Auto Scaling to scale out the ECS service when there are timeouts on the ALB. Set up AWS Auto Scaling to scale out the ECS cluster when the CPU or memory reservation is too high. B. Set up AWS Auto Scaling to scale out the ECS service when the ALB CPU utilization is too high. Setup AWS Auto Scaling to scale out the ECS cluster when the CPU or memory reservation is too high. C. Set up AWS Auto Scaling to scale out the ECS service when the service's CPU utilization is too high. Set up AWS Auto Scaling to scale out the ECS cluster when the CPU or memory reservation is too high. D. Set up AWS Auto Scaling to scale out the ECS service when the ALB target group CPU utilization is too high. Set up AWS Auto Scaling to scale out the ECS cluster when the CPU or memory reservation is too high.

C. Set up AWS Auto Scaling to scale out the ECS service when the service's CPU utilization is too high. Set up AWS Auto Scaling to scale out the ECS cluster when the CPU or memory reservation is too high. Match deployed capacity to the incoming application load, using scaling policies for both the ECS service and the Auto Scaling group in which the ECS cluster runs. Scaling up cluster instances and service tasks when needed and safely scaling them down when demand subsides, keeps you out of the capacity guessing game. This provides you high availability with lowered costs in the long run. https://aws.amazon.com/blogs/compute/automatic-scaling-with-amazon-ecs/
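A sketch of the service-level half of option C using the Application Auto Scaling API: target tracking on the ECS service's average CPU utilization. The cluster name, service name, capacity limits, and the 60% target are placeholders; the cluster/Auto Scaling group side would be configured separately.

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# Cluster and service names are placeholders.
resource_id = "service/ecommerce-cluster/web-service"

autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=3,
    MaxCapacity=30,
)

# Target tracking on the service's average CPU utilization.
autoscaling.put_scaling_policy(
    PolicyName="web-service-cpu",
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
        "ScaleOutCooldown": 60,
        "ScaleInCooldown": 120,
    },
)
```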

Within the same AWS account, a firm has two VPCs situated in the us-west-2 Region. The business must permit network communication between these VPCs. Each month, about 500 GB of data will be transferred between the VPCs. Which approach is the MOST cost-effective for connecting these VPCs? A. Implement AWS Transit Gateway to connect the VPCs. Update the route tables of each VPC to use the transit gateway for inter-VPC communication. B. Implement an AWS Site-to-Site VPN tunnel between the VPCs. Update the route tables of each VPC to use the VPN tunnel for inter-VPC communication. C. Set up a VPC peering connection between the VPCs. Update the route tables of each VPC to use the VPC peering connection for inter-VPC communication. D. Set up a 1 GB AWS Direct Connect connection between the VPCs. Update the route tables of each VPC to use the Direct Connect connection for inter-VPC communication.

C. Set up a VPC peering connection between the VPCs. Update the route tables of each VPC to use the VPC peering connection for inter-VPC communication. https://docs.aws.amazon.com/whitepapers/latest/building-scalable-secure-multi-vpc-network-infrastructure/transit-gateway-vs-vpc-peering.html Lower cost: with VPC peering you only pay for data transfer charges, while Transit Gateway has an hourly charge per attachment in addition to the data transfer fees. The data only needs to move between two VPCs in the same account and Region (not from on-premises), so a peering connection is enough.
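Option C as a boto3 sketch: create and accept the peering connection, then add a route in each VPC's route table. VPC IDs, route table IDs, and CIDRs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Placeholders for the two VPCs in the same account and Region.
peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-0aaa000000000000",        # requester VPC
    PeerVpcId="vpc-0bbb000000000000",    # accepter VPC
)
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# Same-account peering still has to be accepted explicitly.
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Add a route in each VPC's route table pointing at the other VPC's CIDR.
ec2.create_route(RouteTableId="rtb-0aaa000000000000",
                 DestinationCidrBlock="10.1.0.0/16",
                 VpcPeeringConnectionId=pcx_id)
ec2.create_route(RouteTableId="rtb-0bbb000000000000",
                 DestinationCidrBlock="10.0.0.0/16",
                 VpcPeeringConnectionId=pcx_id)
```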

A corporation connects its on-premises servers to AWS through a 10 Gbps AWS Direct Connect connection. The connection's workloads are crucial. The organization needs a disaster recovery approach that is as resilient as possible while minimizing the existing connection bandwidth. What recommendations should a solutions architect make? A. Set up a new Direct Connect connection in another AWS Region. B. Set up a new AWS managed VPN connection in another AWS Region. C. Set up two new Direct Connect connections: one in the current AWS Region and one in another Region. D. Set up two new AWS managed VPN connections: one in the current AWS Region and one in another Region.

C. Set up two new Direct Connect connections: one in the current AWS Region and one in another Region. Additional Direct Connect connections at a second location in the current Region and in another Region give maximum resiliency for critical workloads against both connection and Region failures, whereas a managed VPN failover is limited to roughly 1.25 Gbps per tunnel.

On a fleet of Amazon EC2 instances, a business provides a training site. The business predicts that when its new course, which includes hundreds of training videos on the web, is available in one week, it will be tremendously popular. What should a solutions architect do to ensure that the predicted server load is kept to a minimum? A. Store the videos in Amazon ElastiCache for Redis. Update the web servers to serve the videos using the ElastiCache API. B. Store the videos in Amazon Elastic File System (Amazon EFS). Create a user data script for the web servers to mount the EFS volume. C. Store the videos in an Amazon S3 bucket. Create an Amazon CloudFront distribution with an origin access identity (OAI) of that S3 bucket. Restrict Amazon S3 access to the OAI. D. Store the videos in an Amazon S3 bucket. Create an AWS Storage Gateway file gateway to access the S3 bucket. Create a user data script for the web servers to mount the file gateway.

C. Store the videos in an Amazon S3 bucket. Create an Amazon CloudFront distribution with an origin access identity (OAI) of that S3 bucket. Restrict Amazon S3 access to the OAI. When you first set up an Amazon S3 bucket as the origin for a CloudFront distribution, you grant everyone permission to read the files in your bucket. This allows anyone to access your files either through CloudFront or using the Amazon S3 URL. CloudFront doesn't expose Amazon S3 URLs, but your users might have those URLs if your application serves any files directly from Amazon S3 or if anyone gives out direct links to specific files in Amazon S3. https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html

Currently, a company's legacy application relies on an unencrypted single-instance Amazon RDS MySQL database. All current and new data in this database must be encrypted to comply with new compliance standards. How is this to be achieved? A. Create an Amazon S3 bucket with server-side encryption enabled. Move all the data to Amazon S3. Delete the RDS instance. B. Enable RDS Multi-AZ mode with encryption at rest enabled. Perform a failover to the standby instance to delete the original instance. C. Take a Snapshot of the RDS instance. Create an encrypted copy of the snapshot. Restore the RDS instance from the encrypted snapshot. D. Create an RDS read replica with encryption at rest enabled. Promote the read replica to master and switch the application over to the new master. Delete the old RDS instance.

C. Take a Snapshot of the RDS instance. Create an encrypted copy of the snapshot. Restore the RDS instance from the encrypted snapshot. How do I encrypt Amazon RDS snapshots? The following steps are applicable to Amazon RDS for MySQL, Oracle, SQL Server, PostgreSQL, or MariaDB. Important: If you use Amazon Aurora, you can restore an unencrypted Aurora DB cluster snapshot to an encrypted Aurora DB cluster if you specify an AWS Key Management Service (AWS KMS) encryption key when you restore from the unencrypted DB cluster snapshot. For more information, see Limitations of Amazon RDS Encrypted DB Instances. Open the Amazon RDS console, and then choose Snapshots from the navigation pane. Select the snapshot that you want to encrypt. Under Snapshot Actions, choose Copy Snapshot. Choose your Destination Region, and then enter your New DB Snapshot Identifier. Change Enable Encryption to Yes. Select your Master Key from the list, and then choose Copy Snapshot. After the snapshot status is available, the Encrypted field will be True to indicate that the snapshot is encrypted. You now have an encrypted snapshot of your DB. You can use this encrypted DB snapshot to restore the DB instance from the DB snapshot. Reference: https://aws.amazon.com/premiumsupport/knowledge-center/encrypt-rds-snapshots/

A corporation uses an AWS application to offer content to its subscribers worldwide. Numerous Amazon EC2 instances are deployed on a private subnet behind an Application Load Balancer for the application (ALB). The chief information officer (CIO) wishes to limit access to some nations due to a recent change in copyright regulations. Which course of action will satisfy these criteria? A. Modify the ALB security group to deny incoming traffic from blocked countries. B. Modify the security group for EC2 instances to deny incoming traffic from blocked countries. C. Use Amazon CloudFront to serve the application and deny access to blocked countries. D. Use ALB listener rules to return access denied responses to incoming traffic from blocked countries.

C. Use Amazon CloudFront to serve the application and deny access to blocked countries. "Block access for certain countries." You can use geo restriction, also known as geo blocking, to prevent users in specific geographic locations from accessing content that you're distributing through a CloudFront web distribution. Reference: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/georestrictions.html

A business wishes to automate the evaluation of the security of its Amazon EC2 instances. The organization must verify and show that the development process adheres to security and compliance requirements. What actions should a solutions architect take to ensure that these criteria are met? A. Use Amazon Macie to automatically discover, classify and protect the EC2 instances. B. Use Amazon GuardDuty to publish Amazon Simple Notification Service (Amazon SNS) notifications. C. Use Amazon Inspector with Amazon CloudWatch to publish Amazon Simple Notification Service (Amazon SNS) notifications D. Use Amazon EventBridge (Amazon CloudWatch Events) to detect and react to changes in the status of AWS Trusted Advisor checks.

C. Use Amazon Inspector with Amazon CloudWatch to publish Amazon Simple Notification Service (Amazon SNS) notifications. GuardDuty: its aim is to analyze logs - CloudTrail logs (unusual API calls, unauthorized deployments), VPC Flow Logs (unusual internal traffic, unusual IP addresses), and DNS logs (compromised EC2 instances sending encoded data within DNS queries). It can protect against cryptocurrency attacks (it has a dedicated finding for them) and uses machine learning. Macie helps identify and alert you to sensitive data, such as personally identifiable information (PII); it applies only to S3. Inspector is specific to EC2: it provides automated security assessments for EC2 instances and requires agent installation on EC2 for host assessments (vulnerabilities/best practices), or it can run network assessments for EC2 without installing an agent. 1) Macie: checks data patterns in S3 (using AI), like PII or other sensitive information. 2) Inspector: checks what happens when you actually get attacked (useful for assessments); proactive. 3) GuardDuty: analyzes the actual events that happened in the AWS account it is running in; reactive. 4) EventBridge: an AWS serverless service that helps build event-driven applications.

A newly formed company developed a three-tier web application. The front end consists entirely of static content. Microservices form the application layer. User data is kept in the form of JSON documents that must be accessible with a minimum of delay. The firm anticipates minimal regular traffic in the first year, with monthly traffic spikes. The startup team's operational overhead expenditures must be kept to a minimum. What should a solutions architect suggest as a means of achieving this? A. Use Amazon S3 static website hosting to store and serve the front end. Use AWS Elastic Beanstalk for the application layer. Use Amazon DynamoDB to store user data. B. Use Amazon S3 static website hosting to store and serve the front end. Use Amazon Elastic Kubernetes Service (Amazon EKS) for the application layer. Use Amazon DynamoDB to store user data. C. Use Amazon S3 static website hosting to store and serve the front end. Use Amazon API Gateway and AWS Lambda functions for the application layer. Use Amazon DynamoDB to store user data. D. Use Amazon S3 static website hosting to store and serve the front end. Use Amazon API Gateway and AWS Lambda functions for the application layer. Use Amazon RDS with read replicas to store user data.

C. Use Amazon S3 static website hosting to store and serve the front end. Use Amazon API Gateway and AWS Lambda functions for the application layer. Use Amazon DynamoDB to store user data. Key: "The startup team needs to minimize operational overhead costs" Ref: https://docs.aws.amazon.com/whitepapers/latest/microservices-on-aws/serverless-microservices.html "Figure 3 shows the architecture of a serverless microservice with AWS Lambda where the complete service is built out of managed services, which eliminates the architectural burden to design for scale and high availability and eliminates the operational efforts of running and monitoring the microservice's underlying infrastructure."
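A minimal sketch of one serverless microservice from option C: a Lambda handler behind API Gateway that reads a JSON user document from DynamoDB. The table name, key schema, and event shape (API Gateway proxy integration) are assumptions for illustration.

```python
import json
import boto3

# Table name is a placeholder; the Lambda execution role needs dynamodb:GetItem.
table = boto3.resource("dynamodb").Table("UserProfiles")

def handler(event, context):
    # API Gateway proxy integration passes path parameters in the event.
    user_id = event["pathParameters"]["userId"]

    # Single-digit-millisecond key-value lookup of the JSON user document.
    response = table.get_item(Key={"userId": user_id})
    item = response.get("Item")

    if item is None:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
    return {"statusCode": 200, "body": json.dumps(item, default=str)}
```

Because API Gateway, Lambda, and DynamoDB are all managed and scale with request volume, the monthly traffic spikes are absorbed without any servers to operate, which is the "minimal operational overhead" the question asks for.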

A business relies on Amazon S3 for object storage. The organization stores data in hundreds of S3 buckets. Certain S3 buckets contain less frequently accessed data than others. According to a solutions architect, lifecycle rules are either not followed consistently or are enforced in part, resulting in data being held in high-cost storage. Which option will reduce expenses without jeopardizing object availability? A. Use S3 ACLs. B. Use Amazon Elastic Block Store (Amazon EBS) automated snapshots. C. Use S3 Intelligent-Tiering storage. D. Use S3 One Zone-Infrequent Access (S3 One Zone-IA).

C. Use S3 Intelligent-Tiering storage. Will go with S3 Intelligent-Tiering because we are not sure about the access frequency. ANSWER=C S3 Intelligent-Tiering is for data with unknown or changing access patterns. It is an Amazon S3 storage class designed for customers who want to optimize storage costs automatically when data access patterns change, without performance impact or operational overhead. It is the first cloud object storage class that delivers automatic cost savings by moving data between two access tiers — frequent access and infrequent access — when access patterns change, and is ideal for data with unknown or changing access patterns.

A solutions architect is developing a daily data processing task that will take up to two hours to finish. If the task is stopped, it must be restarted from scratch. What is the MOST cost-effective way for the solutions architect to solve this issue? A. Create a script that runs locally on an Amazon EC2 Reserved Instance that is triggered by a cron job. B. Create an AWS Lambda function triggered by an Amazon EventBridge (Amazon CloudWatch Events) scheduled event. C. Use an Amazon Elastic Container Service (Amazon ECS) Fargate task triggered by an Amazon EventBridge (Amazon CloudWatch Events) scheduled event. D. Use an Amazon Elastic Container Service (Amazon ECS) task running on Amazon EC2 triggered by an Amazon EventBridge (Amazon CloudWatch Events) scheduled event.

C. Use an Amazon Elastic Container Service (Amazon ECS) Fargate task triggered by an Amazon EventBridge (Amazon CloudWatch Events) scheduled event. A is wrong; "EC2 Reserved Instance" not cost effective compared to serverless B is wrong; Lambda runs for 15 minutes max D is wrong; "running on Amazon EC2" not cost effective https://docs.aws.amazon.com/AmazonECS/latest/developerguide/AWS_Fargate.html AWS Fargate is a technology that you can use with Amazon ECS to run containers without having to manage servers or clusters of Amazon EC2 instances. https://aws.amazon.com/eventbridge/ Amazon EventBridge is a serverless event bus that makes it easier to build event-driven applications at scale using events generated from your applications, integrated Software-as-a-Service (SaaS) applications, and AWS services.

An online picture program enables users to upload photographs and modify them. The application provides two distinct service levels: free and paid. Paid users' photos are processed ahead of those submitted by free users. Amazon S3 is used to store the photos, while Amazon SQS is used to store the job information. How should a solutions architect propose a configuration? A. Use one SQS FIFO queue. Assign a higher priority to the paid photos so they are processed first. B. Use two SQS FIFO queues: one for paid and one for free. Set the free queue to use short polling and the paid queue to use long polling. C. Use two SQS standard queues: one for paid and one for free. Configure Amazon EC2 instances to prioritize polling for the paid queue over the free queue. D. Use one SQS standard queue. Set the visibility timeout of the paid photos to zero. Configure Amazon EC2 instances to prioritize visibility settings so paid photos are processed first.

C. Use two SQS standard queues: one for paid and one for free. Configure Amazon EC2 instances to prioritize polling for the paid queue over the free queue. C, check this out: https://acloud.guru/forums/guru-of-the-week/discussion/-L7Be8rOao3InQxdQcXj/
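A minimal worker sketch for option C: always poll the paid queue first and fall back to the free queue only when the paid queue is empty. The queue URLs are placeholders.

```python
import boto3

sqs = boto3.client("sqs")

# Queue URLs are placeholders.
PAID_QUEUE = "https://sqs.us-east-1.amazonaws.com/111111111111/photo-jobs-paid"
FREE_QUEUE = "https://sqs.us-east-1.amazonaws.com/111111111111/photo-jobs-free"

def next_job():
    # Drain the paid queue first; fall back to the free queue only
    # when no paid jobs are waiting.
    for queue_url in (PAID_QUEUE, FREE_QUEUE):
        resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1,
                                   WaitTimeSeconds=1)
        messages = resp.get("Messages", [])
        if messages:
            return queue_url, messages[0]
    return None, None

while True:
    queue_url, message = next_job()
    if message is None:
        continue
    # ... process the photo job described in message["Body"] ...
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```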

A business does not currently have any file sharing services. A new project needs file storage that can be mounted as a disk for on-premises desktop computers. Before users can access the storage, the file server must authenticate them against an Active Directory domain. Which service enables Active Directory users to mount storage on their workstations as a drive? A. Amazon S3 Glacier B. AWS DataSync C. AWS Snowball Edge D. AWS Storage Gateway

D. AWS Storage Gateway It's D. Text and Link below: "Before you create an SMB file share, make sure that you configure SMB security settings for your file gateway. You also configure either Microsoft Active Directory (AD) or guest access for authentication." https://docs.aws.amazon.com/storagegateway/latest/userguide/CreatingAnSMBFileShare.html

A business has launched a mobile multiplayer game. The game demands real-time monitoring of participants' latitude and longitude positions. The game's data storage must be capable of quick updates and location retrieval. The game stores location data on an Amazon RDS for PostgreSQL DB instance with read replicas. The database is unable to sustain the speed required for reading and writing changes during high use times. The game's user base is rapidly growing. What should a solutions architect do to optimize the data tier's performance? A. Take a snapshot of the existing DB instance. Restore the snapshot with Multi-AZ enabled. B. Migrate from Amazon RDS to Amazon Elasticsearch Service (Amazon ES) with Kibana. C. Deploy Amazon DynamoDB Accelerator (DAX) in front of the existing DB instance. Modify the game to use DAX. D. Deploy an Amazon ElastiCache for Redis cluster in front of the existing DB instance. Modify the game to use Redis.

D. Deploy an Amazon ElastiCache for Redis cluster in front of the existing DB instance. Modify the game to use Redis. The answer is D. Keywords: the game requires live location tracking of players based on latitude and longitude, with rapid updates and retrievals, which an in-memory store like Redis handles well.
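A sketch of how the game might use Redis's geospatial commands on ElastiCache for fast location writes and reads; the endpoint is a placeholder, the snippet assumes redis-py 4.x, and GEOSEARCH assumes Redis 6.2+ (older engines would use GEORADIUS instead).

```python
import redis

# ElastiCache for Redis endpoint is a placeholder.
r = redis.Redis(host="game-locations.xxxxxx.0001.use1.cache.amazonaws.com", port=6379)

# Fast location updates: GEOADD stores longitude/latitude per player.
def update_location(player_id: str, lon: float, lat: float) -> None:
    r.geoadd("players", (lon, lat, player_id))

# Fast retrieval: find all players within a radius of a point (Redis 6.2+ GEOSEARCH).
def players_nearby(lon: float, lat: float, radius_km: float = 5.0):
    return r.geosearch("players", longitude=lon, latitude=lat,
                       radius=radius_km, unit="km")

update_location("player-42", -122.33, 47.61)
print(players_nearby(-122.33, 47.61))
```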

A company's website stores transactional data on an Amazon RDS MySQL Multi-AZ DB instance. Other internal systems query this database instance to get data for batch processing. When internal systems request data from the RDS DB instance, the RDS DB instance drastically slows down. This has an adverse effect on the website's read and write performance, resulting in poor response times for users. Which approach will result in an increase in website performance? A. Use an RDS PostgreSQL DB instance instead of a MySQL database. B. Use Amazon ElastiCache to cache the query responses for the website. C. Add an additional Availability Zone to the current RDS MySQL Multi-AZ DB instance. D. Add a read replica to the RDS DB instance and configure the internal systems to query the read replica.

D. Add a read replica to the RDS DB instance and configure the internal systems to query the read replica. Amazon RDS read replicas - enhanced performance: you can reduce the load on your source DB instance by routing read queries from your applications to the read replica. Read replicas allow you to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads. Because read replicas can be promoted to master status, they are useful as part of a sharding implementation. To further maximize read performance, Amazon RDS for MySQL allows you to add table indexes directly to read replicas, without those indexes being present on the master. Reference: https://aws.amazon.com/rds/features/read-replicas Here the internal systems only fetch data, meaning they perform only SELECT statements; read replicas serve read-only (SELECT) queries, not INSERT, UPDATE, or DELETE.

A firm is developing a web application on AWS utilizing containers. At any one moment, the organization needs three instances of the web application to be running. The application must be scalable in order to keep up with demand increases. While management is cost-conscious, they agree that the application should be highly accessible. What recommendations should a solutions architect make? A. Add an execution role to the function with lambda:InvokeFunction as the action and * as the principal. B. Add an execution role to the function with lambda:InvokeFunction as the action and Service:amazonaws.com as the principal. C. Add a resource-based policy to the function with lambda:* as the action and Service:events.amazonaws.com as the principal. D. Add a resource-based policy to the function with lambda:InvokeFunction as the action and Service:events.amazonaws.com as the principal.

D. Add a resource-based policy to the function with lambda:InvokeFunction as the action and Service:events.amazonaws.com as the principal. A resource-based policy on the function is what allows the EventBridge (CloudWatch Events) service principal to invoke it, and granting only lambda:InvokeFunction keeps the permission scoped to what is needed.
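Option D in boto3 terms: add a resource-based policy statement to the function that lets the EventBridge service principal invoke it. The function name and rule ARN are placeholders.

```python
import boto3

lambda_client = boto3.client("lambda")

# Function name and rule ARN are placeholders. This adds a statement to the
# function's resource-based policy allowing the EventBridge rule to invoke it.
lambda_client.add_permission(
    FunctionName="process-scheduled-job",
    StatementId="allow-eventbridge-invoke",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn="arn:aws:events:us-east-1:111111111111:rule/nightly-schedule",
)
```

Scoping the statement with SourceArn limits the grant to one specific rule rather than every EventBridge rule in the account.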

On a cluster of Amazon Linux EC2 instances, a business runs an application. The organization is required to store all application log files for seven years for compliance purposes. The log files will be evaluated by a reporting program, which will need concurrent access to all files. Which storage system best satisfies these criteria in terms of cost-effectiveness? A. Amazon Elastic Block Store (Amazon EBS) B. Amazon Elastic File System (Amazon EFS) C. Amazon EC2 instance store D. Amazon S3

D. Amazon S3. Requests to Amazon S3 can be authenticated or anonymous. Authenticated access requires credentials that AWS can use to authenticate your requests. When making REST API calls directly from your code, you create a signature using valid credentials and include the signature in your request. Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance. This means customers of all sizes and industries can use it to store and protect any amount of data for a range of use cases, such as websites, mobile applications, backup and restore, archive, enterprise applications, IoT devices, and big data analytics. Amazon S3 provides easy-to-use management features so you can organize your data and configure finely tuned access controls to meet your specific business, organizational, and compliance requirements. Amazon S3 is designed for 99.999999999% (11 9s) of durability, and stores data for millions of applications for companies all around the world. Reference: https://aws.amazon.com/s3/ Instance store is used for temporary data, which is lost when the EC2 instance stops. EBS cannot be accessed concurrently (unless using io1/io2 Multi-Attach). EFS is suited for Linux workloads that need to share the same data; it is easily confused with S3, which also allows concurrent access. Since the aim is not specifically a file system (log files can easily be stored in S3), I would go with S3 considering costs (EFS is very expensive). NOTE: Both S3 and EFS (Standard, EFS-IA) have tiering options.

A business is developing a web-based application that will operate on Amazon EC2 instances distributed across several Availability Zones. The online application will enable access to a collection of over 900 TB of text content. The corporation expects times of heavy demand for the online application. A solutions architect must guarantee that the text document storage component can scale to meet the application's demand at all times. The corporation is concerned about the solution's total cost. Which storage system best satisfies these criteria in terms of cost-effectiveness? A. Amazon Elastic Block Store (Amazon EBS) B. Amazon Elastic File System (Amazon EFS) C. Amazon Elasticsearch Service (Amazon ES) D. Amazon S3

D. Amazon S3. NOTE THE KEY WORDS HERE: "repository of text", "storage component for the text documents can scale to meet the demand of the application at all times", "MOST cost-effectively". First, C is out of the question because they are asking about a storage component, and Amazon Elasticsearch Service is not a storage component. A (Amazon EBS) is out because they ask for the most cost-effective option. That leaves two options (B and D). Some argue it should be B because of the first line, "A company is building a web-based application running on Amazon EC2 instances in multiple Availability Zones", and at first I was also inclined toward B (EFS storage). But on a second read, the words "repository" and "most cost-effective" make me finally go with answer D (an S3 bucket).

On Amazon EC2, a corporation is operating a highly secure application that is backed up by an Amazon RDS database. All personally identifiable information (PII) must be encrypted at rest to comply with compliance standards. Which solution should a solutions architect propose in order to achieve this need with the MINIMUM number of infrastructure changes? A. Deploy AWS Certificate Manager to generate certificates. Use the certificates to encrypt the database volume. B. Deploy AWS CloudHSM, generate encryption keys, and use the customer master key (CMK) to encrypt database volumes. C. Configure SSL encryption using AWS Key Management Service customer master keys (AWS KMS CMKs) to encrypt database volumes. D. Configure Amazon Elastic Block Store (Amazon EBS) encryption and Amazon RDS encryption with AWS Key Management Service (AWS KMS) keys to encrypt instance and database volumes.

D. Configure Amazon Elastic Block Store (Amazon EBS) encryption and Amazon RDS encryption with AWS Key Management Service (AWS KMS) keys to encrypt instance and database volumes. D seems to be the right option, as it will encrypt both the EC2 EBS volume and the RDS database.

Application developers have found that when business reporting users run large production reports against the Amazon RDS instance that powers the application, the application becomes very sluggish. While the reporting queries are executing, the RDS instance's CPU and memory usage metrics do not surpass 60%. Business reporting users must be able to produce reports without impairing the functionality of the application. Which action is necessary to achieve this? A. Increase the size of the RDS instance. B. Create a read replica and connect the application to it. C. Enable multiple Availability Zones on the RDS instance. D. Create a read replica and connect the business reports to it.

D. Create a read replica and connect the business reports to it. "Business reporting or data warehousing scenarios where you might want business reporting queries to run against a read replica, rather than your production DB instance." The above paragraph can be found here: https://docs.aws.amazon.com/en_en/AmazonRDS/latest/UserGuide/USER_ReadRepl.html The answer is D.
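A minimal boto3 sketch of option D (instance identifiers and class are hypothetical); the reporting tool would then be pointed at the replica's endpoint while the application keeps using the primary:

import boto3

rds = boto3.client("rds")

# Create a read replica of the production instance for the reporting queries
resp = rds.create_db_instance_read_replica(
    DBInstanceIdentifier="prod-db-reporting-replica",
    SourceDBInstanceIdentifier="prod-db",
    DBInstanceClass="db.r5.large",
)
print(resp["DBInstance"]["DBInstanceIdentifier"])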

A business is operating a worldwide application. Users upload various videos, which are subsequently combined into a single video file. The program receives uploads from users through a single Amazon S3 bucket in the us-east-1 Region. The same S3 bucket also serves as the download point for the generated video file. The finished video file is around 250 GB in size. The organization requires a solution that enables quicker uploads and downloads of video files stored in Amazon S3. The corporation will charge consumers who choose to pay for the faster speed a monthly fee. What actions should a solutions architect take to ensure that these criteria are met? A. Enable AWS Global Accelerator for the S3 endpoint. Adjust the application's upload and download links to use the Global Accelerator S3 endpoint for users who have a subscription. B. Enable S3 Cross-Region Replication to S3 buckets in all other AWS Regions. Use an Amazon Route 53 geolocation routing policy to route S3 requests based on the location of users who have a subscription. C. Create an Amazon CloudFront distribution and use the S3 bucket in us-east-1 as an origin. Adjust the application to use the CloudFront URL as the upload and download links for users who have a subscription. D. Enable S3 Transfer Acceleration for the S3 bucket in us-east-1. Configure the application to use the bucket's S3-accelerate endpoint domain name for the upload and download links for users who have a subscription.

D. Enable S3 Transfer Acceleration for the S3 bucket in us-east-1. Configure the application to use the bucket's S3-accelerate endpoint domain name for the upload and download links for users who have a subscription. D is correct. When you create a CloudFront distribution with an origin pointing to your S3 bucket, you enable caching at edge locations; subsequent requests for the same objects are served from the edge cache, which is faster for the end user and also reduces the load on your origin, but CloudFront is primarily a content delivery (download) service. When you enable S3 Transfer Acceleration for your S3 bucket and use <bucket>.s3-accelerate.amazonaws.com instead of the default S3 endpoint, transfers are routed through the same edge locations, but the network path is optimized for long-distance, large-object uploads and downloads; extra resources and optimizations achieve higher throughput, with no caching at edge locations.
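A rough boto3 sketch of option D (the bucket name and object keys are placeholders): enable Transfer Acceleration on the bucket once, then have the application's clients for subscribed users use the accelerate endpoint:

import boto3
from botocore.config import Config

bucket = "video-uploads-example"

# One-time: enable Transfer Acceleration on the bucket
boto3.client("s3").put_bucket_accelerate_configuration(
    Bucket=bucket,
    AccelerateConfiguration={"Status": "Enabled"},
)

# For subscribed users, a client that uses <bucket>.s3-accelerate.amazonaws.com
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3_accel.upload_file("clip.mp4", bucket, "uploads/user-123/clip.mp4")
s3_accel.download_file(bucket, "final/render-456.mp4", "render-456.mp4")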

Previously, a corporation moved their data warehousing solution to AWS. Additionally, the firm has an AWS Direct Connect connection. Through the use of a visualization tool, users in the corporate office may query the data warehouse. Each query answered by the data warehouse is on average 50 MB in size, whereas each webpage supplied by the visualization tool is around 500 KB in size. The data warehouse does not cache the result sets it returns. Which approach results in the LOWEST OUTGOING DATA TRANSFER COSTS FOR THE COMPANY? A. Host the visualization tool on premises and query the data warehouse directly over the internet. B. Host the visualization tool in the same AWS Region as the data warehouse. Access it over the internet. C. Host the visualization tool on premises and query the data warehouse directly over a Direct Connect connection at a location in the same AWS Region. D. Host the visualization tool in the same AWS Region as the data warehouse and access it over a DirectConnect connection at a location in the same Region.

D. Host the visualization tool in the same AWS Region as the data warehouse and access it over a Direct Connect connection at a location in the same Region. Data transfer pricing over Direct Connect is lower than data transfer pricing over the internet, so A and B are out. D is better than C because with D the large 50 MB query results stay within AWS (tool and warehouse in the same Region) and only the small 500 KB webpages leave AWS over Direct Connect, whereas with C every 50 MB result set is transferred out of AWS.

A company's on-premises infrastructure and AWS need a secure connection. This connection does not require a large amount of bandwidth and will carry only a limited amount of traffic. The link should be established quickly. Which method is the CHEAPEST way to establish this type of connection? A. Implement a client VPN. B. Implement AWS Direct Connect. C. Implement a bastion host on Amazon EC2. D. Implement an AWS Site-to-Site VPN connection.

D. Implement an AWS Site-to-Site VPN connection. Answer D. The requirement is a connection between the on-premises environment and AWS as a whole, not individual client connections, so a client VPN does not fit and this has to be a Site-to-Site VPN. Direct Connect costs more and takes far longer to provision, so it fails both the cost and the quick-setup requirements.
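A hedged boto3 sketch of option D (the public IP, ASN, and VPC ID are placeholders): a Site-to-Site VPN needs a customer gateway for the on-premises device, a virtual private gateway attached to the VPC, and the VPN connection itself:

import boto3

ec2 = boto3.client("ec2")

cgw = ec2.create_customer_gateway(BgpAsn=65000, PublicIp="203.0.113.12", Type="ipsec.1")
vgw = ec2.create_vpn_gateway(Type="ipsec.1")
ec2.attach_vpn_gateway(VpcId="vpc-0abc1234", VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"])

vpn = ec2.create_vpn_connection(
    CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"],
    Type="ipsec.1",
    Options={"StaticRoutesOnly": True},  # static routing for a simple, low-traffic link
)
print(vpn["VpnConnection"]["VpnConnectionId"])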

A business uses Amazon EC2 instances to operate an API-based inventory reporting application. The program makes use of an Amazon DynamoDB database to store data. The distribution centers of the corporation use an on-premises shipping application that communicates with an API to update inventory prior to generating shipping labels. Each day, the organization has seen application outages, resulting in missed transactions. What should a solutions architect propose to increase the resilience of an application? A. Modify the shipping application to write to a local database. B. Modify the application APIs to run serverless using AWS Lambda C. Configure Amazon API Gateway to call the EC2 inventory application APIs. D. Modify the application to send inventory updates using Amazon Simple Queue Service (Amazon SQS).

D. Modify the application to send inventory updates using Amazon Simple Queue Service (Amazon SQS). An SQS queue decouples the shipping application from the inventory API, so updates are buffered durably during application outages and processed once the API is available again instead of being lost.
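A minimal sketch of option D (the queue URL and payload fields are hypothetical): the shipping application publishes inventory updates to an SQS queue instead of calling the API directly:

import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111122223333/inventory-updates"  # placeholder

def send_inventory_update(sku: str, delta: int) -> None:
    # The message waits in the queue until the inventory application is healthy enough to consume it
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({"sku": sku, "delta": delta}),
    )

send_inventory_update("SKU-1001", -3)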

A business operates an application that collects data from its consumers through various Amazon EC2 instances. After processing, the data is uploaded to Amazon S3 for long-term storage. A study of the application reveals that the EC2 instances were inactive for extended periods of time. A solutions architect must provide a system that maximizes usage while minimizing expenditures. Which solution satisfies these criteria? A. Use Amazon EC2 in an Auto Scaling group with On-Demand instances. B. Build the application to use Amazon Lightsail with On-Demand Instances. C. Create an Amazon CloudWatch cron job to automatically stop the EC2 instances when there is no activity. D. Redesign the application to use an event-driven design with Amazon Simple Queue Service (Amazon SQS) and AWS Lambda.

D. Redesign the application to use an event-driven design with Amazon Simple Queue Service (Amazon SQS) and AWS Lambda. Lambda and SQS are the cheapest options here. With AWS Lambda, you pay only for what you use: you are charged based on the number of requests for your functions and the duration (the time it takes for your code to execute). For SQS, the first 1 million requests per month are free. The key point is that the instances sit idle for long periods, so an event-driven design that only consumes compute while processing messages maximizes utilization; Lambda is the best fit since pricing is by usage.
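As an illustration of option D (the bucket name and message shape are assumptions, not from the question), the processing step could be an SQS-triggered Lambda function that writes results to S3, so compute is paid for only while messages are being handled:

import json
import boto3

s3 = boto3.client("s3")
BUCKET = "processed-customer-data"  # placeholder for the long-term storage bucket

def handler(event, context):
    # SQS-triggered Lambda: one invocation receives a batch of records
    for record in event["Records"]:
        payload = json.loads(record["body"])
        s3.put_object(
            Bucket=BUCKET,
            Key=f"records/{record['messageId']}.json",
            Body=json.dumps(payload).encode(),
        )
    return {"processed": len(event["Records"])}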

A business is prepared to use Amazon S3 to store sensitive data. Data must be encrypted at rest for compliance purposes. Auditing of encryption key use is required. Keys must be rotated every year. Which solution meets these requirements with the LEAST operational overhead? A. Server-side encryption with customer-provided keys (SSE-C) B. Server-side encryption with Amazon S3 managed keys (SSE-S3) C. Server-side encryption with AWS KMS (SSE-KMS) customer master keys (CMKs) with manual rotation D. Server-side encryption with AWS KMS (SSE-KMS) customer master keys (CMKs) with automatic rotation

D. Server-side encryption with AWS KMS (SSE-KMS) customer master keys (CMKs) with automatic rotation. Key rotation in AWS KMS is a cryptographic best practice that is designed to be transparent and easy to use. AWS KMS supports optional automatic key rotation only for customer managed CMKs. Automatic key rotation is disabled by default on customer managed CMKs; when you enable (or re-enable) key rotation, AWS KMS automatically rotates the CMK 365 days after the enable date and every 365 days thereafter. SSE-KMS also records every use of the key in AWS CloudTrail, which satisfies the auditing requirement. https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html
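A rough sketch of option D (the bucket name is a placeholder): create a customer managed KMS key, turn on automatic annual rotation, and set it as the bucket's default encryption:

import boto3

kms = boto3.client("kms")
key = kms.create_key(Description="S3 sensitive-data key")
key_id = key["KeyMetadata"]["KeyId"]

kms.enable_key_rotation(KeyId=key_id)   # automatic rotation every 365 days

s3 = boto3.client("s3")
s3.put_bucket_encryption(
    Bucket="sensitive-data-example",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms", "KMSMasterKeyID": key_id}}
        ]
    },
)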

To facilitate experimentation and agility, a business enables developers to link current IAM policies to existing IAM roles. The security operations team, on the other hand, is worried that the developers may attach the current administrator policy, allowing them to bypass any other security rules. What approach should a solutions architect use in dealing with this issue? A. Create an Amazon SNS topic to send an alert every time a developer creates a new policy. B. Use service control policies to disable IAM activity across all account in the organizational unit. C. Prevent the developers from attaching any policies and assign all IAM duties to the security operations team. D. Set an IAM permissions boundary on the developer IAM role that explicitly denies attaching the administrator policy.

D. Set an IAM permissions boundary on the developer IAM role that explicitly denies attaching the administrator policy. We only have to address the specific problem in the question, and the security operations team is concerned solely about the administrator policy. The correct answer is D, because an IAM permissions boundary on the developer role limits the maximum permissions that an identity-based policy can grant, without blocking the experimentation and agility the developers need.
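A hedged sketch of option D (the role name, policy name, and broad allow statement are placeholders): a boundary policy that permits normal work but explicitly denies attaching the AdministratorAccess managed policy, set as the permissions boundary of the developer role:

import json
import boto3

iam = boto3.client("iam")

boundary_doc = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "*", "Resource": "*"},  # in practice, a narrower allow list
        {
            "Effect": "Deny",
            "Action": ["iam:AttachRolePolicy", "iam:AttachUserPolicy", "iam:AttachGroupPolicy"],
            "Resource": "*",
            "Condition": {
                "ArnEquals": {"iam:PolicyARN": "arn:aws:iam::aws:policy/AdministratorAccess"}
            },
        },
    ],
}

policy = iam.create_policy(
    PolicyName="DeveloperBoundary",
    PolicyDocument=json.dumps(boundary_doc),
)
iam.put_role_permissions_boundary(
    RoleName="developer-role",
    PermissionsBoundary=policy["Policy"]["Arn"],
)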

Each month, a leasing firm prepares and delivers PDF statements to all of its clients. Each statement is around 400 KB in length. Customers may obtain their statements from the website for a period of up to 30 days after they are created. Customers are sent a ZIP file containing all of their statements at the conclusion of their three-year lease. Which storage method is the MOST cost-effective in this situation? A. Store the statements using the Amazon S3 Standard storage class. Create a lifecycle policy to move the statements to Amazon S3 Glacier storage after 1 day. B. Store the statements using the Amazon S3 Glacier storage class. Create a lifecycle policy to move the statements to Amazon S3 Glacier Deep Archive storage after 30 days. C. Store the statements using the Amazon S3 Standard storage class. Create a lifecycle policy to move the statements to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) storage after 30 days. D. Store the statements using the Amazon S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Create a lifecycle policy to move the statements to Amazon S3 Glacier storage after 30 days.

D. Store the statements using the Amazon S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Create a lifecycle policy to move the statements to Amazon S3 Glacier storage after 30 days. S3 Standard-IA transitioning to Glacier is the cost-effective option in this case. Objects can be uploaded directly to S3 Standard-IA and then transitioned to S3 Glacier after 30 days. The minimum storage duration is the period you are billed for even if the object is deleted or moved earlier; a new object could be moved to Glacier immediately, but the commitment there is 90 days, so deleting it the next day would still incur 90 days of charges. A is incorrect because moving to Glacier within a day would mean long retrieval waits and a manual restore process during the 30 days customers still need their statements. B is incorrect for the same reason: it stores the statements in Glacier from the first day, so customers cannot retrieve them directly without a restore. Remember that S3 Standard and Intelligent-Tiering have no retrieval costs, and S3 Standard, Intelligent-Tiering, Standard-IA, and One Zone-IA all offer immediate retrieval (unlike Glacier and Deep Archive). Looking at C carefully, it reduces redundancy by going to a single Availability Zone, and the three-year retention implies using Glacier or Deep Archive for maximum cost benefit. D fits: customers can immediately get files from Standard-IA (with a small retrieval cost, offset by the lower storage cost and the fact that not every customer downloads every statement), and Glacier over three years is cheaper than C. Storing in Standard-IA for 30 days is also fully justified because the 30-day minimum storage duration is consumed before the transition to Glacier, so there is no early-transition penalty.
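A minimal boto3 sketch of option D (bucket name, prefix, and the three-year expiration are assumptions): upload statements directly to Standard-IA and add a lifecycle rule that transitions them to Glacier after 30 days:

import boto3

s3 = boto3.client("s3")
bucket = "lease-statements-example"

# Upload a monthly statement straight into Standard-IA
s3.put_object(
    Bucket=bucket,
    Key="statements/customer-42/2021-06.pdf",
    Body=open("2021-06.pdf", "rb"),
    StorageClass="STANDARD_IA",
)

# Transition to Glacier after 30 days; expire after roughly three years (assumption)
s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "statements-to-glacier",
                "Filter": {"Prefix": "statements/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 1095},
            }
        ]
    },
)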

A corporation is using AWS to construct a new machine learning model solution. The models are constructed as self-contained microservices that fetch around 1 GB of model data from Amazon S3 and load it into memory during startup. The models are accessed by users through an asynchronous API. Users may submit a single request or a batch of requests and designate the destination for the results. Hundreds of people benefit from the company's models. The models' usage patterns are erratic: certain models may go days or weeks without being used, while other models may receive hundreds of queries concurrently. Which solution satisfies these criteria? A. The requests from the API are sent to an Application Load Balancer (ALB). Models are deployed as AWS Lambda functions invoked by the ALB. B. The requests from the API are sent to the models' Amazon Simple Queue Service (Amazon SQS) queue. Models are deployed as AWS Lambda functions triggered by SQS events. AWS Auto Scaling is enabled on Lambda to increase the number of vCPUs based on the SQS queue size. C. The requests from the API are sent to the model's Amazon Simple Queue Service (Amazon SQS) queue. Models are deployed as Amazon Elastic Container Service (Amazon ECS) services reading from the queue. AWS App Mesh scales the instances of the ECS cluster based on the SQS queue size. D. The requests from the API are sent to the models' Amazon Simple Queue Service (Amazon SQS) queue. Models are deployed as Amazon Elastic Container Service (Amazon ECS) services reading from the queue. AWS Auto Scaling is enabled on Amazon ECS for both the cluster and copies of the service based on the queue size.

D. The requests from the API are sent to the models' Amazon Simple Queue Service (Amazon SQS) queue. Models are deployed as Amazon Elastic Container Service (Amazon ECS) services reading from the queue. AWS Auto Scaling is enabled on Amazon ECS for both the cluster and copies of the service based on the queue size. Looking at the options: A would work functionally, but there is no buffering, so requests can be lost when no capacity is available. B: Lambda is a managed service that scales with load on its own, but you cannot attach AWS Auto Scaling to it to increase vCPUs, so the option as written does not hold together. C: AWS App Mesh is an application-level networking (service mesh) service; it does not scale ECS capacity. D works without a doubt: requests from the API go to the model's SQS queue (valid), models run as ECS services reading from the queue (makes sense), and AWS Auto Scaling adjusts both the cluster and the service copies based on queue size. D is the answer; Lambda is also a poor fit for loading the roughly 1 GB model, since (at the time of this exam) the Lambda /tmp directory was limited to 512 MB.
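A rough sketch of the scaling part of option D (cluster, service, queue names, and the target value are placeholders): register the ECS service as a scalable target and attach a target tracking policy on the SQS queue depth:

import boto3

aas = boto3.client("application-autoscaling")
resource_id = "service/model-cluster/model-a-service"  # placeholder ECS service

aas.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=0,      # idle models scale to zero copies
    MaxCapacity=50,
)

aas.put_scaling_policy(
    PolicyName="scale-on-queue-depth",
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 10.0,  # aim for roughly 10 visible messages per running copy
        "CustomizedMetricSpecification": {
            "MetricName": "ApproximateNumberOfMessagesVisible",
            "Namespace": "AWS/SQS",
            "Dimensions": [{"Name": "QueueName", "Value": "model-a-requests"}],
            "Statistic": "Average",
        },
    },
)

In practice a custom "backlog per task" metric is often preferred over the raw queue depth, but the idea is the same: the service scales with the queue.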

A business intends to use AWS to host a survey website. The firm anticipates a high volume of traffic. As a consequence of this traffic, the database is updated asynchronously. The organization wants to avoid dropping writes to the database hosted on AWS. How should the business's application be written to handle these database requests? A. Configure the application to publish to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe the database to the SNS topic. B. Configure the application to subscribe to an Amazon Simple Notification Service (Amazon SNS) topic. Publish the database updates to the SNS topic. C. Use Amazon Simple Queue Service (Amazon SQS) FIFO queues to queue the database connection until the database has resources to write the data. D. Use Amazon Simple Queue Service (Amazon SQS) FIFO queues for capturing the writes and draining the queue as each write is made to the database.

D. Use Amazon Simple Queue Service (Amazon SQS) FIFO queues for capturing the writes and draining the queue as each write is made to the database. SNS cannot publish directly to databases, so A and B are wrong. D is a better option than C because it ensures that every write is captured and eventually applied to the database.
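A minimal sketch of option D (queue and field names are placeholders): a FIFO queue captures each write in order with deduplication, and a separate consumer drains it as fast as the database can accept writes:

import json
import boto3

sqs = boto3.client("sqs")

queue = sqs.create_queue(
    QueueName="survey-writes.fifo",
    Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "true"},
)

# Web tier: enqueue each survey response instead of writing to the database directly
sqs.send_message(
    QueueUrl=queue["QueueUrl"],
    MessageBody=json.dumps({"survey_id": "s-1", "answer": "B"}),
    MessageGroupId="s-1",  # preserves per-survey ordering
)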

A business uses AWS to host its website. The organization has utilized Amazon EC2 Auto Scaling to accommodate the extremely fluctuating demand. Management is worried that the firm is overprovisioning its infrastructure, particularly at the three-tier application's front end. A solutions architect's primary responsibility is to guarantee that costs are minimized without sacrificing performance. What is the solution architect's role in achieving this? A. Use Auto Scaling with Reserved Instances. B. Use Auto Scaling with a scheduled scaling policy. C. Use Auto Scaling with the suspend-resume feature. D. Use Auto Scaling with a target tracking scaling policy.

D. Use Auto Scaling with a target tracking scaling policy. D is correct due to the highly variable demand: a target tracking policy automatically adds and removes capacity to keep a metric such as average CPU utilization at the target value, so the front end is neither overprovisioned nor underprovisioned. https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-target-tracking.html
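A hedged sketch of option D (the Auto Scaling group name and target value are placeholders): a target tracking policy that keeps average CPU around 50%, scaling the front end in and out as demand fluctuates:

import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-frontend-asg",
    PolicyName="cpu-target-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 50.0,  # scale out above ~50% CPU, scale in below it
    },
)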

On a fleet of Amazon EC2 instances, a business runs a production application. The program takes data from an Amazon SQS queue and concurrently processes the messages. The message volume is variable, and traffic often arrives intermittently. This program should handle messages continuously and without interruption. Which option best fits these criteria in terms of cost-effectiveness? A. Use Spot Instances exclusively to handle the maximum capacity required. B. Use Reserved Instances exclusively to handle the maximum capacity required. C. Use Reserved Instances for the baseline capacity and use Spot Instances to handle additional capacity. D. Use Reserved Instances for the baseline capacity and use On-Demand Instances to handle additional capacity.

D. Use Reserved Instances for the baseline capacity and use On-Demand Instances to handle additional capacity. The answer is D because Spot Instances can be reclaimed with little notice, so they cannot guarantee the continuous, uninterrupted message processing the application requires. Reserved Instances cover the baseline cheaply, and On-Demand Instances absorb the variable peaks.

Multiple Amazon EC2 instances are used to host an application. The program reads messages from an Amazon SQS queue, writes them to an Amazon RDS database, and then removes them from the queue. The RDS table sometimes contains duplicate entries. There are no duplicate messages in the SQS queue. How can a solutions architect guarantee that messages are handled just once? A. Use the CreateQueue API call to create a new queue. B. Use the AddPermission API call to add appropriate permissions. C. Use the ReceiveMessage API call to set an appropriate wait time. D. Use the ChangeMessageVisibility API call to increase the visibility timeout.

D. Use the ChangeMessageVisibility API call to increase the visibility timeout. The visibility timeout begins when Amazon SQS returns a message. During this time, the consumer processes and deletes the message. However, if the consumer fails before deleting the message and your system doesn't call the DeleteMessage action for that message before the visibility timeout expires, the message becomes visible to other consumers and the message is received again. If a message must be received only once, your consumer should delete it within the duration of the visibility timeout. https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-visibility-timeout.html
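A minimal sketch (the queue URL is a placeholder) of a consumer extending the visibility timeout so a message it is still processing does not become visible to, and get reprocessed by, another consumer:

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111122223333/work-queue"  # placeholder

resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=10)
for msg in resp.get("Messages", []):
    # Processing takes longer than the default visibility timeout, so extend it first
    sqs.change_message_visibility(
        QueueUrl=QUEUE_URL,
        ReceiptHandle=msg["ReceiptHandle"],
        VisibilityTimeout=300,
    )
    # ... write the record to the RDS database ...
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])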

What is the policy's net effect? A. Users will be allowed all actions except s3:PutObject if multi-factor authentication (MFA) is enabled. B. Users will be allowed all actions except s3:PutObject if multi-factor authentication (MFA) is not enabled. C. Users will be denied all actions except s3:PutObject if multi-factor authentication (MFA) is enabled. D. Users will be denied all actions except s3:PutObject if multi-factor authentication (MFA) is not enabled.

D. Users will be denied all actions except s3:PutObject if multi-factor authentication (MFA) is not enabled.
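The exam's policy document is not reproduced here, but a policy with this net effect typically looks like the hypothetical sketch below: a Deny on every action except s3:PutObject, conditioned on MFA not being present (written as a Python dict purely for illustration):

example_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAllExceptPutObjectWithoutMFA",
            "Effect": "Deny",
            "NotAction": "s3:PutObject",
            "Resource": "*",
            "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
        }
    ],
}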

A business is developing an application that is composed of many microservices. The organization has chosen to deploy its software on AWS through container technology. The business need a solution that requires little ongoing work for maintenance and growth. Additional infrastructure cannot be managed by the business. Which steps should a solutions architect perform in combination to satisfy these requirements? (Select two.) A. Deploy an Amazon Elastic Container Service (Amazon ECS) cluster. B. Deploy the Kubernetes control plane on Amazon EC2 instances that span multiple Availability Zones. C. Deploy an Amazon Elastic Container Service (Amazon ECS) service with an Amazon EC2 launch type. Specify a desired task number level of greater than or equal to 2. D. Deploy an Amazon Elastic Container Service (Amazon ECS) service with a Fargate launch type. Specify a desired task number level of greater than or equal to 2. E. Deploy Kubernetes worker nodes on Amazon EC2 instances that span multiple Availability Zones. Create a deployment that specifies two or more replicas for each microservice.

It should be A and D. The question repeatedly says that managing additional infrastructure must not be required, so the EC2 launch type and self-managed Kubernetes control planes and worker nodes are off the table. Fargate also works fine with microservices. (https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-java-microservices-on-amazon-ecs-using-aws-fargate.html)
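A rough boto3 sketch of option D (cluster, task definition, and subnet IDs are placeholders): a Fargate service with a desired count of at least 2 gives availability across subnets without any servers to manage:

import boto3

ecs = boto3.client("ecs")

ecs.create_service(
    cluster="microservices-cluster",
    serviceName="orders-service",
    taskDefinition="orders-task:1",
    desiredCount=2,                 # at least two copies for availability
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0aaa1111", "subnet-0bbb2222"],  # spread across Availability Zones
            "assignPublicIp": "ENABLED",
        }
    },
)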

AWS hosts a company's near-real-time streaming application. While the data is being ingested, a job is being performed on it that takes 30 minutes to finish. Due to the massive volume of incoming data, the workload regularly faces significant latency. To optimize performance, a solutions architect must build a scalable and serverless system. Which actions should the solutions architect do in combination? (Select two.) A. Use Amazon Kinesis Data Firehose to ingest the data. B. Use AWS Lambda with AWS Step Functions to process the data. C. Use AWS Database Migration Service (AWS DMS) to ingest the data. D. Use Amazon EC2 instances in an Auto Scaling group to process the data. E. Use AWS Fargate with Amazon Elastic Container Service (Amazon ECS) to process the data.

The solution is A and E. There are two ingestion options and three processing options. Since the requirement is near-real-time, we choose Kinesis Data Firehose for ingestion: A (first step). For processing we are left with B, D, and E. Lambda can run for at most 15 minutes and the job takes 30 minutes, so Lambda is out: https://aws.amazon.com/lambda/faqs/#:~:text=AWS%20Lambda%20functions%20can%20be,1%20second%20and%2015%20minutes. That leaves D and E; both would work, but the question specifies serverless, hence E (second step): https://aws.amazon.com/fargate/?whats-new-cards.sort-by=item.additionalFields.postDateTime&whats-new-cards.sort-order=desc&fargate-blogs.sort-by=item.additionalFields.createdDate&fargate-blogs.sort-order=desc So A and E is the solution.
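For the ingestion half of the answer, a minimal sketch (the delivery stream name and record shape are placeholders): producers push records into Kinesis Data Firehose, which buffers and delivers them near-real-time to the destination the Fargate job reads from:

import json
import boto3

firehose = boto3.client("firehose")

firehose.put_record(
    DeliveryStreamName="streaming-ingest",        # placeholder delivery stream
    Record={"Data": (json.dumps({"sensor": "s-9", "value": 42}) + "\n").encode()},
)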

