Practice Exam #1


A Solutions Architect needs to set up a relational database and come up with a disaster recovery plan to mitigate multi-region failure. The solution requires a Recovery Point Objective (RPO) of 1 second and a Recovery Time Objective (RTO) of less than 1 minute. Which of the following AWS services can fulfill this requirement? A) Amazon RDS for PostgreSQL with cross-region read replicas B) AWS Global Accelerator C) Amazon DynamoDB global tables D) Amazon Aurora Global Database

D) Amazon Aurora Global Database

A tech company that you are working for has undertaken a Total Cost Of Ownership (TCO) analysis evaluating the use of Amazon S3 versus acquiring more storage hardware. The result was that all 1200 employees would be granted access to use Amazon S3 for the storage of their personal documents. Which of the following will you need to consider so you can set up a solution that incorporates a single sign-on feature from your corporate AD or LDAP directory and also restricts access for each individual user to a designated user folder in an S3 bucket? (Select TWO.) A) Use 3rd party Single Sign-On solutions such as Atlassian Crowd, OKTA, OneLogin and many others. B) Set up a matching IAM user for each of the 1200 users in your corporate directory that needs access to a folder in the S3 bucket. C) Map each individual user to a designated user folder in S3 using Amazon WorkDocs to access their personal documents. D) Configure an IAM role and an IAM Policy to access the bucket. E) Set up a Federation proxy or an Identity provider, and use AWS Security Token Service to generate temporary tokens.

D) Configure an IAM role and an IAM Policy to access the bucket. E) Set up a Federation proxy or an Identity provider, and use AWS Security Token Service to generate temporary tokens.
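The per-user folder restriction is typically enforced with an IAM policy that uses policy variables. A minimal sketch, assuming a hypothetical bucket named corp-docs; for federated users the variable would usually be a SAML or session attribute (e.g. ${saml:sub}) rather than ${aws:username}:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListOnlyOwnFolder",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::corp-docs",
      "Condition": {"StringLike": {"s3:prefix": ["home/${aws:username}/*"]}}
    },
    {
      "Sid": "ReadWriteOwnFolder",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::corp-docs/home/${aws:username}/*"
    }
  ]
}
```

The federation proxy attaches a policy like this to the temporary credentials it requests from AWS STS, so each user's session can only reach their own folder.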

A Solutions Architect is hosting a website in an Amazon S3 bucket named tutorialsdojo. The users load the website using the following URL: http://tutorialsdojo.s3-website-us-east-1.amazonaws.com and there is a new requirement to add JavaScript to the webpages in order to make authenticated HTTP GET requests against the same bucket by using the Amazon S3 API endpoint (tutorialsdojo.s3.amazonaws.com). Upon testing, you noticed that the web browser blocks JavaScript from allowing those requests. Which of the following options is the MOST suitable solution that you should implement for this scenario? A) Enable Cross-Region Replication (CRR). B) Enable cross-account access. C) Enable Cross-Zone Load Balancing. D) Enable Cross-origin resource sharing (CORS) configuration in the bucket.

D) Enable Cross-origin resource sharing (CORS) configuration in the bucket.
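A CORS configuration on the tutorialsdojo bucket would allow the website origin to make those GET requests against the API endpoint. A hedged sketch (the allowed headers and max age are illustrative values):

```json
[
  {
    "AllowedOrigins": ["http://tutorialsdojo.s3-website-us-east-1.amazonaws.com"],
    "AllowedMethods": ["GET"],
    "AllowedHeaders": ["Authorization"],
    "ExposeHeaders": [],
    "MaxAgeSeconds": 3000
  }
]
```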

A popular social network is hosted in AWS and is using a DynamoDB table as its database. There is a requirement to implement a 'follow' feature where users can subscribe to certain updates made by a particular user and be notified via email. Which of the following is the most suitable solution that you should implement to meet the requirement? A) Using the Kinesis Client Library (KCL), write an application that leverages the DynamoDB Streams Kinesis Adapter to fetch data from the DynamoDB Streams endpoint. When there are updates made by a particular user, notify the subscribers via email using SNS. B) Set up a DAX cluster to access the source DynamoDB table. Create a new DynamoDB trigger and a Lambda function. For every update made in the user data, the trigger will send data to the Lambda function which will then notify the subscribers via email using SNS. C) Create a Lambda function that uses the DynamoDB Streams Kinesis Adapter to fetch data from the DynamoDB Streams endpoint. Set up an SNS topic that will notify the subscribers via email when there is an update made by a particular user. D) Enable DynamoDB Streams and create an AWS Lambda trigger, as well as the IAM role which contains all of the permissions that the Lambda function will need at runtime. The data from the stream record will be processed by the Lambda function, which will then publish a message to an SNS topic that will notify the subscribers via email.

D) Enable DynamoDB Streams and create an AWS Lambda trigger, as well as the IAM role which contains all of the permissions that the Lambda function will need at runtime. The data from the stream record will be processed by the Lambda function, which will then publish a message to an SNS topic that will notify the subscribers via email.
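A minimal sketch of such a Lambda handler, assuming a hypothetical topic ARN and a table whose partition key is user_id. The SNS client is injected so the logic can be exercised without AWS; the real handler would build it with boto3:

```python
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:user-updates"  # hypothetical

def handle_stream(event, publish):
    """Publish one SNS message per MODIFY record in the stream batch."""
    published = 0
    for record in event.get("Records", []):
        if record.get("eventName") != "MODIFY":
            continue  # only notify on updates made by a user
        user_id = record["dynamodb"]["Keys"]["user_id"]["S"]
        publish(TopicArn=TOPIC_ARN,
                Subject=f"Update from {user_id}",
                Message=f"User {user_id} posted an update.")
        published += 1
    return published

def lambda_handler(event, context):
    # boto3 is available in the Lambda runtime
    import boto3
    sns = boto3.client("sns")
    return handle_stream(event, sns.publish)
```

Email subscribers on the SNS topic then receive the notification; SNS handles the fan-out.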

Question 28: A company has 3 DevOps engineers that are handling its software development and infrastructure management processes. One of the engineers accidentally deleted a file hosted in Amazon S3 which has caused disruption of service. What can the DevOps engineers do to prevent this from happening again? A) Set up a signed URL for all users. B) Create an IAM bucket policy that disables delete operation. C) Use S3 Infrequently Accessed storage to store the data. D) Enable S3 Versioning and Multi-Factor Authentication Delete on the bucket.

D) Enable S3 Versioning and Multi-Factor Authentication Delete on the bucket.
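The versioning configuration that enables both features is the payload of an S3 PutBucketVersioning call, sketched below. Note that actually enabling MFA Delete additionally requires the bucket owner's (root user's) MFA device serial and a current code to be supplied with the request:

```json
{
  "Status": "Enabled",
  "MFADelete": "Enabled"
}
```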

Question 22: A cryptocurrency trading platform is using an API built in AWS Lambda and API Gateway. Due to the recent news and rumors about the upcoming price surge of Bitcoin, Ethereum and other cryptocurrencies, it is expected that the trading platform would have a significant increase in site visitors and new users in the coming days ahead. In this scenario, how can you protect the backend systems of the platform from traffic spikes? A) Move the Lambda function in a VPC. B) Switch from using AWS Lambda and API Gateway to a more scalable and highly available architecture using EC2 instances, ELB, and Auto Scaling. C) Use CloudFront in front of the API Gateway to act as a cache. D) Enable throttling limits and result caching in API Gateway.

D) Enable throttling limits and result caching in API Gateway.
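Throttling and result caching are both stage-level settings in API Gateway. An illustrative method-settings fragment (the limits and TTL are example values, not recommendations):

```json
{
  "throttlingRateLimit": 1000.0,
  "throttlingBurstLimit": 2000,
  "cachingEnabled": true,
  "cacheTtlInSeconds": 300
}
```

Requests above the throttle limits receive 429 Too Many Requests, and cached GET responses never reach the backend Lambda at all, which is what shields it from traffic spikes.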

Question 27: A company has a hybrid cloud architecture that connects their on-premises data center and cloud infrastructure in AWS. They require a durable storage backup for their corporate documents stored on-premises and a local cache that provides low-latency access to their recently accessed data to reduce data egress charges. The documents must be stored to and retrieved from AWS via the Server Message Block (SMB) protocol. These files must be accessible within minutes for six months and archived for another decade to meet data compliance requirements. Which of the following is the best and most cost-effective approach to implement in this scenario? A) Establish a Direct Connect connection to integrate your on-premises network with your VPC. Upload the documents to Amazon EBS volumes and use a lifecycle policy to automatically move the EBS snapshots to an S3 bucket, and then later to Glacier for archival. B) Launch a new tape gateway that connects to your on-premises data center using AWS Storage Gateway. Upload the documents to the tape gateway and set up a lifecycle policy to move the data into Glacier for archival. C) Use AWS Snowmobile to migrate all of the files from the on-premises network. Upload the documents to an S3 bucket and set up a lifecycle policy to move the data into Glacier for archival. D) Launch a new file gateway that connects to your on-premises data center using AWS Storage Gateway. Upload the documents to the file gateway and set up a lifecycle policy to move the data into Glacier for data archival.

D) Launch a new file gateway that connects to your on-premises data center using AWS Storage Gateway. Upload the documents to the file gateway and set up a lifecycle policy to move the data into Glacier for data archival.
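The file gateway stores the documents as objects in an S3 bucket, so the retention schedule becomes a lifecycle rule on that bucket. A sketch, approximating six months as 180 days and the decade of archival as roughly 3650 more days:

```json
{
  "Rules": [
    {
      "ID": "archive-corporate-docs",
      "Filter": {"Prefix": ""},
      "Status": "Enabled",
      "Transitions": [{"Days": 180, "StorageClass": "GLACIER"}],
      "Expiration": {"Days": 3830}
    }
  ]
}
```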

Question 29: In a government agency that you are working for, you have been assigned to store confidential tax documents in the AWS cloud. However, there is a concern from a security perspective about what can be put on AWS. Which features in AWS can ensure data security for your confidential documents? (Select TWO.) A) Public Data Set Volume Encryption B) EBS On-Premises Data Encryption C) S3 On-Premises Data Encryption D) S3 Client-Side Encryption E) S3 Server-Side Encryption

D) S3 Client-Side Encryption E) S3 Server-Side Encryption
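Server-side encryption can be made the bucket default with a configuration like the one below (the KMS key ARN is hypothetical; SSE-S3 would use "AES256" instead of "aws:kms"). Client-side encryption, by contrast, means the agency encrypts the documents before uploading them:

```json
{
  "Rules": [
    {
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "aws:kms",
        "KMSMasterKeyID": "arn:aws:kms:us-east-1:123456789012:key/EXAMPLE-KEY-ID"
      }
    }
  ]
}
```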

A company plans to host a web application in an Auto Scaling group of Amazon EC2 instances. The application will be used globally by users to upload and store several types of files. Based on user trends, files that are older than 2 years must be stored in a different storage class. The Solutions Architect of the company needs to create a cost-effective and scalable solution to store the old files yet still provide durability and high availability. Which of the following approaches can be used to fulfill this requirement? (Select TWO.) A) Use Amazon EBS volumes to store the files. Configure the Amazon Data Lifecycle Manager (DLM) to schedule snapshots of the volumes after 2 years B) Use Amazon EFS and create a lifecycle policy that will move the objects to Amazon EFS-IA after 2 years C) Use a RAID 0 storage configuration that stripes multiple Amazon EBS volumes together to store the files. Configure the Amazon DLM to schedule snapshots of the volumes after 2 years D) Use Amazon S3 and create a lifecycle policy that will move the objects to Amazon S3 Standard-IA after 2 years E) Use Amazon S3 and create a lifecycle policy that will move the objects to Amazon S3 Glacier after 2 years

D) Use Amazon S3 and create a lifecycle policy that will move the objects to Amazon S3 Standard-IA after 2 years E) Use Amazon S3 and create a lifecycle policy that will move the objects to Amazon S3 Glacier after 2 years

A company plans to launch an Amazon EC2 instance in a private subnet for its internal corporate web portal. For security purposes, the EC2 instance must send data to Amazon DynamoDB and Amazon S3 via private endpoints that don't pass through the public Internet. Which of the following can meet the above requirements? A) Use AWS Direct Connect to route all access to S3 and DynamoDB via private endpoints. B) Use AWS Transit Gateway to route all access to S3 and DynamoDB via private endpoints. C) Use AWS VPN CloudHub to route all access to S3 and DynamoDB via private endpoints. D) Use VPC endpoints to route all access to S3 and DynamoDB via private endpoints.

D) Use VPC endpoints to route all access to S3 and DynamoDB via private endpoints.
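Both S3 and DynamoDB support Gateway-type VPC endpoints, which add a route to the private subnet's route table rather than placing an ENI in the subnet. A CloudFormation sketch with hypothetical VPC and route-table logical IDs:

```yaml
S3Endpoint:
  Type: AWS::EC2::VPCEndpoint
  Properties:
    VpcId: !Ref MyVpc
    ServiceName: !Sub com.amazonaws.${AWS::Region}.s3
    VpcEndpointType: Gateway
    RouteTableIds:
      - !Ref PrivateRouteTable

DynamoDBEndpoint:
  Type: AWS::EC2::VPCEndpoint
  Properties:
    VpcId: !Ref MyVpc
    ServiceName: !Sub com.amazonaws.${AWS::Region}.dynamodb
    VpcEndpointType: Gateway
    RouteTableIds:
      - !Ref PrivateRouteTable
```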

Question 30: A media company has an Amazon ECS Cluster, which uses the Fargate launch type, to host its news website. The database credentials should be supplied using environment variables, to comply with strict security compliance. As the Solutions Architect, you have to ensure that the credentials are secure and that they cannot be viewed in plaintext on the cluster itself. Which of the following is the most suitable solution in this scenario that you can implement with minimal effort? A) In the ECS task definition file of the ECS Cluster, store the database credentials using Docker Secrets to centrally manage this sensitive data and securely transmit it to only those containers that need access to it. Secrets are encrypted during transit and at rest. A given secret is only accessible to those services which have been granted explicit access to it via IAM Role, and only while those service tasks are running. B) Store the database credentials in the ECS task definition file of the ECS Cluster and encrypt it with KMS. Store the task definition JSON file in a private S3 bucket and ensure that HTTPS is enabled on the bucket to encrypt the data in-flight. Create an IAM role for the ECS task definition script that allows access to the specific S3 bucket and then pass the --cli-input-json parameter when calling the ECS register-task-definition. Reference the task definition JSON file in the S3 bucket which contains the database credentials. C) Use the AWS Secrets Manager to store the database credentials and then encrypt them using AWS KMS. Create a resource-based policy for your Amazon ECS task execution role (taskRoleArn) and reference it with your task definition which allows access to both KMS and AWS Secrets Manager. Within your container definition, specify secrets with the name of the environment variable to set in the container and the full ARN of the Secrets Manager secret which contains the sensitive data, to present to the container.
D) Use the AWS Systems Manager Parameter Store to keep the database credentials and then encrypt them using AWS KMS. Create an IAM Role for your Amazon ECS task execution role (taskRoleArn) and reference it with your task definition, which allows access to both KMS and the Parameter Store. Within your container definition, specify secrets with the name of the environment variable to set in the container and the full ARN of the Systems Manager Parameter Store parameter containing the sensitive data to present to the container.

D) Use the AWS Systems Manager Parameter Store to keep the database credentials and then encrypt them using AWS KMS. Create an IAM Role for your Amazon ECS task execution role (taskRoleArn) and reference it with your task definition, which allows access to both KMS and the Parameter Store. Within your container definition, specify secrets with the name of the environment variable to set in the container and the full ARN of the Systems Manager Parameter Store parameter containing the sensitive data to present to the container.
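In the task definition, this is expressed with the secrets field of the container definition (strictly, the permission to read the parameter belongs on the task execution role, which the question labels taskRoleArn). A sketch with hypothetical account, image, and parameter names:

```json
{
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "news-web",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/news:latest",
      "secrets": [
        {
          "name": "DB_PASSWORD",
          "valueFrom": "arn:aws:ssm:us-east-1:123456789012:parameter/prod/db/password"
        }
      ]
    }
  ]
}
```

ECS resolves the parameter at task launch and injects it as the DB_PASSWORD environment variable, so the plaintext never appears in the task definition itself.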

Question 42: A tech company has a CRM application hosted on an Auto Scaling group of On-Demand EC2 instances. The application is extensively used during office hours from 9 in the morning till 5 in the afternoon. Their users are complaining that the performance of the application is slow during the start of the day but then works normally after a couple of hours. Which of the following can be done to ensure that the application works properly at the beginning of the day? A. Configure a Dynamic scaling policy for the Auto Scaling group to launch new instances based on the Memory utilization. B. Configure a Dynamic scaling policy for the Auto Scaling group to launch new instances based on the CPU utilization. C. Set up an Application Load Balancer (ALB) to your architecture to ensure that the traffic is properly distributed on the instances. D. Configure a Scheduled scaling policy for the Auto Scaling group to launch new instances before the start of the day.

D. Configure a Scheduled scaling policy for the Auto Scaling group to launch new instances before the start of the day.
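A scheduled scaling action for this looks roughly like the input below (the group name and sizes are hypothetical; the Recurrence cron expression is evaluated in UTC by default, so the hour must be offset to fire shortly before 9 a.m. local time):

```json
{
  "AutoScalingGroupName": "crm-asg",
  "ScheduledActionName": "scale-out-before-office-hours",
  "Recurrence": "30 8 * * MON-FRI",
  "MinSize": 4,
  "DesiredCapacity": 6
}
```

Because the load is predictable, pre-warming the group this way avoids the lag of dynamic policies, which only react after utilization has already climbed.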

Question 55: A travel photo sharing website is using Amazon S3 to serve high-quality photos to visitors of your website. After a few days, you found out that there are other travel websites linking and using your photos. This resulted in financial losses for your business. What is the MOST effective method to mitigate this issue? A. Use CloudFront distributions for your photos. B. Store and privately serve the high-quality photos on Amazon WorkDocs instead. C. Block the IP addresses of the offending websites using NACL. D. Configure your S3 bucket to remove public read access and use pre-signed URLs with expiry dates.

D. Configure your S3 bucket to remove public read access and use pre-signed URLs with expiry dates.

Question 32: An online shopping platform is hosted on an Auto Scaling group of Spot EC2 instances and uses Amazon Aurora PostgreSQL as its database. There is a requirement to optimize your database workloads in your cluster where you have to direct the write operations of the production traffic to your high-capacity instances and point the reporting queries sent by your internal staff to the low-capacity instances. Which is the most suitable configuration for your application as well as your Aurora database cluster to achieve this requirement? A. Do nothing since by default, Aurora will automatically direct the production traffic to your high-capacity instances and the reporting queries to your low-capacity instances. B. Configure your application to use the reader endpoint for both production traffic and reporting queries, which will enable your Aurora database to automatically perform load-balancing among all the Aurora Replicas. C. In your application, use the instance endpoint of your Aurora database to handle the incoming production traffic and use the cluster endpoint to handle reporting queries. D. Create a custom endpoint in Aurora based on the specified criteria for the production traffic and another custom endpoint to handle the reporting queries.

D. Create a custom endpoint in Aurora based on the specified criteria for the production traffic and another custom endpoint to handle the reporting queries.
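A custom endpoint is created per instance group; a sketch of the input for the reporting endpoint, with hypothetical cluster and instance identifiers (a second custom endpoint, or the cluster writer endpoint, would carry the production writes on the high-capacity instances):

```json
{
  "DBClusterIdentifier": "shop-aurora-cluster",
  "DBClusterEndpointIdentifier": "reporting",
  "EndpointType": "READER",
  "StaticMembers": [
    "shop-aurora-small-replica-1",
    "shop-aurora-small-replica-2"
  ]
}
```

The internal staff then point their reporting tools at the DNS name of the "reporting" endpoint, and Aurora load-balances only across the listed low-capacity replicas.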

Question 60: A Forex trading platform, which frequently processes and stores global financial data every minute, is hosted in your on-premises data center and uses an Oracle database. Due to a recent cooling problem in their data center, the company urgently needs to migrate their infrastructure to AWS to improve the performance of their applications. As the Solutions Architect, you are responsible for ensuring that the database is properly migrated and that it remains available in case of a database server failure in the future. Which of the following is the most suitable solution to meet the requirement? A. Launch an Oracle Real Application Clusters (RAC) in RDS. B. Convert the database schema using the AWS Schema Conversion Tool and AWS Database Migration Service. Migrate the Oracle database to a non-cluster Amazon Aurora with a single instance. C. Launch an Oracle database instance in RDS with Recovery Manager (RMAN) enabled. D. Create an Oracle database in RDS with Multi-AZ deployments.

D. Create an Oracle database in RDS with Multi-AZ deployments.

Question 48: There was an incident in your production environment where the user data stored in the S3 bucket has been accidentally deleted by one of the Junior DevOps Engineers. The issue was escalated to your manager and after a few days, you were instructed to improve the security and protection of your AWS resources. What combination of the following options will protect the S3 objects in your bucket from both accidental deletion and overwriting? (Select TWO.) A. Enable Amazon S3 Intelligent-Tiering B. Disallow S3 Delete using an IAM bucket policy C. Provide access to S3 data strictly through pre-signed URL only D. Enable Versioning E. Enable Multi-Factor Authentication Delete

D. Enable Versioning E. Enable Multi-Factor Authentication Delete

Question 52: A financial application is composed of an Auto Scaling group of EC2 instances, an Application Load Balancer, and a MySQL RDS instance in a Multi-AZ Deployments configuration. To protect the confidential data of your customers, you have to ensure that your RDS database can only be accessed using the profile credentials specific to your EC2 instances via an authentication token. As the Solutions Architect of the company, which of the following should you do to meet the above requirement? A. Configure SSL in your application to encrypt the database connection to RDS. B. Use a combination of IAM and STS to restrict access to your RDS instance via a temporary token. C. Create an IAM Role and assign it to your EC2 instances which will grant exclusive access to your RDS instance. D. Enable the IAM DB Authentication.

D. Enable the IAM DB Authentication.
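With IAM DB Authentication enabled on the RDS instance, the EC2 instance profile's role needs a policy granting rds-db:connect; the application then exchanges its role credentials for a short-lived authentication token instead of a password. A sketch with a hypothetical DB resource ID and database user:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "rds-db:connect",
      "Resource": "arn:aws:rds-db:us-east-1:123456789012:dbuser:db-ABCDEFGHIJKL0123/app_user"
    }
  ]
}
```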

Question 41: There have been a lot of outages in the Availability Zone of your RDS database instance, to the point that you have lost access to the database. What could you do to prevent losing access to your database in case this event happens again? A. Increase the database instance size B. Create a read replica C. Make a snapshot of the database D. Enable Multi-AZ failover

D. Enable Multi-AZ failover

Question 53: A pharmaceutical company has resources hosted on both their on-premises network and in the AWS cloud. They want all of their Software Architects to access resources on both environments using their on-premises credentials, which are stored in Active Directory. In this scenario, which of the following can be used to fulfill this requirement? A. Use Amazon VPC B. Set up SAML 2.0-Based Federation by using a Web Identity Federation. C. Use IAM users D. Set up SAML 2.0-Based Federation by using a Microsoft Active Directory Federation Service (AD FS).

D. Set up SAML 2.0-Based Federation by using a Microsoft Active Directory Federation Service (AD FS).

Question 36: A Solutions Architect is working for a company which has multiple VPCs in various AWS regions. The Architect is assigned to set up a logging system which will track all of the changes made to their AWS resources in all regions, including the configurations made in IAM, CloudFront, AWS WAF, and Route 53. In order to pass the compliance requirements, the solution must ensure the security, integrity, and durability of the log data. It should also provide an event history of all API calls made in AWS Management Console and AWS CLI. Which of the following solutions is the best fit for this scenario? A. Set up a new CloudWatch trail in a new S3 bucket using the CloudTrail console and also pass the --is-multi-region-trail parameter then encrypt log files using KMS encryption. Apply Multi Factor Authentication (MFA) Delete on the S3 bucket and ensure that only authorized users can access the logs by configuring the bucket policies. B. Set up a new CloudWatch trail in a new S3 bucket using the AWS CLI and also pass both the --is-multi-region-trail and --include-global-service-events parameters then encrypt log files using KMS encryption. Apply Multi Factor Authentication (MFA) Delete on the S3 bucket and ensure that only authorized users can access the logs by configuring the bucket policies. C. Set up a new CloudTrail trail in a new S3 bucket using the AWS CLI and also pass both the --is-multi-region-trail and --no-include-global-service-events parameters then encrypt log files using KMS encryption. Apply Multi Factor Authentication (MFA) Delete on the S3 bucket and ensure that only authorized users can access the logs by configuring the bucket policies. D. Set up a new CloudTrail trail in a new S3 bucket using the AWS CLI and also pass both the --is-multi-region-trail and --include-global-service-events parameters then encrypt log files using KMS encryption. Apply Multi Factor Authentication (MFA) Delete on the S3 bucket and ensure that only authorized users can access the logs by configuring the bucket policies.

D. Set up a new CloudTrail trail in a new S3 bucket using the AWS CLI and also pass both the --is-multi-region-trail and --include-global-service-events parameters then encrypt log files using KMS encryption. Apply Multi Factor Authentication (MFA) Delete on the S3 bucket and ensure that only authorized users can access the logs by configuring the bucket policies.

Question 37: A government entity is conducting a population and housing census in the city. The information for each household uploaded on their online portal is stored in encrypted files in Amazon S3. The government assigned its Solutions Architect to set compliance policies that verify sensitive data in a manner that meets their compliance standards. They should also be alerted if there are compromised files detected containing personally identifiable information (PII), protected health information (PHI), or intellectual property (IP). Which of the following should the Architect implement to satisfy this requirement? A. Set up and configure Amazon Inspector to send out alert notifications whenever a security violation is detected on their Amazon S3 data. B. Set up and configure Amazon GuardDuty to monitor malicious activity on their Amazon S3 data. C. Set up and configure Amazon Rekognition to monitor and recognize patterns on their Amazon S3 data. D. Set up and configure Amazon Macie to monitor and detect usage patterns on their Amazon S3 data.

D. Set up and configure Amazon Macie to monitor and detect usage patterns on their Amazon S3 data.

Question 24: A company hosted an e-commerce website on an Auto Scaling group of EC2 instances behind an Application Load Balancer. The Solutions Architect noticed that the website is receiving a large number of illegitimate external requests from multiple systems with IP addresses that constantly change. To resolve the performance issues, the Solutions Architect must implement a solution that would block the illegitimate requests with minimal impact on legitimate traffic. Which of the following options fulfills this requirement? A) Create a rate-based rule in AWS WAF and associate the web ACL to an Application Load Balancer. B) Create a custom network ACL and associate it with the subnet of the Application Load Balancer to block the offending requests. C) Create a regular rule in AWS WAF and associate the web ACL to an Application Load Balancer. D) Create a custom rule in the security group of the Application Load Balancer to block the offending requests.

A) Create a rate-based rule in AWS WAF and associate the web ACL to an Application Load Balancer.
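In AWS WAF (v2), a rate-based rule counts requests per originating IP over a rolling 5-minute window and blocks IPs that exceed the limit, which is why it works even when the offending addresses keep changing. An illustrative rule (the name and limit are example values):

```json
{
  "Name": "throttle-suspicious-ips",
  "Priority": 1,
  "Statement": {
    "RateBasedStatement": {
      "Limit": 1000,
      "AggregateKeyType": "IP"
    }
  },
  "Action": {"Block": {}},
  "VisibilityConfig": {
    "SampledRequestsEnabled": true,
    "CloudWatchMetricsEnabled": true,
    "MetricName": "throttle-suspicious-ips"
  }
}
```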

An IT consultant is working for a large financial company. The role of the consultant is to help the development team build a highly available web application using stateless web servers. In this scenario, which AWS services are suitable for storing session state data? (Select TWO.) A) DynamoDB B) RDS C) ElastiCache D) Glacier E) Redshift Spectrum

A) DynamoDB C) ElastiCache

Question 3: A retail website has intermittent, sporadic, and unpredictable transactional workloads throughout the day. The website is currently hosted on-premises and is slated to be migrated to AWS. A new relational database is needed that autoscales capacity to meet the needs of the application's peak load and scales back down when the surge of activity is over. Which of the following options is the MOST cost-effective and suitable database setup in this scenario? A) Launch an Amazon Aurora Serverless DB cluster then set the minimum and maximum capacity for the cluster. B) Launch an Amazon Aurora Provisioned DB cluster with burstable performance DB instance class types. C) Launch an Amazon Redshift data warehouse cluster with Concurrency Scaling. D) Launch a DynamoDB Global table with Auto Scaling enabled.

A) Launch an Amazon Aurora Serverless DB cluster then set the minimum and maximum capacity for the cluster.
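With Aurora Serverless (v1), the capacity bounds are set via the cluster's scaling configuration. An illustrative fragment (the capacity units and pause timeout are example values; AutoPause lets the cluster scale to zero when idle):

```json
{
  "MinCapacity": 2,
  "MaxCapacity": 32,
  "AutoPause": true,
  "SecondsUntilAutoPause": 600
}
```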

Question 25: An application is hosted in an AWS Fargate cluster that runs a batch job whenever an object is loaded on an Amazon S3 bucket. The minimum number of ECS Tasks is initially set to 1 to save on costs, and it will only increase the task count based on the new objects uploaded on the S3 bucket. Once processing is done, the bucket becomes empty and the ECS Task count should be back to 1. Which is the most suitable option to implement with the LEAST amount of effort? A) Set up a CloudWatch Event rule to detect S3 object PUT operations and set the target to the ECS cluster with the increased number of tasks. Create another rule to detect S3 DELETE operations and set the target to the ECS Cluster with 1 as the Task count. B) Set up an alarm in CloudWatch to monitor CloudTrail since the S3 object-level operations are recorded on CloudTrail. Create two Lambda functions for increasing/decreasing the ECS task count. Set these as respective targets for the CloudWatch Alarm depending on the S3 event. C) Set up an alarm in CloudWatch to monitor CloudTrail since these S3 object-level operations are recorded on CloudTrail. Set two alarm actions to update ECS task count to scale-out/scale-in depending on the S3 event. D) Set up a CloudWatch Event rule to detect S3 object PUT operations and set the target to a Lambda function that will run Amazon ECS API command to increase the number of tasks on ECS. Create another rule to detect S3 DELETE operations and run the Lambda function to reduce the number of ECS tasks.

A) Set up a CloudWatch Event rule to detect S3 object PUT operations and set the target to the ECS cluster with the increased number of tasks. Create another rule to detect S3 DELETE operations and set the target to the ECS Cluster with 1 as the Task count.
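The rule matches S3 object-level API calls, which are delivered through CloudTrail, so a trail logging S3 data events must exist (newer setups can instead use the native S3 "Object Created" EventBridge events). A sketch of the event pattern for the PUT rule:

```json
{
  "source": ["aws.s3"],
  "detail-type": ["AWS API Call via CloudTrail"],
  "detail": {
    "eventSource": ["s3.amazonaws.com"],
    "eventName": ["PutObject"]
  }
}
```

The DELETE rule is identical with "DeleteObject" as the event name, and each rule's ECS target carries the desired task count.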

Question 2: A Docker application, which is running on an Amazon ECS cluster behind a load balancer, is heavily using DynamoDB. You are instructed to improve the database performance by distributing the workload evenly and using the provisioned throughput efficiently. Which of the following would you consider to implement for your DynamoDB table? A) Use partition keys with high-cardinality attributes, which have a large number of distinct values for each item. B) Avoid using a composite primary key, which is composed of a partition key and a sort key. C) Use partition keys with low-cardinality attributes, which have a small number of distinct values for each item. D) Reduce the number of partition keys in the DynamoDB table.

A) Use partition keys with high-cardinality attributes, which have a large number of distinct values for each item.

A global IT company with offices around the world has multiple AWS accounts. To improve efficiency and drive costs down, the Chief Information Officer (CIO) wants to set up a solution that centrally manages their AWS resources. This will allow them to procure AWS resources centrally and share resources such as AWS Transit Gateways, AWS License Manager configurations, or Amazon Route 53 Resolver rules across their various accounts. As the Solutions Architect, which combination of options should you implement in this scenario? (Select TWO.) A) Use the AWS Resource Access Manager (RAM) service to easily and securely share your resources with your AWS accounts. B) Use AWS Control Tower to easily and securely share your resources with your AWS accounts. C) Consolidate all of the company's accounts using AWS Organizations. D) Consolidate all of the company's accounts using AWS ParallelCluster. E) Use the AWS Identity and Access Management service to set up cross-account access that will easily and securely share your resources with your AWS accounts.

A) Use the AWS Resource Access Manager (RAM) service to easily and securely share your resources with your AWS accounts. C) Consolidate all of the company's accounts using AWS Organizations.

A newly hired Solutions Architect is assigned to manage a set of CloudFormation templates that are used in the company's cloud architecture in AWS. The Architect accessed the templates and tried to analyze the configured IAM policy for an S3 bucket. { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:Get*", "s3:List*" ], "Resource": "*" }, { "Effect": "Allow", "Action": "s3:PutObject", "Resource": "arn:aws:s3:::boracay/*" } ] } What does the above IAM policy allow? (Select THREE.) A. An IAM user with this IAM policy is allowed to read objects from all S3 buckets owned by the account. B. An IAM user with this IAM policy is allowed to write objects into the boracay S3 bucket. C. An IAM user with this IAM policy is allowed to read and delete objects from the boracay S3 bucket. D. An IAM user with this IAM policy is allowed to read objects from the boracay S3 bucket. E. An IAM user with this IAM policy is allowed to change access rights for the boracay S3 bucket. F. An IAM user with this IAM policy is allowed to read objects in the boracay S3 bucket but not allowed to list the objects in the bucket.

A. An IAM user with this IAM policy is allowed to read objects from all S3 buckets owned by the account. B. An IAM user with this IAM policy is allowed to write objects into the boracay S3 bucket. D. An IAM user with this IAM policy is allowed to read objects from the boracay S3 bucket.

Question 40: An organization needs a persistent block storage volume that will be used for mission-critical workloads. The backup data will be stored in an object storage service and after 30 days, the data will be stored in a data archiving storage service. What should you do to meet the above requirement? A. Attach an EBS volume in your EC2 instance. Use Amazon S3 to store your backup data and configure a lifecycle policy to transition your objects to Amazon S3 Glacier. B. Attach an EBS volume in your EC2 instance. Use Amazon S3 to store your backup data and configure a lifecycle policy to transition your objects to Amazon S3 One Zone-IA. C. Attach an instance store volume in your EC2 instance. Use Amazon S3 to store your backup data and configure a lifecycle policy to transition your objects to Amazon S3 One Zone-IA. D. Attach an instance store volume in your existing EC2 instance. Use Amazon S3 to store your backup data and configure a lifecycle policy to transition your objects to Amazon S3 Glacier.

A. Attach an EBS volume in your EC2 instance. Use Amazon S3 to store your backup data and configure a lifecycle policy to transition your objects to Amazon S3 Glacier.
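
The lifecycle half of the answer can be sketched as a rule in the shape the S3 API expects, plus the age-based transition it implies. The rule ID and prefix here are hypothetical, not from the question:

```python
# Hypothetical lifecycle rule (shape follows the S3 API): after 30 days,
# backup objects transition from S3 Standard to S3 Glacier.
LIFECYCLE_RULE = {
    "Rules": [{
        "ID": "archive-backups",            # assumed rule name
        "Status": "Enabled",
        "Filter": {"Prefix": "backups/"},   # assumed key prefix
        "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
    }]
}

def storage_class_for_age(age_days):
    """Storage class the rule above would leave an object in."""
    transition = LIFECYCLE_RULE["Rules"][0]["Transitions"][0]
    return transition["StorageClass"] if age_days >= transition["Days"] else "STANDARD"

print(storage_class_for_age(10))   # STANDARD
print(storage_class_for_age(45))   # GLACIER
```

EBS (not instance store) is chosen because the question asks for persistent block storage that survives instance stops.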

Question 49: A company plans to build a data analytics application in AWS which will be deployed in an Auto Scaling group of On-Demand EC2 instances and a MongoDB database. It is expected that the database will have high-throughput workloads performing small, random I/O operations. As the Solutions Architect, you are required to properly set up and launch the required resources in AWS. Which of the following is the most suitable EBS type to use for your database? A. Provisioned IOPS SSD (io1) B. General Purpose SSD (gp2) C. Cold HDD (sc1) D. Throughput Optimized HDD (st1)

A. Provisioned IOPS SSD (io1)

A company has a web application that uses Internet Information Services (IIS) for Windows Server. A file share is used to store the application data on the network-attached storage of the company's on-premises data center. To achieve a highly available system, they plan to migrate the application and file share to AWS. Which of the following can be used to fulfill this requirement? A) Migrate the existing file share configuration to Amazon EBS. B) Migrate the existing file share configuration to Amazon FSx for Windows File Server. C) Migrate the existing file share configuration to AWS Storage Gateway. D) Migrate the existing file share configuration to Amazon EFS.

B) Migrate the existing file share configuration to Amazon FSx for Windows File Server.

Question 26: A company hosts multiple applications in their VPC. While monitoring the system, they noticed that multiple port scans are coming in from a specific IP address block that is trying to connect to several AWS resources inside their VPC. The internal security team has requested that all offending IP addresses be denied for the next 24 hours for security purposes. Which of the following is the best method to quickly and temporarily deny access from the specified IP addresses? A) Create a policy in IAM to deny access from the IP Address block. B) Modify the Network Access Control List associated with all public subnets in the VPC to deny access from the IP Address block. C) Add a rule in the Security Group of the EC2 instances to deny access from the IP Address block. D) Configure the firewall in the operating system of the EC2 instances to deny access from the IP address block.

B) Modify the Network Access Control List associated with all public subnets in the VPC to deny access from the IP Address block.

A popular social media website uses a CloudFront web distribution to serve their static contents to their millions of users around the globe. They are receiving a number of complaints recently that their users take a lot of time to log into their website. There are also occasions when their users are getting HTTP 504 errors. You are instructed by your manager to significantly reduce the user's login time to further optimize the system. Which of the following options should you use together to set up a cost-effective solution that can improve your application's performance? (Select TWO.) A) Use multiple and geographically disperse VPCs to various AWS regions then create a transit VPC to connect all of your resources. In order to handle the requests faster, set up Lambda functions in each region using the AWS Serverless Application Model (SAM) service. B) Set up an origin failover by creating an origin group with two origins. Specify one as the primary origin and the other as the second origin which CloudFront automatically switches to when the primary origin returns specific HTTP status code failure responses. C) Customize the content that the CloudFront web distribution delivers to your users using Lambda@Edge, which allows your Lambda functions to execute the authentication process in AWS locations closer to the users. D) Deploy your application to multiple AWS regions to accommodate your users around the world. Set up a Route 53 record with latency routing policy to route incoming traffic to the region that provides the best latency to the user. E) Configure your origin to add a Cache-Control max-age directive to your objects, and specify the longest practical value for max-age to increase the cache hit ratio of your CloudFront distribution.

B) Set up an origin failover by creating an origin group with two origins. Specify one as the primary origin and the other as the second origin which CloudFront automatically switches to when the primary origin returns specific HTTP status code failure responses. C) Customize the content that the CloudFront web distribution delivers to your users using Lambda@Edge, which allows your Lambda functions to execute the authentication process in AWS locations closer to the users.

Question 23: An online medical system hosted in AWS stores sensitive Personally Identifiable Information (PII) of the users in an Amazon S3 bucket. Both the master keys and the unencrypted data should never be sent to AWS to comply with the strict compliance and regulatory requirements of the company. Which S3 encryption technique should the Architect use? A) Use S3 server-side encryption with a KMS managed key. B) Use S3 client-side encryption with a client-side master key. C) Use S3 client-side encryption with a KMS-managed customer master key. D) Use S3 server-side encryption with customer provided key.

B) Use S3 client-side encryption with a client-side master key.

A company is using a combination of API Gateway and Lambda for the web services of the online web portal that is being accessed by hundreds of thousands of clients each day. They will be announcing a new revolutionary product and it is expected that the web portal will receive a massive number of visitors all around the globe. How can you protect the backend systems and applications from traffic spikes? A) Manually upgrade the EC2 instances being used by API Gateway B) Use throttling limits in API Gateway C) API Gateway will automatically scale and handle massive traffic spikes so you do not have to do anything. D) Deploy Multi-AZ in API Gateway with Read Replica

B) Use throttling limits in API Gateway
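
API Gateway throttling combines a steady request rate with a burst allowance, which behaves like a token bucket. A toy model (not the real implementation) of how excess requests get rejected with HTTP 429 instead of reaching the backend:

```python
class TokenBucket:
    """Simplified model of API Gateway rate/burst throttling:
    'rate' tokens refill per second, capped at 'burst' capacity."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens = burst
        self.last = 0.0

    def allow(self, now):
        # Refill tokens for the elapsed time, capped at burst capacity.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True     # request forwarded to the backend
        return False        # request rejected (HTTP 429 Too Many Requests)

bucket = TokenBucket(rate=10, burst=5)
# A spike of 7 simultaneous requests: the first 5 pass, the rest are throttled.
results = [bucket.allow(now=0.0) for _ in range(7)]
print(results.count(True))   # 5
```

This is why throttling protects the backend during a spike, while options A and D describe infrastructure that API Gateway does not expose at all.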

Question 33: A company is in the process of migrating their applications to AWS. One of their systems requires a database that can scale globally and handle frequent schema changes. The application should not have any downtime or performance issues whenever there is a schema change in the database. It should also provide a low latency response to high-traffic queries. Which is the most suitable database solution to use to achieve this requirement? A. An Amazon RDS instance in Multi-AZ Deployments configuration B. Amazon DynamoDB C. An Amazon Aurora database with Read Replicas D. Redshift

B. Amazon DynamoDB

Question 64: A popular mobile game uses CloudFront, Lambda, and DynamoDB for its backend services. The player data is persisted on a DynamoDB table and the static assets are distributed by CloudFront. However, there are a lot of complaints that saving and retrieving player information is taking a lot of time. To improve the game's performance, which AWS service can you use to reduce DynamoDB response times from milliseconds to microseconds? A. DynamoDB Auto Scaling B. Amazon DynamoDB Accelerator (DAX) C. Amazon ElastiCache D. AWS Device Farm

B. Amazon DynamoDB Accelerator (DAX)

Question 61: A multi-tiered application hosted in your on-premises data center is scheduled to be migrated to AWS. The application has a message broker service which uses industry standard messaging APIs and protocols that must be migrated as well, without rewriting the messaging code in your application. Which of the following is the most suitable service that you should use to move your messaging service to AWS? A. Amazon SWF B. Amazon MQ C. Amazon SQS D. Amazon SNS

B. Amazon MQ

Question 44: An application consists of multiple EC2 instances in private subnets in different availability zones. The application uses a single NAT Gateway for downloading software patches from the Internet to the instances. There is a requirement to protect the application from a single point of failure when the NAT Gateway encounters a failure or if its availability zone goes down. How should the Solutions Architect redesign the architecture to be more highly available and cost-effective? A. Create three NAT Gateways in each availability zone. Configure the route table in each private subnet to ensure that instances use the NAT Gateway in the same availability zone. B. Create a NAT Gateway in each availability zone. Configure the route table in each private subnet to ensure that instances use the NAT Gateway in the same availability zone. C. Create a NAT Gateway in each availability zone. Configure the route table in each public subnet to ensure that instances use the NAT Gateway in the same availability zone. D. Create two NAT Gateways in each availability zone. Configure the route table in each public subnet to ensure that instances use the NAT Gateway in the same availability zone.

B. Create a NAT Gateway in each availability zone. Configure the route table in each private subnet to ensure that instances use the NAT Gateway in the same availability zone.

Question 45: A car dealership website hosted in Amazon EC2 stores car listings in an Amazon Aurora database managed by Amazon RDS. Once a vehicle has been sold, its data must be removed from the current listings and forwarded to a distributed processing system. Which of the following options can satisfy the given requirement? A. Create an RDS event subscription and send the notifications to Amazon SNS. Configure the SNS topic to fan out the event notifications to multiple Amazon SQS queues. Process the data using Lambda functions. B. Create a native function or a stored procedure that invokes a Lambda function. Configure the Lambda function to send event notifications to an Amazon SQS queue for the processing system to consume. C. Create an RDS event subscription and send the notifications to Amazon SQS. Configure the SQS queues to fan out the event notifications to multiple Amazon SNS topics. Process the data using Lambda functions. D. Create an RDS event subscription and send the notifications to AWS Lambda. Configure the Lambda function to fan out the event notifications to multiple Amazon SQS queues to update the processing system.

B. Create a native function or a stored procedure that invokes a Lambda function. Configure the Lambda function to send event notifications to an Amazon SQS queue for the processing system to consume.

Question 54: A software development company is using serverless computing with AWS Lambda to build and run applications without having to set up or manage servers. They have a Lambda function that connects to a MongoDB Atlas, which is a popular Database as a Service (DBaaS) platform and also uses a third party API to fetch certain data for their application. One of the developers was instructed to create the environment variables for the MongoDB database hostname, username, and password as well as the API credentials that will be used by the Lambda function for DEV, SIT, UAT, and PROD environments. Considering that the Lambda function is storing sensitive database and API credentials, how can this information be secured to prevent other developers in the team, or anyone, from seeing these credentials in plain text? Select the best option that provides maximum security. A. Enable SSL encryption that leverages on AWS CloudHSM to store and encrypt the sensitive information. B. Create a new KMS key and use it to enable encryption helpers that leverage on AWS Key Management Service to store and encrypt the sensitive information. C. AWS Lambda does not provide encryption for the environment variables. Deploy your code to an EC2 instance instead. D. There is no need to do anything because, by default, AWS Lambda already encrypts the environment variables using the AWS Key Management Service.

B. Create a new KMS key and use it to enable encryption helpers that leverage on AWS Key Management Service to store and encrypt the sensitive information.

Question 34: A company is using Amazon S3 to store frequently accessed data. When an object is created or deleted, the S3 bucket will send an event notification to the Amazon SQS queue. A solutions architect needs to create a solution that will notify the development and operations team about the created or deleted objects. Which of the following would satisfy this requirement? A. Set up another Amazon SQS queue for the other team. Grant Amazon S3 permission to send a notification to the second SQS queue. B. Create an Amazon SNS topic and configure two Amazon SQS queues to subscribe to the topic. Grant Amazon S3 permission to send notifications to Amazon SNS and update the bucket to use the new SNS topic. C. Create a new Amazon SNS FIFO topic for the other team. Grant Amazon S3 permission to send the notification to the second SNS topic. D. Set up an Amazon SNS topic and configure two Amazon SQS queues to poll the SNS topic. Grant Amazon S3 permission to send notifications to Amazon SNS and update the bucket to use the new SNS topic.

B. Create an Amazon SNS topic and configure two Amazon SQS queues to subscribe to the topic. Grant Amazon S3 permission to send notifications to Amazon SNS and update the bucket to use the new SNS topic.
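
The fan-out pattern in the answer means one S3 event notification reaches both teams because SNS delivers a copy to every subscribed queue. A toy model of that delivery behavior (the real services are distributed systems; queue and message names here are made up):

```python
class SnsTopic:
    """Toy fan-out model: an SNS topic pushes each published message
    to every subscribed SQS queue, so each team gets its own copy."""
    def __init__(self):
        self.queues = []

    def subscribe(self, queue):
        self.queues.append(queue)

    def publish(self, message):
        for q in self.queues:
            q.append(message)   # SNS delivers a copy to each subscriber

dev_queue, ops_queue = [], []
topic = SnsTopic()
topic.subscribe(dev_queue)
topic.subscribe(ops_queue)

# S3 sends one event notification to the topic...
topic.publish("s3:ObjectCreated:Put example.jpg")
# ...and each team's queue receives its own independent copy.
```

Option A fails because S3 delivers a given event notification to a single destination, not to two SQS queues; SNS-to-SQS fan-out is the standard way to multiply one event into many consumers.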

Question 38: A company has a cloud architecture that is composed of Linux and Windows EC2 instances that process high volumes of financial data 24 hours a day, 7 days a week. To ensure high availability of the systems, the Solutions Architect needs to create a solution that allows them to monitor the memory and disk utilization metrics of all the instances. Which of the following is the most suitable monitoring solution to implement? A. Use the default CloudWatch configuration to EC2 instances where the memory and disk utilization metrics are already available. Install the AWS Systems Manager (SSM) Agent to all the EC2 instances. B. Install the CloudWatch agent to all the EC2 instances that gathers the memory and disk utilization data. View the custom metrics in the Amazon CloudWatch console. C. Enable the Enhanced Monitoring option in EC2 and install CloudWatch agent to all the EC2 instances to be able to view the memory and disk utilization in the CloudWatch dashboard. D. Use Amazon Inspector and install the Inspector agent to all EC2 instances.

B. Install the CloudWatch agent to all the EC2 instances that gathers the memory and disk utilization data. View the custom metrics in the Amazon CloudWatch console.

Question 58: A Solutions Architect needs to make sure that the On-Demand EC2 instance can only be accessed from this IP address (110.238.98.71) via an SSH connection. Which configuration below will satisfy this requirement? A. Security Group Inbound Rule: Protocol - UDP, Port Range - 22, Source 110.238.98.71/0 B. Security Group Inbound Rule: Protocol - TCP, Port Range - 22, Source 110.238.98.71/32 C. Security Group Inbound Rule: Protocol - TCP, Port Range - 22, Source 110.238.98.71/0 D. Security Group Inbound Rule: Protocol - UDP, Port Range - 22, Source 110.238.98.71/32

B. Security Group Inbound Rule: Protocol - TCP, Port Range - 22, Source 110.238.98.71/32
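
The /32 suffix is what restricts the rule to a single host, and SSH runs over TCP, which rules out the UDP options. This can be verified with the standard library:

```python
import ipaddress

# A /32 CIDR block contains exactly one IPv4 address, so the rule
# admits only 110.238.98.71. (A /0 suffix would match every address.)
single_host = ipaddress.ip_network("110.238.98.71/32")
print(single_host.num_addresses)                              # 1
print(ipaddress.ip_address("110.238.98.71") in single_host)   # True
print(ipaddress.ip_address("110.238.98.72") in single_host)   # False

every_host = ipaddress.ip_network("0.0.0.0/0")
print(every_host.num_addresses)                               # 4294967296
```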

Question 59: A suite of web applications is hosted in an Auto Scaling group of EC2 instances across three Availability Zones and is configured with default settings. There is an Application Load Balancer that forwards the request to the respective target group on the URL path. The scale-in policy has been triggered due to the low number of incoming traffic to the application. Which EC2 instance will be the first one to be terminated by your Auto Scaling group? A. The instance will be randomly selected by the Auto Scaling group B. The EC2 instance launched from the oldest launch configuration C. The EC2 instance which has been running for the longest time D. The EC2 instance which has the least number of user sessions

B. The EC2 instance launched from the oldest launch configuration
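
With default settings, Auto Scaling first balances across Availability Zones and then terminates the instance running on the oldest launch configuration. A simplified sketch of that selection order (the real policy has further tie-breakers, such as proximity to the next billing hour; instance IDs here are invented):

```python
def pick_instance_to_terminate(instances):
    """Simplified default termination policy:
    1) pick the AZ with the most instances,
    2) within it, pick the instance on the oldest launch configuration."""
    az_counts = {}
    for inst in instances:
        az_counts[inst["az"]] = az_counts.get(inst["az"], 0) + 1
    busiest_az = max(az_counts, key=az_counts.get)
    candidates = [i for i in instances if i["az"] == busiest_az]
    return min(candidates, key=lambda i: i["launch_config_version"])

fleet = [
    {"id": "i-1", "az": "us-east-1a", "launch_config_version": 1},  # oldest config
    {"id": "i-2", "az": "us-east-1a", "launch_config_version": 2},
    {"id": "i-3", "az": "us-east-1b", "launch_config_version": 2},
]
print(pick_instance_to_terminate(fleet)["id"])   # i-1
```

Uptime and user-session count (options C and D) play no part in the default policy, which is why B is correct.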

Question 43: A Solutions Architect identified a series of DDoS attacks while monitoring the VPC. The Architect needs to fortify the current cloud infrastructure to protect the data of the clients. Which of the following is the most suitable solution to mitigate these kinds of attacks? A. Using the AWS Firewall Manager, set up a security layer that will prevent SYN floods, UDP reflection attacks, and other DDoS attacks. B. Use AWS Shield Advanced to detect and mitigate DDoS attacks. C. A combination of Security Groups and Network Access Control Lists to only allow authorized traffic to access your VPC. D. Set up a web application firewall using AWS WAF to filter, monitor, and block HTTP traffic.

B. Use AWS Shield Advanced to detect and mitigate DDoS attacks

Question 50: An AI-powered Forex trading application consumes thousands of data sets to train its machine learning model. The application's workload requires a high-performance, parallel hot storage to process the training datasets concurrently. It also needs cost-effective cold storage to archive those datasets that yield low profit. Which of the following Amazon storage services should the developer use? A. Use Amazon FSx For Windows File Server and Amazon S3 for hot and cold storage respectively. B. Use Amazon FSx For Lustre and Amazon S3 for hot and cold storage respectively. C. Use Amazon Elastic File System and Amazon S3 for hot and cold storage respectively. D. Use Amazon FSx For Lustre and Amazon EBS Provisioned IOPS SSD (io1) volumes for hot and cold storage respectively.

B. Use Amazon FSx For Lustre and Amazon S3 for hot and cold storage respectively.

A company needs to design an online analytics application that uses Redshift Cluster for its data warehouse. Which of the following services allows them to monitor all API calls in Redshift instance and can also provide secured data for auditing and compliance purposes? A) Amazon CloudWatch B) Amazon Redshift Spectrum C) AWS CloudTrail D) AWS X-Ray

C) AWS CloudTrail

Question 21: A company plans to migrate its on-premises workload to AWS. The current architecture is composed of a Microsoft SharePoint server that uses a Windows shared file storage. The Solutions Architect needs to use a cloud storage solution that is highly available and can be integrated with Active Directory for access control and authentication. Which of the following options can satisfy the given requirement? A) Create a file system using Amazon EFS and join it to an Active Directory domain. B) Create a Network File System (NFS) file share using AWS Storage Gateway. C) Create a file system using Amazon FSx for Windows File Server and join it to an Active Directory domain in AWS. D) Launch an Amazon EC2 Windows Server to mount a new S3 bucket as a file volume.

C) Create a file system using Amazon FSx for Windows File Server and join it to an Active Directory domain in AWS.

A company needs to deploy at least 2 EC2 instances to support the normal workloads of its application and automatically scale up to 6 EC2 instances to handle the peak load. The architecture must be highly available and fault-tolerant as it is processing mission-critical workloads. As the Solutions Architect of the company, what should you do to meet the above requirement? A) Create an Auto Scaling group of EC2 instances and set the minimum capacity to 2 and the maximum capacity to 6. Use 2 Availability Zones and deploy 1 instance for each AZ. B) Create an Auto Scaling group of EC2 instances and set the minimum capacity to 2 and the maximum capacity to 6. Deploy 4 instances in Availability Zone A. C) Create an Auto Scaling group of EC2 instances and set the minimum capacity to 4 and the maximum capacity to 6. Deploy 2 instances in Availability Zone A and another 2 instances in Availability Zone B. D) Create an Auto Scaling group of EC2 instances and set the minimum capacity to 2 and the maximum capacity to 4. Deploy 2 instances in Availability Zone A and 2 instances in Availability Zone B.

C) Create an Auto Scaling group of EC2 instances and set the minimum capacity to 4 and the maximum capacity to 6. Deploy 2 instances in Availability Zone A and another 2 instances in Availability Zone B.
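
The fault-tolerance arithmetic behind answer C: the application must keep at least 2 instances running even if a whole Availability Zone fails, so a minimum of 2 spread across two AZs is not enough. A quick check of the surviving capacity under each option:

```python
def surviving_capacity(instances_per_az, failed_az):
    """Instances still running after one Availability Zone goes down."""
    return sum(n for az, n in instances_per_az.items() if az != failed_az)

REQUIRED = 2  # the normal workload needs at least 2 instances

# Option A: minimum of 2 (1 per AZ). An AZ failure leaves only 1 instance,
# below the required baseline, so it is highly available but not fault-tolerant.
print(surviving_capacity({"az-a": 1, "az-b": 1}, "az-a"))  # 1

# Option C: minimum of 4 (2 per AZ). Losing an AZ still leaves 2 instances,
# which keeps the mission-critical workload fully served.
print(surviving_capacity({"az-a": 2, "az-b": 2}, "az-a"))  # 2
```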

An online cryptocurrency exchange platform is hosted in AWS which uses ECS Cluster and RDS in Multi-AZ Deployments configuration. The application is heavily using the RDS instance to process complex read and write database operations. To maintain the reliability, availability, and performance of your systems, you have to closely monitor how the different processes or threads on a DB instance use the CPU, including the percentage of the CPU bandwidth and total memory consumed by each process. Which of the following is the most suitable solution to properly monitor your database? A) Use Amazon CloudWatch to monitor the CPU Utilization of your database. B) Create a script that collects and publishes custom metrics to CloudWatch, which tracks the real-time CPU Utilization of the RDS instance, and then set up a custom CloudWatch dashboard to view the metrics. C) Enable Enhanced Monitoring in RDS. D) Check the CPU% and MEM% metrics which are readily available in the Amazon RDS console that shows the percentage of the CPU bandwidth and total memory consumed by each database process of your RDS instance.

C) Enable Enhanced Monitoring in RDS.

A company collects atmospheric data such as temperature, air pressure, and humidity from different countries. Each site location is equipped with various weather instruments and a high-speed Internet connection. The average collected data in each location is around 500 GB and will be analyzed by a weather forecasting application hosted in Northern Virginia. As the Solutions Architect, you need to aggregate all the data in the fastest way. Which of the following options can satisfy the given requirement? A) Set up a Site-to-Site VPN connection. B) Upload the data to the closest S3 bucket. Set up a cross-region replication and copy the objects to the destination bucket. C) Enable Transfer Acceleration in the destination bucket and upload the collected data using Multipart Upload. D) Use AWS Snowball Edge to transfer large amounts of data.

C) Enable Transfer Acceleration in the destination bucket and upload the collected data using Multipart Upload.
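
Multipart upload matters here because S3 caps an upload at 10,000 parts, and each part (except the last) must be at least 5 MiB, so a 500 GB dataset needs parts of roughly 50 MiB or larger. The arithmetic:

```python
MIN_PART = 5 * 1024**2    # 5 MiB minimum part size (all parts but the last)
MAX_PARTS = 10_000        # S3 multipart upload part-count limit

def min_part_size(total_bytes):
    """Smallest part size keeping a multipart upload within 10,000 parts."""
    part = -(-total_bytes // MAX_PARTS)   # ceiling division
    return max(part, MIN_PART)

# Each ~500 GB dataset needs parts of at least ~51 MiB to fit in 10,000 parts.
size = 500 * 1024**3
print(min_part_size(size) // 1024**2)   # 51 (MiB, rounded down)
```

Transfer Acceleration then routes those parts through the nearest CloudFront edge location, which is what makes this the fastest option for sites scattered across many countries with good Internet links.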

An application that records weather data every minute is deployed in a fleet of Spot EC2 instances and uses a MySQL RDS database instance. Currently, there is only one RDS instance running in one Availability Zone. You plan to improve the database to ensure high availability by synchronous data replication to another RDS instance. Which of the following performs synchronous data replication in RDS? A) DynamoDB Read Replica B) CloudFront running as a Multi-AZ deployment C) RDS DB instance running as a Multi-AZ deployment D) RDS Read Replica

C) RDS DB instance running as a Multi-AZ deployment

A company conducted a surprise IT audit on all of the AWS resources being used in the production environment. During the audit activities, it was noted that you are using a combination of Standard and Convertible Reserved EC2 instances in your applications. Which of the following are the characteristics and benefits of using these two types of Reserved EC2 instances? (Select TWO.) A) It can enable you to reserve capacity for your Amazon EC2 instances in multiple Availability Zones and multiple AWS Regions for any duration. B) Unused Convertible Reserved Instances can later be sold at the Reserved Instance Marketplace. C) Unused Standard Reserved Instances can later be sold at the Reserved Instance Marketplace. D) It runs in a VPC on hardware that's dedicated to a single customer. E) Convertible Reserved Instances allow you to exchange for another convertible reserved instance of a different instance family.

C) Unused Standard Reserved Instances can later be sold at the Reserved Instance Marketplace. E) Convertible Reserved Instances allow you to exchange for another convertible reserved instance of a different instance family.

A company requires all the data stored in the cloud to be encrypted at rest. To easily integrate this with other AWS services, they must have full control over the encryption of the created keys and also the ability to immediately remove the key material from AWS KMS. The solution should also be able to audit the key usage independently of AWS CloudTrail. Which of the following options will meet this requirement? A) Use AWS Key Management Service to create AWS-owned CMKs and store the non-extractable key material in AWS CloudHSM. B) Use AWS Key Management Service to create AWS-managed CMKs and store the non-extractable key material in AWS CloudHSM. C) Use AWS Key Management Service to create a CMK in a custom key store and store the non-extractable key material in AWS CloudHSM. D) Use AWS Key Management Service to create a CMK in a custom key store and store the non-extractable key material in Amazon S3.

C) Use AWS Key Management Service to create a CMK in a custom key store and store the non-extractable key material in AWS CloudHSM.

Question 35: A company is designing a banking portal that uses Amazon ElastiCache for Redis as its distributed session management component. Since the other Cloud Engineers in your department have access to your ElastiCache cluster, you have to secure the session data in the portal by requiring them to enter a password before they are granted permission to execute Redis commands. As the Solutions Architect, which of the following should you do to meet the above requirement? A. Set up an IAM Policy and MFA which requires the Cloud Engineers to enter their IAM credentials and token before they can access the ElastiCache cluster. B. Set up a Redis replication group and enable the AtRestEncryptionEnabled parameter. C. Authenticate the users using Redis AUTH by creating a new Redis Cluster with both the --transit-encryption-enabled and --auth-token parameters enabled. D. Enable the in-transit encryption for Redis replication groups.

C. Authenticate the users using Redis AUTH by creating a new Redis Cluster with both the --transit-encryption-enabled and --auth-token parameters enabled.

Question 62: A startup is using Amazon RDS to store data from a web application. Most of the time, the application has low user activity but it receives bursts of traffic within seconds whenever there is a new product announcement. The Solutions Architect needs to create a solution that will allow users around the globe to access the data using an API. What should the Solutions Architect do to meet the above requirement? A. Create an API using Amazon API Gateway and use the Amazon ECS cluster with Service Auto Scaling to handle the bursts of traffic in seconds. B. Create an API using Amazon API Gateway and use Amazon Elastic Beanstalk with Auto Scaling to handle the bursts of traffic in seconds. C. Create an API using Amazon API Gateway and use AWS Lambda to handle the bursts of traffic in seconds. D. Create an API using Amazon API Gateway and use an Auto Scaling group of Amazon EC2 instances to handle the bursts of traffic in seconds.

C. Create an API using Amazon API Gateway and use AWS Lambda to handle the bursts of traffic in seconds.

Question 47: A telecommunications company is planning to give AWS Console access to developers. Company policy mandates the use of identity federation and role-based access control. Currently, the roles are already assigned using groups in the corporate Active Directory. In this scenario, what combination of the following services can provide developers access to the AWS console? (Select TWO.) A. AWS Directory Service Simple AD B. Lambda C. IAM Roles D. AWS Directory Service AD Connector E. IAM Groups

C. IAM Roles D. AWS Directory Service AD Connector

Question 46: An organization needs to provision a new Amazon EC2 instance with a persistent block storage volume to migrate data from its on-premises network to AWS. The required maximum performance for the storage volume is 64,000 IOPS. In this scenario, which of the following can be used to fulfill this requirement? A. Launch any type of Amazon EC2 instance and attach a Provisioned IOPS SSD EBS volume (io1) with 64,000 IOPS. B. Directly attach multiple Instance Store volumes in an EC2 instance to deliver maximum IOPS performance. C. Launch a Nitro-based EC2 instance and attach a Provisioned IOPS SSD EBS volume (io1) with 64,000 IOPS. D. Launch an Amazon EFS file system and mount it to a Nitro-based Amazon EC2 instance and set the performance mode to Max I/O.

C. Launch a Nitro-based EC2 instance and attach a Provisioned IOPS SSD EBS volume (io1) with 64,000 IOPS.

Question 57: The company that you are working for has a highly available architecture consisting of an elastic load balancer and several EC2 instances configured with auto-scaling in three Availability Zones. You want to monitor your EC2 instances based on a particular metric, which is not readily available in CloudWatch. Which of the following is a custom metric in CloudWatch which you have to manually set up? A. Network packets out of an EC2 instance B. Disk Reads activity of an EC2 instance C. Memory Utilization of an EC2 instance D. CPU Utilization of an EC2 instance

C. Memory Utilization of an EC2 instance

Question 51: An application hosted in EC2 consumes messages from an SQS queue and is integrated with SNS to send out an email to you once the process is complete. The Operations team received 5 orders but after a few hours, they saw 20 email notifications in their inbox. Which of the following could be the possible culprit for this issue? A. The web application does not have permission to consume messages in the SQS queue. B. The web application is set for long polling so the messages are being sent twice. C. The web application is not deleting the messages in the SQS queue after it has processed them. D. The web application is set to short polling so some messages are not being picked up

C. The web application is not deleting the messages in the SQS queue after it has processed them.
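
The 5-orders-20-emails symptom is classic redelivery: SQS only hides a received message for the visibility timeout, and unless the consumer deletes it, the message reappears and gets processed again. A toy model of the buggy consumer (the queue mechanics are simplified; no real polling or timeouts):

```python
class SqsQueue:
    """Toy model of SQS redelivery: a received message is only hidden
    temporarily; unless the consumer deletes it, it becomes visible
    again on the next poll and is processed repeatedly."""
    def __init__(self, messages):
        self.messages = list(messages)

    def receive(self):
        return list(self.messages)   # undeleted messages reappear

    def delete(self, msg):
        self.messages.remove(msg)

queue = SqsQueue(["order-1"])
emails_sent = 0

# Buggy consumer: processes the message but never calls delete(),
# so every polling cycle re-processes it and fires another SNS email.
for _ in range(4):                   # four polling cycles
    for msg in queue.receive():
        emails_sent += 1             # one notification per processing
print(emails_sent)                   # 4 emails for a single order
```

Calling `queue.delete(msg)` after a successful processing step is the fix, which is exactly what answer C describes.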

Question 56: A content management system (CMS) is hosted on a fleet of auto-scaled, On-Demand EC2 instances that use Amazon Aurora as its database. Currently, the system stores the file documents that the users upload in one of the attached EBS Volumes. Your manager noticed that the system performance is quite slow and he has instructed you to improve the architecture of the system. In this scenario, what will you do to implement a scalable, highly available, POSIX-compliant shared file system? A. Create an S3 bucket and use this as the storage for the CMS B. Use ElastiCache C. Use EFS D. Upgrade your existing EBS volumes to Provisioned IOPS SSD Volumes

C. Use EFS

Question 63: A web application is using CloudFront to distribute their images, videos, and other static contents stored in their S3 bucket to its users around the world. The company has recently introduced a new member-only access to some of its high quality media files. There is a requirement to provide access to multiple private media files only to their paying subscribers without having to change their current URLs. Which of the following is the most suitable solution that you should implement to satisfy this requirement? A. Configure your CloudFront distribution to use Match Viewer as its Origin Protocol Policy which will automatically match the user request. This will allow access to the private content if the request is a paying member and deny it if it is not a member. B. Configure your CloudFront distribution to use Field-Level Encryption to protect your private data and only allow access to members. C. Use Signed Cookies to control who can access the private files in your CloudFront distribution by modifying your application to determine whether a user should have access to your content. For members, send the required Set-Cookie headers to the viewer which will unlock the content only to them. D. Create a Signed URL with a custom policy which only allows the members to see the private files.

C. Use Signed Cookies to control who can access the private files in your CloudFront distribution by modifying your application to determine whether a user should have access to your content. For members, send the required Set-Cookie headers to the viewer which will unlock the content only to them.

A company hosted a web application in an Auto Scaling group of EC2 instances. The IT manager is concerned about the over-provisioning of the resources that can cause higher operating costs. A Solutions Architect has been instructed to create a cost-effective solution without affecting the performance of the application. Which dynamic scaling policy should be used to satisfy this requirement? A. Use simple scaling. B. Use suspend and resume scaling. C. Use target tracking scaling. D. Use scheduled scaling.

C. Use target tracking scaling.

