
A business currently deploys its application to production manually and wishes to transition to a more mature deployment methodology. The firm has commissioned a solutions architect to provide a solution that uses the company's existing Chef tools and expertise. Before the application is delivered to production, it must first be deployed to a staging environment for testing and verification. If faults are identified after a deployment, it must be rolled back within 5 minutes. Which AWS service and deployment strategy should be used to satisfy these requirements? A. Use AWS Elastic Beanstalk and deploy the application using a rolling update deployment strategy. B. Use AWS CodePipeline and deploy the application using a rolling update deployment strategy. C. Use AWS CodeBuild and deploy the application using a canary deployment strategy. D. Use AWS OpsWorks and deploy the application using a blue/green deployment strategy.

A

A business is in the process of reworking an existing web service that gives read-write access to structured data. The service must be able to react quickly to brief but large surges in system demand. The service must be fault tolerant across many AWS Regions. What should be done to meet these requirements? A. Store the data in Amazon DocumentDB. Create a single global Amazon CloudFront distribution with a custom origin built on edge-optimized Amazon API Gateway and AWS Lambda. Assign the company's domain as an alternate domain for the distribution, and configure Amazon Route 53 with an alias to the CloudFront distribution. B. Store the data in replicated Amazon S3 buckets in two Regions. Create an Amazon CloudFront distribution in each Region, with custom origins built on Amazon API Gateway and AWS Lambda launched in each Region. Assign the company's domain as an alternate domain for both distributions, and configure Amazon Route 53 with a failover routing policy between them. C. Store the data in an Amazon DynamoDB global table in two Regions using on-demand capacity mode. In both Regions, run the web service as Amazon ECS Fargate tasks in an Auto Scaling ECS service behind an Application Load Balancer (ALB). In Amazon Route 53, configure an alias record in the company's domain and a Route 53 latency-based routing policy with health checks to distribute traffic between the two ALBs. D. Store the data in Amazon Aurora global databases. Add Auto Scaling replicas to both Regions. Run the web service on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer in each Region. Configure the instances to download the web service code in the user data. In Amazon Route 53, configure an alias record for the company's domain and a multi-value routing policy.

A

A business operates a big customer service contact center and uses a proprietary application to store and process call recordings. Approximately 2% of the call recordings are transcribed by an offshore team for quality assurance purposes. Transcription of these recordings can take up to 72 hours. After 90 days, the recordings are archived to an offsite location through an NFS share. The organization uses Linux servers to process the call recordings and manage the transcription queue. Additionally, a web application is available for quality assurance personnel to review and score call recordings. The corporation intends to migrate the system to AWS in order to cut storage costs and transcription time. Which course of action should be adopted to accomplish the company's goals? A. Upload the call recordings to Amazon S3 from the call center. Set up an S3 lifecycle policy to move the call recordings to Amazon S3 Glacier after 90 days. Use an AWS Lambda trigger to transcribe the call recordings with Amazon Transcribe. Use Amazon S3, Amazon API Gateway, and Lambda to host the review and scoring application. B. Upload the call recordings to Amazon S3 from the call center. Set up an S3 lifecycle policy to move the call recordings to Amazon S3 Glacier after 90 days. Use an AWS Lambda trigger to transcribe the call recordings with Amazon Mechanical Turk. Use Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer to host the review and scoring application. C. Use Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer to host the review and scoring application. Upload the call recordings to this application from the call center and store them on an Amazon EFS mount point. Use AWS Backup to archive the call recordings after 90 days. Transcribe the call recordings with Amazon Transcribe. D. Upload the call recordings to Amazon S3 from the call center and put the object key in an Amazon SQS queue. Set up an S3 lifecycle policy to move the call recordings to Amazon S3 Glacier after 90 days. Use Amazon EC2 instances in an Auto Scaling group to send the recordings to Amazon Mechanical Turk for transcription. Use the number of objects in the queue as the scaling metric. Use Amazon S3, Amazon API Gateway, and AWS Lambda to host the review and scoring application.

A
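As an illustration of the Lambda-triggered transcription in option A, here is a minimal sketch of an S3-triggered Lambda handler that starts an Amazon Transcribe job. The bucket names, media format, and job-naming scheme are assumptions for illustration, not details given in the question.

import urllib.parse
import boto3

transcribe = boto3.client("transcribe")

def handler(event, context):
    # Invoked by an S3 ObjectCreated event on the recordings bucket.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = urllib.parse.unquote_plus(record["object"]["key"])

    transcribe.start_transcription_job(
        TranscriptionJobName=key.replace("/", "-"),  # job names allow [0-9a-zA-Z._-]
        Media={"MediaFileUri": f"s3://{bucket}/{key}"},
        MediaFormat="wav",                      # assumption: recordings are WAV files
        LanguageCode="en-US",
        OutputBucketName="transcripts-bucket",  # hypothetical output bucket
    )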

A business uses an on-premises software-as-a-service (SaaS) solution that ingests many files daily. To simplify file transfers, the firm offers various public SFTP endpoints to its clients. The clients add the IP addresses of the SFTP endpoints to their firewall's outbound traffic allow list. Changes to the IP addresses of the SFTP endpoints are not permitted. The firm wants to migrate the SaaS solution to AWS and reduce the file transfer service's operating expenses. Which solution satisfies these criteria? A. Register the customer-owned block of IP addresses in the company's AWS account. Create Elastic IP addresses from the address pool and assign them to an AWS Transfer for SFTP endpoint. Use AWS Transfer to store the files in Amazon S3. B. Add a subnet containing the customer-owned block of IP addresses to a VPC. Create Elastic IP addresses from the address pool and assign them to an Application Load Balancer (ALB). Launch EC2 instances hosting FTP services in an Auto Scaling group behind the ALB. Store the files in attached Amazon Elastic Block Store (Amazon EBS) volumes. C. Register the customer-owned block of IP addresses with Amazon Route 53. Create alias records in Route 53 that point to a Network Load Balancer (NLB). Launch EC2 instances hosting FTP services in an Auto Scaling group behind the NLB. Store the files in Amazon S3. D. Register the customer-owned block of IP addresses in the company's AWS account. Create Elastic IP addresses from the address pool and assign them to an Amazon S3 VPC endpoint. Enable SFTP support on the S3 bucket.

A

A consortium of academic institutes and hospitals is collaborating to investigate two petabytes of genetic data. The data is stored and updated frequently by the institution that owns it in an Amazon S3 bucket. The institution wishes to provide read access to the data to all entities participating in the cooperation. Each partner is particularly cost-conscious, and the institution that owns the S3 bucket is concerned about paying the costs of requests and data transfers from Amazon S3. Which method enables safe data sharing without requiring the bucket owner to bear all expenses associated with S3 requests and data transfers? A. Ensure that all organizations in the partnership have AWS accounts. In the account with the S3 bucket, create a cross-account role for each account in the partnership that allows read access to the data. Have the organizations assume and use that read role when accessing the data. B. Ensure that all organizations in the partnership have AWS accounts. Create a bucket policy on the bucket that owns the data. The policy should allow the accounts in the partnership read access to the bucket. Enable Requester Pays on the bucket. Have the organizations use their AWS credentials when accessing the data. C. Ensure that all organizations in the partnership have AWS accounts. Configure buckets in each of the accounts with a bucket policy that allows the institute that owns the data the ability to write to the bucket. Periodically sync the data from the institute's account to the other organizations. Have the organizations use their AWS credentials when accessing the data using their accounts. D. Ensure that all organizations in the partnership have AWS accounts. In the account with the S3 bucket, create a cross-account role for each account in the partnership that allows read access to the data. Enable Requester Pays on the bucket. Have the organizations assume and use that read role when accessing the data.

A
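A minimal boto3 sketch of the Requester Pays mechanics the question turns on, with hypothetical bucket and key names: the owner enables Requester Pays once, and each reader must explicitly acknowledge that it pays for the request.

import boto3

s3 = boto3.client("s3")

# Bucket owner: enable Requester Pays (one-time configuration).
s3.put_bucket_request_payment(
    Bucket="genomics-data-bucket",
    RequestPaymentConfiguration={"Payer": "Requester"},
)

# Partner account: the RequestPayer flag acknowledges that the requester
# pays the request and data transfer charges instead of the bucket owner.
obj = s3.get_object(
    Bucket="genomics-data-bucket",
    Key="cohort-1/sample.vcf",
    RequestPayer="requester",
)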

A consumer electronics business with locations in Europe and Asia has 60 TB of software images stored in its European facilities. The business wishes to upload the images to an Amazon S3 bucket in the ap-northeast-1 Region. New software images are created each day and must be encrypted in transit. The organization requires an automated transfer of all current and new software images to Amazon S3 that does not require custom programming. What is the next step in the transfer process? A. Deploy an AWS DataSync agent and configure a task to transfer the images to the S3 bucket B. Configure Amazon Kinesis Data Firehose to transfer the images using S3 Transfer Acceleration C. Use an AWS Snowball device to transfer the images with the S3 bucket as the target D. Transfer the images over a Site-to-Site VPN connection using the S3 API with multipart upload

A

A corporation that offers artwork auction services has users in North America and Europe. The company's application is hosted on Amazon EC2 instances in the us-east-1 Region. Artists submit images of their work from their mobile phones to a centralized Amazon S3 bucket created in the us-east-1 Region. Users in Europe are complaining that their image uploads are slow. How can a solutions architect improve the performance of the image upload process? A. Redeploy the application to use S3 multipart uploads. B. Create an Amazon CloudFront distribution and point to the application as a custom origin. C. Configure the buckets to use S3 Transfer Acceleration. D. Create an Auto Scaling group for the EC2 instances and create a scaling policy.

A
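To show what option C involves, here is a sketch (bucket and file names hypothetical) that enables S3 Transfer Acceleration and then uploads through the accelerate endpoint, which routes uploads via the nearest edge location.

import boto3
from botocore.config import Config

s3 = boto3.client("s3")

# One-time configuration: enable Transfer Acceleration on the bucket.
s3.put_bucket_accelerate_configuration(
    Bucket="artwork-uploads",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Uploads then enter AWS at the nearest edge location instead of us-east-1.
accelerated = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
accelerated.upload_file("artwork.jpg", "artwork-uploads", "uploads/artwork.jpg")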

A financial services business gets an automated data feed from its credit card servicing partner on a regular basis. Every 15 minutes, around 5,000 records are transferred in plaintext over HTTPS straight to an Amazon S3 bucket with server-side encryption. This feed contains personally identifiable information (PII) associated with credit card primary account numbers (PANs). Before transmitting the data to another S3 bucket for additional internal processing, the organization must automatically mask the PAN. Additionally, the business must remove and merge particular fields before converting the data to JSON format. Additional feeds are also a possibility in the future, so any design must be readily scalable. Which solution will satisfy these criteria? A. Trigger an AWS Lambda function on file delivery that extracts each record and writes it to an Amazon SQS queue. Trigger another Lambda function when new messages arrive in the SQS queue to process the records, writing the results to a temporary location in Amazon S3. Trigger a final Lambda function once the SQS queue is empty to transform the records into JSON format and send the results to another S3 bucket for internal processing. B. Trigger an AWS Lambda function on file delivery that extracts each record and writes it to an Amazon SQS queue. Configure an AWS Fargate container application to automatically scale to a single instance when the SQS queue contains messages. Have the application process each record, and transform the record into JSON format. When the queue is empty, send the results to another S3 bucket for internal processing and scale down the AWS Fargate instance. C. Create an AWS Glue crawler and custom classifier based on the data feed formats and build a table definition to match. Trigger an AWS Lambda function on file delivery to start an AWS Glue ETL job to transform the entire record according to the processing and transformation requirements. Define the output format as JSON. Once complete, have the ETL job send the results to another S3 bucket for internal processing. D. Create an AWS Glue crawler and custom classifier based upon the data feed formats and build a table definition to match. Perform an Amazon Athena query on file delivery to start an Amazon EMR ETL job to transform the entire record according to the processing and transformation requirements. Define the output format as JSON. Once complete, send the results to another S3 bucket for internal processing and scale down the EMR cluster.

A
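For the masking step that every option shares, here is a sketch of an S3-triggered Lambda function that masks PANs before writing to the processing bucket. The regex, bucket names, and plain-text record format are assumptions for illustration only.

import re
import boto3

s3 = boto3.client("s3")
# Assumption: 16-digit PANs; keep the first 6 and last 4 digits visible.
PAN_RE = re.compile(r"\b(\d{6})\d{6}(\d{4})\b")

def handler(event, context):
    # Invoked on file delivery to the ingest bucket.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode()
    masked = PAN_RE.sub(r"\1******\2", body)

    s3.put_object(Bucket="processed-feed-bucket", Key=key, Body=masked)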

A firm is developing a data processing platform that will analyze and store the results of a large number of files stored in an Amazon S3 bucket. These files will be processed only once and must be retained for one year. The organization wants to guarantee that both the source files and the processed data are highly available in multiple AWS Regions. Which solution will satisfy these criteria? A. Create an S3 CreateObject event notification to copy the file to Amazon Elastic Block Store (Amazon EBS). Use AWS DataSync to sync the files between EBS volumes in multiple Regions. Use an Amazon EC2 Auto Scaling group in multiple Regions to attach the EBS volumes. Process the files and store the results in a DynamoDB global table in multiple Regions. Configure the S3 bucket with an S3 Lifecycle policy to move the files to S3 Glacier after 1 year. B. Create an S3 CreateObject event notification to copy the file to Amazon Elastic File System (Amazon EFS). Use AWS DataSync to sync the files between EFS volumes in multiple Regions. Use an AWS Lambda function to process the EFS files and store the results in a DynamoDB global table in multiple Regions. Configure the S3 buckets with an S3 Lifecycle policy to move the files to S3 Glacier after 1 year. C. Copy the files to an S3 bucket in another Region by using cross-Region replication. Create an S3 CreateObject event notification on the original bucket to push S3 file paths into Amazon EventBridge (Amazon CloudWatch Events). Use an AWS Lambda function to poll EventBridge (CloudWatch Events) to process each file and store the results in a DynamoDB table in each Region. Configure both S3 buckets to use the S3 Standard-Infrequent Access (S3 Standard-IA) storage class and an S3 Lifecycle policy to delete the files after 1 year. D. Copy the files to an S3 bucket in another Region by using cross-Region replication. Create an S3 CreateObject event notification on the original bucket to execute an AWS Lambda function to process each file and store the results in a DynamoDB global table in multiple Regions. Configure both S3 buckets to use the S3 Standard-Infrequent Access (S3 Standard-IA) storage class and an S3 Lifecycle policy to delete the files after 1 year.

A

A firm has numerous business units. Each business unit has its own AWS account and uses it to host a single website. Additionally, the firm has a single logging account. The logging account aggregates the logs from each business unit website into a single Amazon S3 bucket. The S3 bucket policy grants each business unit access to the bucket and mandates that data be encrypted. The firm must use a single AWS Key Management Service (AWS KMS) CMK to encrypt logs submitted to the bucket. The CMK that protects the data must be rotated every 365 days. Which strategy is the MOST operationally efficient for the organization to follow in order to satisfy these requirements? A. Create a customer managed CMK in the logging account. Update the CMK key policy to provide access to the logging account only. Manually rotate the CMK every 365 days. B. Create a customer managed CMK in the logging account. Update the CMK key policy to provide access to the logging account and business unit accounts. Enable automatic rotation of the CMK. C. Use an AWS managed CMK in the logging account. Update the CMK key policy to provide access to the logging account and business unit accounts. Manually rotate the CMK every 365 days. D. Use an AWS managed CMK in the logging account. Update the CMK key policy to provide access to the logging account only. Enable automatic rotation of the CMK.

A
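The automatic rotation in options B and D is a single API call on a customer managed CMK; a sketch with a hypothetical key ID:

import boto3

kms = boto3.client("kms")
key_id = "1234abcd-12ab-34cd-56ef-1234567890ab"  # hypothetical CMK ID

# Automatic rotation rotates the backing key material every 365 days.
kms.enable_key_rotation(KeyId=key_id)
print(kms.get_key_rotation_status(KeyId=key_id)["KeyRotationEnabled"])  # True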

A firm that tracks medical equipment in hospitals wants to migrate to the AWS Cloud from its current storage solution. All of the company's products are equipped with sensors that collect location and usage data. This sensor data is sent in irregular patterns punctuated by large spikes. The data is kept in a MySQL database that each hospital maintains on-site. The business needs a cloud storage solution that scales with usage. The analytics team at the firm analyzes sensor data to determine utilization by device type and hospital. The team must retain its local analytics tools while retrieving data from the cloud. Additionally, the team must make minimal modifications to existing Java applications and SQL queries. How can a solutions architect satisfy these needs while also ensuring the security of the sensor data? A. Store the data in an Amazon Aurora Serverless database. Serve the data through a Network Load Balancer (NLB). Authenticate users using the NLB with credentials stored in AWS Secrets Manager. B. Store the data in an Amazon S3 bucket. Serve the data through Amazon QuickSight using an IAM user authorized with AWS Identity and Access Management (IAM) with the S3 bucket as the data source. C. Store the data in an Amazon Aurora Serverless database. Serve the data through the Aurora Data API using an IAM user authorized with AWS Identity and Access Management (IAM) and the AWS Secrets Manager ARN. D. Store the data in an Amazon S3 bucket. Serve the data through Amazon Athena using AWS PrivateLink to secure the data in transit.

A

A major corporation recently encountered an unanticipated rise in its Amazon RDS and Amazon DynamoDB costs. The company must improve the visibility and timeliness of its AWS Billing and Cost Management data. A variety of accounts, including several development and production accounts, are managed with AWS Organizations. Although the business lacks a uniform tagging standard, requirements mandate that all infrastructure be deployed using AWS CloudFormation with consistent tagging. Management requires cost center and project ID values for all current and future DynamoDB tables and RDS instances. Which solution architecture approach should be used to achieve these requirements? A. Use Tag Editor to tag existing resources. Create cost allocation tags to define the cost center and project ID and allow 24 hours for tags to propagate to existing resources. B. Use an AWS Config rule to alert the finance team of untagged resources. Create a centralized AWS Lambda based solution to tag untagged RDS databases and DynamoDB resources every hour using a cross-account role. C. Use Tag Editor to tag existing resources. Create cost allocation tags to define the cost center and project ID. Use SCPs to restrict creation of resources that do not have the cost center and project ID on the resource. D. Create cost allocation tags to define the cost center and project ID and allow 24 hours for tags to propagate to existing resources. Update existing federated roles to restrict provisioning of resources that do not include the cost center and project ID on the resource.

A

A major organization operates a mission-critical application in a single AWS Region. The application is powered by multiple Amazon EC2 instances and an Amazon RDS Multi-AZ database. The EC2 instances are distributed across several Availability Zones in an Amazon EC2 Auto Scaling group. A solutions architect is implementing the application's disaster recovery (DR) strategy. The solutions architect has deployed a pilot light environment in a new Region, dubbed the DR Region. The DR environment comprises an Auto Scaling group with a single EC2 instance and a read replica of the RDS DB instance. The solutions architect must automate the failover of the main application environment to the pilot light environment in the DR Region. Which option satisfies these conditions the most efficiently? A. Publish an application availability metric to Amazon CloudWatch in the DR Region from the application environment in the primary Region. Create a CloudWatch alarm in the DR Region that is invoked when the application availability metric stops being delivered. Configure the CloudWatch alarm to send a notification to an Amazon Simple Notification Service (Amazon SNS) topic in the DR Region. Add an email subscription to the SNS topic that sends messages to the application owner. Upon notification, instruct a systems operator to sign in to the AWS Management Console and initiate failover operations for the application. B. Create a cron task that runs every 5 minutes by using one of the application's EC2 instances in the primary Region. Configure the cron task to check whether the application is available. Upon failure, the cron task notifies a systems operator and attempts to restart the application services. C. Create a cron task that runs every 5 minutes by using one of the application's EC2 instances in the primary Region. Configure the cron task to check whether the application is available. Upon failure, the cron task modifies the DR environment by promoting the read replica and by adding EC2 instances to the Auto Scaling group. D. Publish an application availability metric to Amazon CloudWatch in the DR Region from the application environment in the primary Region. Create a CloudWatch alarm in the DR Region that is invoked when the application availability metric stops being delivered. Configure the CloudWatch alarm to send a notification to an Amazon Simple Notification Service (Amazon SNS) topic in the DR Region. Use an AWS Lambda function that is invoked by Amazon SNS in the DR Region to promote the read replica and to add EC2 instances to the Auto Scaling group.

A
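A sketch of the Lambda function in option D that performs the failover; the replica identifier and Auto Scaling group name are hypothetical, and the capacity numbers are placeholders.

import boto3

rds = boto3.client("rds")
autoscaling = boto3.client("autoscaling")

def handler(event, context):
    # Invoked by SNS when the availability-metric alarm fires in the DR Region.
    rds.promote_read_replica(DBInstanceIdentifier="app-db-replica")
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName="app-dr-asg",
        MinSize=2,
        MaxSize=6,
        DesiredCapacity=2,
    )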

A new AWS customer reports that it has reached the service limits on multiple accounts that are on the Basic Support plan. The firm wants to avoid this occurrence in the future. What is the MOST EFFECTIVE method for monitoring and managing all service limits across the business's accounts? A. Use Amazon CloudWatch and AWS Lambda to periodically calculate the limits across all linked accounts using AWS Trusted Advisor, provide notifications using Amazon SNS if the limits are close to exceeding the threshold. B. Reach out to AWS Support to proactively increase the limits across all accounts. That way, the customer avoids creating and managing infrastructure just to raise the service limits. C. Use Amazon CloudWatch and AWS Lambda to periodically calculate the limits across all linked accounts using AWS Trusted Advisor, programmatically increase the limits that are close to exceeding the threshold. D. Use Amazon CloudWatch and AWS Lambda to periodically calculate the limits across all linked accounts using AWS Trusted Advisor, and use Amazon SNS for notifications if a limit is close to exceeding the threshold. Ensure that the accounts are using the AWS Business Support plan at a minimum.

A
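A sketch of the Trusted Advisor polling that options A, C, and D rely on; note that the AWS Support API itself requires at least a Business Support plan, which is why option D adds that condition.

import boto3

# The AWS Support API is only available on Business/Enterprise Support plans.
support = boto3.client("support", region_name="us-east-1")

# Find the Service Limits check, then read its flagged resources.
checks = support.describe_trusted_advisor_checks(language="en")["checks"]
limit_check = next(c for c in checks if c["name"] == "Service Limits")

result = support.describe_trusted_advisor_check_result(
    checkId=limit_check["id"], language="en"
)
for resource in result["result"]["flaggedResources"]:
    print(resource["metadata"])  # region, service, limit name, usage, limit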

A solutions architect has been tasked with migrating a 50 TB Oracle data warehouse containing sales data from on-premises to Amazon Redshift. The sales data is updated heavily on the last calendar day of the month. For the rest of the month, the data warehouse receives relatively small daily updates and is largely used for reads and reporting. As a result, the migration process must begin on the first day of the month and conclude before the next batch of changes. This gives roughly 30 days to finish the migration and guarantee that the small daily updates are synchronized with the Amazon Redshift data warehouse. Because the migration cannot disrupt routine corporate network activities, it has been granted 50 Mbps of capacity for data transmission over the internet. The business wants to minimize data transfer costs. Which process will enable the solutions architect to complete the migration on time? A. Install Oracle database software on an Amazon EC2 instance. Configure VPN connectivity between AWS and the company's data center. Configure the Oracle database running on Amazon EC2 to join the Oracle Real Application Clusters (RAC). When the Oracle database on Amazon EC2 finishes synchronizing, create an AWS DMS ongoing replication task to migrate the data from the Oracle database on Amazon EC2 to Amazon Redshift. Verify the data migration is complete and perform the cut over to Amazon Redshift. B. Create an AWS Snowball import job. Export a backup of the Oracle data warehouse. Copy the exported data to the Snowball device. Return the Snowball device to AWS. Create an Amazon RDS for Oracle database and restore the backup file to that RDS instance. Create an AWS DMS task to migrate the data from the RDS for Oracle database to Amazon Redshift. Copy daily incremental backups from Oracle in the data center to the RDS for Oracle database over the internet. Verify the data migration is complete and perform the cut over to Amazon Redshift. C. Install Oracle database software on an Amazon EC2 instance. To minimize the migration time, configure VPN connectivity between AWS and the company's data center by provisioning a 1 Gbps AWS Direct Connect connection. Configure the Oracle database running on Amazon EC2 to be a read replica of the data center Oracle database. Start the synchronization process between the company's on-premises data center and the Oracle database on Amazon EC2. When the Oracle database on Amazon EC2 is synchronized with the on-premises database, create an AWS DMS ongoing replication task to migrate the data from the Oracle database read replica that is running on Amazon EC2 to Amazon Redshift. Verify the data migration is complete and perform the cut over to Amazon Redshift. D. Create an AWS Snowball import job. Configure a server in the company's data center with an extraction agent. Use AWS SCT to manage the extraction agent and convert the Oracle schema to an Amazon Redshift schema. Create a new project in AWS SCT using the registered data extraction agent. Create a local task and an AWS DMS task in AWS SCT with replication of ongoing changes. Copy data to the Snowball device and return the Snowball device to AWS. Allow AWS DMS to copy data from Amazon S3 to Amazon Redshift. Verify that the data migration is complete and perform the cut over to Amazon Redshift.

A

Currently, a company's data is stored in an IBM Db2 database. A web application invokes an API that executes stored procedures on the database to retrieve read-only user information. This data is historical in nature and is updated periodically. When a user signs in, this data must be retrieved within three seconds. The stored procedures are invoked each time a user signs in. Users check stock prices multiple times a day. Due to Db2 CPU licensing, running this database has become too expensive, and performance objectives are not being met. Db2 timeouts are widespread as a result of long-running queries. Which migration strategy should a solutions architect use to move this system to AWS? A. Rehost the Db2 database in Amazon Fargate. Migrate all the data. Enable caching in Fargate. Refactor the API to use the Fargate Db2 database. Implement Amazon API Gateway and enable API caching. B. Use AWS DMS to migrate data to Amazon DynamoDB using a continuous replication task. Refactor the API to use the DynamoDB data. Implement the refactored API in Amazon API Gateway and enable API caching. C. Create a local cache on the mainframe to store query outputs. Use SFTP to sync to Amazon S3 on a daily basis. Refactor the API to use Amazon EFS. Implement Amazon API Gateway and enable API caching. D. Extract data daily and copy the data to AWS Snowball for storage on Amazon S3. Sync daily. Refactor the API to use the S3 data. Implement Amazon API Gateway and enable API caching.

A

An international corporation deployed a multi-tier web application that depends on DynamoDB in a single Region. Due to regulatory requirements, it needs disaster recovery capability in a different Region with a recovery time objective of two hours and a recovery point objective of twenty-four hours. It should regularly synchronize its data and be able to quickly deploy the web application using CloudFormation. The goal is to make minimal modifications to the current web application, to control the throughput of the DynamoDB table used for data synchronization, and to synchronize only updated elements. Which design would you choose to meet these specifications? A. Use AWS Data Pipeline to schedule a DynamoDB cross-region copy once a day, and create a "LastUpdated" attribute in your DynamoDB table that represents the timestamp of the last update; use it as a filter. B. Use EMR and write a custom script to retrieve data from DynamoDB in the current region using a SCAN operation and push it to DynamoDB in the second region. C. Use AWS Data Pipeline to schedule an export of the DynamoDB table to S3 in the current region once a day, then schedule another task immediately after it that will import data from S3 to DynamoDB in the other region. D. Also send each write to an SQS queue in the second region; use an Auto Scaling group behind the SQS queue to replay the writes in the second region.

A

An ERP application is distributed across multiple AZs in a single region. If a failure occurs, the Recovery Time Objective (RTO) must be less than three hours and the Recovery Point Objective (RPO) must be less than fifteen minutes. The customer becomes aware of data corruption that occurred roughly 1.5 hours ago. What disaster recovery technique can be used to achieve this RTO and RPO in the event of this kind of failure? A. Take hourly DB backups to S3, with transaction logs stored in S3 every 5 minutes. B. Use synchronous database master-slave replication between two availability zones. C. Take hourly DB backups to EC2 instance store volumes with transaction logs stored in S3 every 5 minutes. D. Take 15-minute DB backups stored in Glacier with transaction logs stored in S3 every 5 minutes.

A

A business must deploy multiple independent instances of an application. The front-end application is web-based. Corporate policy requires that the backends be isolated from one another and from the internet, yet accessible from a centralized administration server. The application setup process should be automated to eliminate the possibility of human error when deploying new instances. Which solution satisfies all objectives while minimizing costs? A. Use an AWS CloudFormation template to create identical IAM roles for each region. Use AWS CloudFormation StackSets to deploy each application instance by using parameters to customize for each instance, and use security groups to isolate each instance while permitting access to the central server. B. Create each instance of the application IAM roles and resources in separate accounts by using AWS CloudFormation StackSets. Include a VPN connection to the VPN gateway of the central administration server. C. Duplicate the application IAM roles and resources in separate accounts by using a single AWS CloudFormation template. Include VPC peering to connect the VPC of each application instance to a central VPC. D. Use the parameters of the AWS CloudFormation template to customize the deployment into separate accounts. Include a NAT gateway to allow communication back to the central administration server.

A

The public-facing WordPress site of a European online newspaper is hosted in a colocated data center in London. The current WordPress architecture comprises a load balancer, two web servers, and one MySQL database server. A solutions architect is tasked with designing a solution that meets the following criteria:
- Improve the website's performance
- Make the web tier scalable and stateless
- Improve the database server's performance for read-heavy workloads
- Reduce latency for users in Europe and the United States
- Design a new architecture with a goal of 99.9 percent availability
Which solution satisfies these criteria while improving operational efficiency? A. Use an Application Load Balancer (ALB) in front of an Auto Scaling group of WordPress Amazon EC2 instances in one AWS Region and three Availability Zones. Configure an Amazon ElastiCache cluster in front of a Multi-AZ Amazon Aurora MySQL DB cluster. Move the WordPress shared files to Amazon EFS. Configure Amazon CloudFront with the ALB as the origin, and select a price class that includes the US and Europe. B. Use an Application Load Balancer (ALB) in front of an Auto Scaling group of WordPress Amazon EC2 instances in two AWS Regions and two Availability Zones in each Region. Configure an Amazon ElastiCache cluster in front of a global Amazon Aurora MySQL database. Move the WordPress shared files to Amazon EFS. Configure Amazon CloudFront with the ALB as the origin, and select a price class that includes the US and Europe. Configure EFS cross-Region replication. C. Use an Application Load Balancer (ALB) in front of an Auto Scaling group of WordPress Amazon EC2 instances in one AWS Region and three Availability Zones. Configure an Amazon DocumentDB table in front of a Multi-AZ Amazon Aurora MySQL DB cluster. Move the WordPress shared files to Amazon EFS. Configure Amazon CloudFront with the ALB as the origin, and select a price class that includes all global locations. D. Use an Application Load Balancer (ALB) in front of an Auto Scaling group of WordPress Amazon EC2 instances in two AWS Regions and three Availability Zones in each Region. Configure an Amazon ElastiCache cluster in front of a global Amazon Aurora MySQL database. Move the WordPress shared files to Amazon FSx with cross-Region synchronization. Configure Amazon CloudFront with the ALB as the origin and a price class that includes the US and Europe.

A

What happens if I type the following command? ec2-run ami-e3a5408a -n 20 -g appserver A. Start twenty instances as members of appserver group. B. Creates 20 rules in the security group named appserver C. Terminate twenty instances as members of appserver group. D. Start 20 security groups

A
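The ec2-run command comes from the retired EC2 CLI; a rough modern equivalent in boto3, using the AMI ID from the question and mapping the flags accordingly:

import boto3

ec2 = boto3.client("ec2")

# -n 20 maps to MinCount/MaxCount; -g appserver maps to the security group.
response = ec2.run_instances(
    ImageId="ami-e3a5408a",
    MinCount=20,
    MaxCount=20,
    SecurityGroups=["appserver"],  # group names; use SecurityGroupIds in a VPC
)
print(len(response["Instances"]))  # 20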

While troubleshooting a backend application for an IoT system with globally dispersed devices, a Solutions Architect observes that stale data is occasionally transmitted to consumer devices. Devices communicate data frequently, and stale data is seldom a cause for concern. However, when a device receives stale data after an update, device functions are disrupted. The global system comprises numerous identical application stacks deployed across various AWS Regions. If a user device leaves its home geographic area, it will always connect to the geographically nearest AWS Region. The same data is accessible in all supported AWS Regions through the use of an Amazon DynamoDB global table. What modification should be performed to prevent disruption of device functions? A. Update the backend to use strongly consistent reads. Update the devices to always write to and read from their home AWS Region. B. Enable strong consistency globally on a DynamoDB global table. Update the backend to use strongly consistent reads. C. Switch the backend data store to Amazon Aurora MySQL with cross-region replicas. Update the backend to always write to the master endpoint. D. Select one AWS Region as a master and perform all writes in that AWS Region only. Update the backend to use strongly consistent reads.

A
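A sketch of the strongly consistent read in option A, with a hypothetical table and key. Note that global table replication between Regions is asynchronous, which is why the devices must also pin their reads and writes to their home Region.

import boto3

table = boto3.resource("dynamodb").Table("DeviceState")  # hypothetical table

# ConsistentRead=True returns a result reflecting all writes acknowledged in
# this Region before the read; it consumes twice the read capacity units.
item = table.get_item(
    Key={"device_id": "sensor-001"},
    ConsistentRead=True,
)["Item"]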

Your business operates a social networking platform that is accessible to people in many countries. You have been tasked with designing a highly available application that makes use of multiple regions to serve the most recently requested content and the latency-sensitive areas of the web site. The application's most latency-sensitive component involves reading user preferences in order to drive website personalisation and ad choices. Apart from operating your application across several regions, which alternative best meets the needs of your application? A. Serve user content from S3 and CloudFront, and use Route53 latency-based routing between ELBs in each region. Retrieve user preferences from a local DynamoDB table in each region and leverage SQS to capture changes to user preferences, with SQS workers propagating updates to each table. B. Use the S3 Copy API to copy recently accessed content to multiple regions and serve user content from S3 and CloudFront, with dynamic content and an ELB in each region. Retrieve user preferences from an ElastiCache cluster in each region and leverage SNS notifications to propagate user preference changes to a worker node in each region. C. Use the S3 Copy API to copy recently accessed content to multiple regions and serve user content from S3, CloudFront, and Route53 latency-based routing between ELBs in each region. Retrieve user preferences from a DynamoDB table and leverage SQS to capture changes to user preferences, with SQS workers propagating DynamoDB updates. D. Serve user content from S3 and CloudFront with dynamic content, and an ELB in each region. Retrieve user preferences from an ElastiCache cluster in each region and leverage Simple Workflow (SWF) to manage the propagation of user preferences from a centralized DB to each ElastiCache cluster.

A

A business wishes to host both a WordPress blog and a Joomla content management system on the same instance in a VPC. The business wants to use Route 53 to create distinct domains for each application. The organization may have around 10 instances of each of these two applications. While launching the instance, the organization configured two distinct network interfaces (primary + secondary ENI), each with its own Elastic IP. The suggestion was to use a public IP from AWS rather than an Elastic IP, since the account's Elastic IP allotment per region is limited. Which course of action would you suggest to the organization? A. Only Elastic IP can be used by requesting limit increase, since AWS does not assign a public IP to an instance with multiple ENIs. B. AWS VPC does not attach a public IP to an ENI; so the only way is to use an Elastic IP. C. I agree with the suggestion but will prefer that the organization should use separate subnets with each ENI for different public IPs. D. I agree with the suggestion and it is recommended to use a public IP from AWS since the organization is going to use DNS with Route 53.

A A Virtual Private Cloud (VPC) is a virtual network dedicated to the user's AWS account. It enables the user to launch AWS resources into a virtual network that the user has defined. An Elastic Network Interface (ENI) is a virtual network interface that the user can attach to an instance in a VPC. The user can attach up to two ENIs to a single instance. However, AWS cannot assign a public IP when there are two ENIs attached to a single instance. It is recommended to assign an Elastic IP in this scenario. If the organization wants more than five EIPs, it can request AWS to increase the limit. Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html
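The recommended pattern, allocating an Elastic IP and attaching it to one of the ENIs, is two API calls; a sketch with a hypothetical ENI ID:

import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP and associate it with a (secondary) ENI.
allocation = ec2.allocate_address(Domain="vpc")
ec2.associate_address(
    AllocationId=allocation["AllocationId"],
    NetworkInterfaceId="eni-0abc123def456789a",  # hypothetical ENI ID
)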

A business intends to run a web application in an AWS VPC. Due to regulatory constraints, the organization does not want to host the database in the public cloud. How should the organization design its architecture in this scenario? A. The organization should plan the app server on the public subnet and database in the organization's data center and connect them with the VPN gateway. B. The organization should plan the app server on the public subnet and use RDS with the private subnet for a secure data operation. C. The organization should use the public subnet for the app server and use RDS with a storage gateway to access as well as sync the data securely from the local data center. D. The organization should plan the app server on the public subnet and database in a private subnet so it will not be in the public cloud.

A A Virtual Private Cloud (VPC) is a virtual network dedicated to the user's AWS account. The user can create subnets as required within a VPC. If the user wants to connect to the VPC from their own data center, they can set up a public subnet and a VPN-only subnet that uses hardware VPN access to connect with the data center. When the user configures this setup with the wizard, it creates a virtual private gateway to route all the traffic of the VPN subnet. If the virtual private gateway is attached to the VPC and the user deletes the VPC from the console, it will first automatically detach the gateway and only then delete the VPC. Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html

The IAM policy of an organization (account ID 123412341234) has been set to enable the user to manage his own credentials. What action would the user be able to take as a result of the following statement?
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": [
      "iam:AddUserToGroup",
      "iam:RemoveUserFromGroup",
      "iam:GetGroup"
    ],
    "Resource": "arn:aws:iam::123412341234:group/TestingGroup"
  }]
}
A. Allow the IAM user to update the membership of the group called TestingGroup B. The IAM policy will throw an error due to an invalid resource name C. The IAM policy will allow the user to subscribe to any IAM group D. Allow the IAM user to delete the TestingGroup

A AWS Identity and Access Management is a web service which allows organizations to manage users and user permissions for various AWS services. If the organization (account ID 123412341234) wants its users to manage their subscription to the groups, it should create a relevant policy for that. The below-mentioned policy allows the respective IAM user to update the membership of the group called TestingGroup:
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": [
      "iam:AddUserToGroup",
      "iam:RemoveUserFromGroup",
      "iam:GetGroup"
    ],
    "Resource": "arn:aws:iam::123412341234:group/TestingGroup"
  }]
}
Reference: http://docs.aws.amazon.com/IAM/latest/UserGuide/Credentials-Permissions-examples.html#creds-policies-credentials
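The three actions the policy grants correspond to these IAM calls (the user name is hypothetical); anything else, such as DeleteGroup, would be denied:

import boto3

iam = boto3.client("iam")

# Permitted by the policy above: manage membership of TestingGroup only.
iam.add_user_to_group(GroupName="TestingGroup", UserName="alice")
print(iam.get_group(GroupName="TestingGroup")["Users"])
iam.remove_user_from_group(GroupName="TestingGroup", UserName="alice")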

An organization has an application that can schedule the start and stop of an EC2 instance. The organization needs the instance's MAC address in order to register it with its software. The instance is launched in EC2-Classic. How can the organization update the MAC registration every time the instance is booted? A. The organization should write a bootstrapping script which will get the MAC address from the instance metadata and use that script to register with the application. B. The organization should provide a MAC address as a part of the user data. Thus, whenever the instance is booted the script assigns the fixed MAC address to that instance. C. The instance MAC address never changes. Thus, it is not required to register the MAC address every time. D. AWS never provides a MAC address to an instance; instead the instance ID is used for identifying the instance for any software registration.

A AWS provides an on-demand, scalable infrastructure. AWS EC2 allows the user to launch On-Demand instances. AWS does not provide a fixed MAC address to instances launched in EC2-Classic. If the instance is launched as part of EC2-VPC, it can have an ENI which can have a fixed MAC. However, with EC2-Classic, every time the instance is started or stopped it will have a new MAC address. To get this MAC, the organization can run a script on boot which fetches the instance metadata and gets the MAC address from it. Once the MAC is retrieved, the organization can register it with the software. Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AESDG-chapter-instancedata.html
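A sketch of such a boot script; the metadata endpoint is real (IMDSv1 shown, IMDSv2 adds a session-token step), while the registration call is a hypothetical stand-in for the company's software.

import urllib.request

# Instance metadata is served inside the instance at this link-local address.
METADATA_URL = "http://169.254.169.254/latest/meta-data/mac"

def get_mac() -> str:
    with urllib.request.urlopen(METADATA_URL, timeout=2) as response:
        return response.read().decode()

mac = get_mac()
# register_with_application(mac)  # hypothetical registration call
print(mac)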

True or False: In Amazon ElastiCache, you may set up cache clusters that are part of a VPC using Cache Security Groups. A. FALSE B. TRUE C. True, this is applicable only to cache clusters that are running in an Amazon VPC environment. D. True, but only when you configure the cache clusters using the Cache Security Groups from the console navigation pane.

A Amazon ElastiCache cache security groups are only applicable to cache clusters that are not running in an Amazon Virtual Private Cloud environment (VPC). If you are running in an Amazon Virtual Private Cloud, Cache Security Groups is not available in the console navigation pane. Reference: http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/CacheSecurityGroup.html

A 3-tier e-commerce web application is currently deployed on-premises and will be migrated to AWS in order to increase scalability and elasticity. The web tier currently shares read-only data using a network distributed file system. The app server tier uses an IP multicast-based clustering mechanism for discovery and shared session state. The database tier scales by using shared-storage clustering and multiple read slaves. Weekly off-site tape backups of all servers and the distributed file system directory are performed. Which AWS storage and database architecture best satisfies the application's requirements? A. Web servers: store read-only data in S3, and copy from S3 to root volume at boot time. App servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with multi-AZ deployment and one or more read replicas. Backup: web servers, app servers, and database backed up weekly to Glacier using snapshots. B. Web servers: store read-only data in an EC2 NFS server, mount to each web server at boot time. App servers: share state using a combination of DynamoDB and IP multicast. Database: use RDS with Multi-AZ deployment and one or more Read Replicas. Backup: web and app servers backed up weekly via AMIs, database backed up via DB snapshots. C. Web servers: store read-only data in S3, and copy from S3 to root volume at boot time. App servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with multi-AZ deployment and one or more Read Replicas. Backup: web and app servers backed up weekly via AMIs, database backed up via DB snapshots. D. Web servers: store read-only data in S3, and copy from S3 to root volume at boot time App servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with multi-AZ deployment. Backup: web and app servers backed up weekly via AMIs, database backed up via DB snapshots.

A Amazon Glacier doesn't suit all storage situations. Listed following are a few storage needs for which you should consider other AWS storage options instead of Amazon Glacier. Data that must be updated very frequently might be better served by a storage solution with lower read/write latencies, such as Amazon EBS, Amazon RDS, Amazon DynamoDB, or relational databases running on EC2. Reference: https://d0.awsstatic.com/whitepapers/Storage/AWS%20Storage%20Services%20Whitepaper-v9.pdf

A user has launched an EBS-optimized EC2 instance. Which of the following statements is correct? A. It provides additional dedicated capacity for EBS IO B. The attached EBS will have greater storage capacity C. The user will have a PIOPS based EBS volume D. It will be launched on dedicated hardware in VPC

A An Amazon EBS-optimized instance uses an optimized configuration stack and provides additional, dedicated capacity for the Amazon EBS I/O. This optimization provides the best performance for the user's Amazon EBS volumes by minimizing contention between the Amazon EBS I/O and other traffic from the user's instance. Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html
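Requesting an EBS-optimized instance is a single launch flag; a sketch with a hypothetical AMI and instance type:

import boto3

ec2 = boto3.client("ec2")

# EbsOptimized=True requests dedicated throughput between the instance and
# its EBS volumes (many current instance types enable this by default).
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI
    InstanceType="m4.large",
    MinCount=1,
    MaxCount=1,
    EbsOptimized=True,
)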

A firm has created an application that enables a more intelligent purchasing experience. The firm needs to demonstrate the application to various stakeholders who may not have access to the on-premises version, so it opts to host a demo version on AWS. As a result, it will need a fixed Elastic IP address that is automatically assigned to the instance on start. Which of the following options does not help in automatically assigning the Elastic IP address in this scenario? A. Write a script which will fetch the instance metadata on system boot and assign the public IP using that metadata. B. Provide an elastic IP in the user data and setup a bootstrapping script which will fetch that elastic IP and assign it to the instance. C. Create a controlling application which launches the instance and assigns the elastic IP based on the parameter provided when that instance is booted. D. Launch instance with VPC and assign an elastic IP to the primary network interface.

A EC2 allows the user to launch On-Demand instances. If the organization is using an application temporarily, only for demo purposes, the best ways to assign an Elastic IP are:
- Launch an instance with a VPC and assign an EIP to the primary network interface. This way, on every instance start it will have the same IP.
- Create a bootstrapping script and provide it some metadata, such as user data, which can be used to assign an EIP.
- Create a controller instance which can schedule the start and stop of the instance and provide an EIP as a parameter, so that the controller instance can check the instance boot and assign an EIP.
The instance metadata gives the current instance data, such as the public/private IP. It is of no use for assigning an EIP.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AESDG-chapter-instancedata.html

Which of the following statements is NOT true regarding a stack that has been created in a Virtual Private Cloud (VPC) in AWS OpsWorks? A. Subnets whose instances cannot communicate with the Internet are referred to as public subnets. B. Subnets whose instances can communicate only with other instances in the VPC and cannot communicate directly with the Internet are referred to as private subnets. C. All instances in the stack should have access to any package repositories that your operating system depends on, such as the Amazon Linux or Ubuntu Linux repositories. D. Your app and custom cookbook repositories should be accessible for all instances in the stack.

A In AWS OpsWorks, you can control user access to a stack's instances by creating the stack in a virtual private cloud (VPC). For example, you might not want users to have direct access to your stack's app servers or databases and instead require that all public traffic be channeled through an Elastic Load Balancer. A VPC consists of one or more subnets, each of which contains one or more instances. Each subnet has an associated routing table that directs outbound traffic based on its destination IP address. Instances within a VPC can generally communicate with each other, regardless of their subnet. Subnets whose instances can communicate with the Internet are referred to as public subnets. Subnets whose instances can communicate only with other instances in the VPC and cannot communicate directly with the Internet are referred to as private subnets. AWS OpsWorks requires the VPC to be configured so that every instance in the stack, including instances in private subnets, has access to the following endpoints:
- The AWS OpsWorks service, https://opsworks-instance-service.us-east-1.amazonaws.com
- Amazon S3
- The package repositories for Amazon Linux or Ubuntu 12.04 LTS, depending on which operating system you specify
- Your app and custom cookbook repositories
Reference: http://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-vpc.html#workingstacks-vpc-basics

How can multiple compute resources be used in the same pipeline in AWS Data Pipeline? A. You can use multiple compute resources on the same pipeline by defining multiple cluster objects in your definition file and associating the cluster to use for each activity via its runsOn field. B. You can use multiple compute resources on the same pipeline by defining multiple cluster definition files C. You can use multiple compute resources on the same pipeline by defining multiple clusters for your activity. D. You cannot use multiple compute resources on the same pipeline.

A Multiple compute resources can be used on the same pipeline in AWS Data Pipeline by defining multiple cluster objects in your definition file and associating the cluster to use for each activity via its runsOn field. This allows pipelines to combine AWS and on-premises resources, or to use a mix of instance types for their activities. Reference: https://aws.amazon.com/datapipeline/faqs/
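A simplified sketch of what such a definition looks like when pushed through boto3; the object names and fields are illustrative, not a complete pipeline definition.

import boto3

datapipeline = boto3.client("datapipeline")

# Two EmrCluster resources; each activity selects its cluster via runsOn.
pipeline_objects = [
    {"id": "SmallCluster", "name": "SmallCluster", "fields": [
        {"key": "type", "stringValue": "EmrCluster"},
        {"key": "coreInstanceType", "stringValue": "m5.large"}]},
    {"id": "LargeCluster", "name": "LargeCluster", "fields": [
        {"key": "type", "stringValue": "EmrCluster"},
        {"key": "coreInstanceType", "stringValue": "m5.4xlarge"}]},
    {"id": "LightActivity", "name": "LightActivity", "fields": [
        {"key": "type", "stringValue": "EmrActivity"},
        {"key": "runsOn", "refValue": "SmallCluster"}]},
    {"id": "HeavyActivity", "name": "HeavyActivity", "fields": [
        {"key": "type", "stringValue": "EmrActivity"},
        {"key": "runsOn", "refValue": "LargeCluster"}]},
]

pipeline_id = datapipeline.create_pipeline(
    name="mixed-clusters", uniqueId="mixed-clusters-1"
)["pipelineId"]
datapipeline.put_pipeline_definition(
    pipelineId=pipeline_id, pipelineObjects=pipeline_objects
)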

A travel firm developed a web application that sends email alerts to customers through Amazon Simple Email Service (Amazon SES). The firm must enable logging to help troubleshoot email delivery issues. Additionally, the firm needs the ability to search by recipient, subject, and time sent. Which actions should a solutions architect take in combination to achieve these requirements? (Select two.) A. Create an Amazon SES configuration set with Amazon Kinesis Data Firehose as the destination. Choose to send logs to an Amazon S3 bucket. B. Enable AWS CloudTrail logging. Specify an Amazon S3 bucket as the destination for the logs. C. Use Amazon Athena to query the logs in the Amazon S3 bucket for recipient, subject, and time sent. D. Create an Amazon CloudWatch log group. Configure Amazon SES to send logs to the log group. E. Use Amazon Athena to query the logs in Amazon CloudWatch for recipient, subject, and time sent.

A Reference: https://docs.aws.amazon.com/ses/latest/DeveloperGuide/ses-dg.pdf
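A sketch of the configuration-set wiring behind option A; the set name, role ARN, and delivery stream ARN are hypothetical, and the Firehose stream would be configured to deliver to the S3 bucket that Athena queries.

import boto3

ses = boto3.client("ses")

ses.create_configuration_set(ConfigurationSet={"Name": "email-logs"})
ses.create_configuration_set_event_destination(
    ConfigurationSetName="email-logs",
    EventDestination={
        "Name": "firehose-to-s3",
        "Enabled": True,
        "MatchingEventTypes": ["send", "delivery", "bounce", "complaint"],
        "KinesisFirehoseDestination": {
            "IAMRoleARN": "arn:aws:iam::111122223333:role/ses-firehose-role",
            "DeliveryStreamARN": "arn:aws:firehose:us-east-1:111122223333:deliverystream/ses-logs",
        },
    },
)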

A vehicle rental firm created a serverless REST API to provide data to its mobile application. The application comprises an Amazon API Gateway API with a Regional endpoint, AWS Lambda functions, and an Amazon Aurora MySQL Serverless DB cluster. The firm recently opened its API to third-party mobile applications. As a consequence, the number of requests increased significantly, resulting in sporadic database memory errors. An analysis of the API traffic indicates that clients are making multiple HTTP GET requests for the same query in a short period of time. The majority of traffic occurs during business hours, with spikes during holidays and other special events. The company must improve its ability to support the additional usage while minimizing the increase in the solution's cost. Which approach satisfies these criteria? A. Convert the API Gateway Regional endpoint to an edge-optimized endpoint. Enable caching in the production stage. B. Implement an Amazon ElastiCache for Redis cache to store the results of the database calls. Modify the Lambda functions to use the cache. C. Modify the Aurora Serverless DB cluster configuration to increase the maximum amount of available memory. D. Enable throttling in the API Gateway production stage. Set the rate and burst values to limit the incoming calls.

A Reference: https://aws.amazon.com/getting-started/projects/build-serverless-web-app-lambda-apigateway-s3-dynamodb-cognito/module-4/
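Stage caching (the second half of option A, equally applicable to a Regional endpoint) is a stage-level patch; a sketch with a hypothetical API ID, stage, and cache settings:

import boto3

apigateway = boto3.client("apigateway")

apigateway.update_stage(
    restApiId="a1b2c3d4e5",   # hypothetical REST API ID
    stageName="prod",
    patchOperations=[
        {"op": "replace", "path": "/cacheClusterEnabled", "value": "true"},
        {"op": "replace", "path": "/cacheClusterSize", "value": "0.5"},  # GB
        # Cache TTL for all methods, in seconds:
        {"op": "replace", "path": "/*/*/caching/ttlInSeconds", "value": "300"},
    ],
)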

A financial institution must open a new AWS account specifically for the purpose of developing a new digital wallet application. The firm manages its accounts using AWS Organizations. A solutions architect uses the master account's IAM user Support1 to establish a new member account with the email address [email protected]. What should the solutions architect do to populate the new member account with IAM users? A. Sign in to the AWS Management Console with AWS account root user credentials by using the 64-character password from the initial AWS Organizations email sent to [email protected]. Set up the IAM users as required. B. From the master account, switch roles to assume the OrganizationAccountAccessRole role with the account ID of the new member account. Set up the IAM users as required. C. Go to the AWS Management Console sign-in page. Choose "Sign in using root account credentials." Sign in by using the email address [email protected] and the master account's root password. Set up the IAM users as required. D. Go to the AWS Management Console sign-in page. Sign in by using the account ID of the new member account and the Support1 IAM credentials. Set up the IAM users as required.

A Reference: https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_create.html
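For reference, the role switch described in option B can also be performed programmatically; a minimal boto3 sketch, with the member-account ID and names as placeholders:

import boto3

sts = boto3.client("sts")

# Assume the role that AWS Organizations creates automatically in every
# member account it provisions.
creds = sts.assume_role(
    RoleArn="arn:aws:iam::999999999999:role/OrganizationAccountAccessRole",
    RoleSessionName="bootstrap-iam-users",
)["Credentials"]

# Use the temporary credentials to create IAM users in the member account.
iam = boto3.client(
    "iam",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
iam.create_user(UserName="wallet-developer-1")  # hypothetical user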

A business offers its consumers AWS solutions using AWS CloudFormation templates. Users may access the templates via their accounts to request various solutions. The customers wish to optimize their solution deployment approach while keeping the flexibility to do the following: ✑ Customize a solution for their individual deployments by adding their own features. ✑ Conduct unit tests on their modifications. ✑ Configure features for their deployments by turning them on and off. ✑ Automatically update in response to code changes. ✑ Conduct security scans on their deployments. Which tactics should the Solution Architect use to ensure that the criteria are met? A. Allow users to download solution code as Docker images. Use AWS CodeBuild and AWS CodePipeline for the CI/CD pipeline. Use Docker images for different solution features and the AWS CLI to turn features on and off. Use AWS CodeDeploy to run unit tests and security scans, and for deploying and updating a solution with changes. B. Allow users to download solution code artifacts. Use AWS CodeCommit and AWS CodePipeline for the CI/CD pipeline. Use AWS Amplify plugins for different solution features and user prompts to turn features on and off. Use AWS Lambda to run unit tests and security scans, and AWS CodeBuild for deploying and updating a solution with changes. C. Allow users to download solution code artifacts in their Amazon S3 buckets. Use Amazon S3 and AWS CodePipeline for the CI/CD pipelines. Use CloudFormation StackSets for different solution features and to turn features on and off. Use AWS Lambda to run unit tests and security scans, and CloudFormation for deploying and updating a solution with changes. D. Allow users to download solution code artifacts. Use AWS CodeCommit and AWS CodePipeline for the CI/CD pipeline. Use the AWS Cloud Development Kit constructs for different solution features, and use the manifest file to turn features on and off. Use AWS CodeBuild to run unit tests and security scans, and for deploying and updating a solution with changes.

A Reference: https://www.slideshare.net/AmazonWebServices/cicd-for-containers-a-way-forward-for-your-devops-pipeline

An organization has hosted an application on EC2 instances. Multiple users will connect to the instances to set up and configure the application. The company intends to apply a number of industry-recognized security best practices. Which of the following points will not assist the company in achieving a more secure environment? A. Allow only IAM users to connect with the EC2 instances with their own secret access key. B. Create a procedure to revoke the access rights of the individual user when they are not required to connect to the EC2 instance anymore for the purpose of application configuration. C. Apply the latest patch of the OS and always keep it updated. D. Disable password-based login for all the users. All the users should use their own keys to connect with the instance securely.

A Since AWS is a public cloud, any application hosted on EC2 is prone to hacker attacks, so it is extremely important for a user to set up a proper security mechanism on the EC2 instances. A few of the security measures are listed below: always keep the OS updated with the latest patches; create separate OS users for anyone who needs to connect to the EC2 instances, create keys for them, and disable their passwords; create a procedure by which the admin can revoke a user's access once the business work on the EC2 instance is completed; lock down unnecessary ports; audit any proprietary applications that may be running on the EC2 instance; and provide temporary escalated privileges, such as sudo, for users who need to perform occasional privileged tasks. IAM is useful when users are required to work with AWS resources and actions, such as launching an instance; it is not used to connect (RDP/SSH) to an instance. Reference: http://aws.amazon.com/articles/1233/

An enterprise is establishing a virtual private cloud (VPC) to host its application. The organization has created two private subnets in the same AZ and one subnet in a different zone. The business wishes to build a high-availability system using an internal ELB. Which of the following statements about an internal ELB is true in this scenario? A. ELB can support only one subnet in each availability zone. B. ELB does not allow subnet selection; instead it will automatically select all the available subnets of the VPC. C. If the user is creating an internal ELB, he should use only private subnets. D. ELB can support all the subnets irrespective of their zones.

A The Amazon Virtual Private Cloud (Amazon VPC) allows the user to define a virtual networking environment in a private, isolated section of the Amazon Web Services (AWS) cloud. The user has complete control over the virtual networking environment. Within this virtual private cloud, the user can launch AWS resources, such as an ELB and EC2 instances. There are two ELBs available with VPC: internet-facing and internal (private) ELB. For internal servers, such as app servers, the organization can create an internal load balancer in its VPC and then place back-end application instances behind it. The internal load balancer will route requests to the back-end application instances, which also use private IP addresses and only accept requests from the internal load balancer. The internal ELB supports only one subnet in each AZ and asks the user to select a subnet while configuring the internal ELB. Reference: http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/USVPC_creating_basic_lb.html
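A hedged boto3 sketch of creating such an internal Classic Load Balancer; the name, subnet, and security group values are placeholders. Note that only one subnet per AZ can be supplied:

import boto3

elb = boto3.client("elb")

# An internal Classic Load Balancer accepts at most one subnet per AZ,
# so two private subnets in the same AZ cannot both be attached.
elb.create_load_balancer(
    LoadBalancerName="internal-app-elb",
    Scheme="internal",  # private IP addresses only
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],  # one private subnet per AZ
    SecurityGroups=["sg-0123456789abcdef0"],
    Listeners=[{"Protocol": "HTTP", "LoadBalancerPort": 80,
                "InstanceProtocol": "HTTP", "InstancePort": 80}],
)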

What is a conceivable reason you might need to edit the claims issued in a SAML token? A. The NameIdentifier claim cannot be the same as the username stored in AD. B. Authentication fails consistently. C. The NameIdentifier claim cannot be the same as the claim URI. D. The NameIdentifier claim must be the same as the username stored in AD.

A The two reasons you would need to edit claims issued in a SAML token are: the NameIdentifier claim cannot be the same as the username stored in AD, and the app requires a different set of claim URIs. Reference: https://azure.microsoft.com/en-us/documentation/articles/active-directory-saml-claims-customization/

A user has suspended the Auto Scaling group's scaling processes. A scaling activity to increase the number of instances was already underway. How will the suspension affect that activity? A. No effect. The scaling activity continues B. Pauses the instance launch and launches it only after Auto Scaling is resumed C. Terminates the instance D. Stops the instance temporarily

A The user may want to stop the automated scaling processes on the Auto Scaling groups either to perform manual operations or during emergency situations. To perform this, the user can suspend one or more scaling processes at any time. When this process is suspended, Auto Scaling creates no new scaling activities for that group. Scaling activities that were already in progress before the group was suspended continue until completed. Reference: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/AS_Concepts.html
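For reference, suspending and resuming scaling processes is a single boto3 call each; the group name and process list below are placeholders:

import boto3

autoscaling = boto3.client("autoscaling")

# Suspend scaling; activities already in progress still run to completion.
autoscaling.suspend_processes(
    AutoScalingGroupName="web-asg",
    ScalingProcesses=["Launch", "Terminate"],  # omit the list to suspend all processes
)

# ... perform manual operations or handle the emergency ...

autoscaling.resume_processes(AutoScalingGroupName="web-asg")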

A user is attempting to use the PutMetricData API to submit custom metrics to CloudWatch. Which of the following should the user consider before sending data to CloudWatch? A. The size of a request is limited to 8KB for HTTP GET requests and 40KB for HTTP POST requests B. The size of a request is limited to 16KB for HTTP GET requests and 80KB for HTTP POST requests C. The size of a request is limited to 128KB for HTTP GET requests and 64KB for HTTP POST requests D. The size of a request is limited to 40KB for HTTP GET requests and 8KB for HTTP POST requests

A With AWS CloudWatch, the user can publish data points for a metric that share not only the same time stamp, but also the same namespace and dimensions. CloudWatch can accept multiple data points in the same PutMetricData call with the same time stamp. The only thing that the user needs to take care of is that the size of a PutMetricData request is limited to 8KB for HTTP GET requests and 40KB for HTTP POST requests. Reference: http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/cloudwatch_concepts.html
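A minimal boto3 sketch of a PutMetricData call; the namespace, metric name, and dimensions are hypothetical. Several data points can share one call as long as the request stays under the size limits above:

import boto3
from datetime import datetime, timezone

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_data(
    Namespace="MyApp",  # hypothetical custom namespace
    MetricData=[{
        "MetricName": "QueueDepth",
        "Dimensions": [{"Name": "Environment", "Value": "prod"}],
        "Timestamp": datetime.now(timezone.utc),
        "Value": 42.0,
        "Unit": "Count",
    }],
)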

A user has launched two EBS-backed EC2 instances in the us-east-1a Availability Zone. The user wishes to change the zone of one of the instances. How can the user do so? A. It is not possible to change the zone of an instance after it is launched B. From the AWS EC2 console, select Actions -> Change zones and specify the new zone C. The zone can only be modified using the AWS CLI D. Stop one of the instances and change the availability zone

A With AWS EC2, when a user is launching an instance he can select the availability zone (AZ) at the time of launch. If the zone is not selected, AWS selects it on behalf of the user. Once the instance is launched, the user cannot change the zone of that instance unless he creates an AMI of that instance and launches a new instance from it. Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html
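The AMI workaround mentioned above looks roughly like this in boto3; the instance ID, AMI name, instance type, and target zone are placeholders:

import boto3

ec2 = boto3.client("ec2")

# Create an AMI from the running instance ...
image_id = ec2.create_image(
    InstanceId="i-0123456789abcdef0",
    Name="web-server-copy",
)["ImageId"]
ec2.get_waiter("image_available").wait(ImageIds=[image_id])

# ... then launch a replacement instance from that AMI in the desired AZ.
ec2.run_instances(
    ImageId=image_id,
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    Placement={"AvailabilityZone": "us-east-1b"},  # the new zone
)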

A business is deploying its infrastructure using AWS CloudFormation. The organization is worried that deleting a production CloudFormation stack would result in the deletion of critical data housed in Amazon RDS databases or Amazon EBS volumes. How can the organization prevent users from deleting data inadvertently in this manner? A. Modify the CloudFormation templates to add a DeletionPolicy attribute to RDS and EBS resources. B. Configure a stack policy that disallows the deletion of RDS and EBS resources. C. Modify IAM policies to deny deleting RDS and EBS resources that are tagged with an ג€aws:cloudformation:stack-nameג €tag. D. Use AWS Config rules to prevent deleting RDS and EBS resources.

A With the DeletionPolicy attribute you can preserve or (in some cases) backup a resource when its stack is deleted. You specify a DeletionPolicy attribute for each resource that you want to control. If a resource has no DeletionPolicy attribute, AWS CloudFormation deletes the resource by default. To keep a resource when its stack is deleted, specify Retain for that resource. You can use retain for any resource. For example, you can retain a nested stack, Amazon S3 bucket, or EC2 instance so that you can continue to use or modify those resources after you delete their stacks. Reference: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-deletionpolicy.html
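A stripped-down, hypothetical template fragment (embedded in a boto3 call) showing DeletionPolicy on RDS and EBS resources; all names and property values are placeholders:

import boto3

TEMPLATE = """
{
  "Resources": {
    "AppDatabase": {
      "Type": "AWS::RDS::DBInstance",
      "DeletionPolicy": "Snapshot",
      "Properties": {
        "DBInstanceClass": "db.t3.medium",
        "Engine": "mysql",
        "AllocatedStorage": "100",
        "MasterUsername": "admin",
        "MasterUserPassword": "{{resolve:ssm-secure:db-password:1}}"
      }
    },
    "DataVolume": {
      "Type": "AWS::EC2::Volume",
      "DeletionPolicy": "Retain",
      "Properties": {"AvailabilityZone": "us-east-1a", "Size": 100}
    }
  }
}
"""

# Deleting this stack snapshots the database and leaves the volume intact.
boto3.client("cloudformation").create_stack(
    StackName="prod-data-stack", TemplateBody=TEMPLATE
)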

A business must be able to persist small data records (up to 1 KiB) cost-effectively for up to 30 days. The data is accessed infrequently, and a 5-minute delay is acceptable when reading the data. Which of the following options accomplishes this objective? (Select two.) A. Use Amazon S3 to collect multiple records in one S3 object. Use a lifecycle configuration to move data to Amazon Glacier immediately after write. Use expedited retrievals when reading the data. B. Write the records to Amazon Kinesis Data Firehose and configure Kinesis Data Firehose to deliver the data to Amazon S3 after 5 minutes. Set an expiration action at 30 days on the S3 bucket. C. Use an AWS Lambda function invoked via Amazon API Gateway to collect data for 5 minutes. Write data to Amazon S3 just before the Lambda execution stops. D. Write the records to Amazon DynamoDB configured with a Time To Live (TTL) of 30 days. Read data using the GetItem or BatchGetItem call. E. Write the records to an Amazon ElastiCache for Redis. Configure the Redis append-only file (AOF) persistence logs to write to Amazon S3. Recover from the log if the ElastiCache instance has failed.

AB
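Option B's pieces, sketched with boto3: a Firehose stream that buffers records for up to 5 minutes before writing to S3, plus a 30-day expiration rule on the bucket. The names, ARNs, and role are placeholders.

import boto3

firehose = boto3.client("firehose")
s3 = boto3.client("s3")

# Buffer incoming records for up to 5 minutes (300 s) before delivery.
firehose.create_delivery_stream(
    DeliveryStreamName="tiny-records",
    S3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::111122223333:role/firehose-s3-role",
        "BucketARN": "arn:aws:s3:::tiny-records-bucket",
        "BufferingHints": {"SizeInMBs": 5, "IntervalInSeconds": 300},
    },
)

# Expire the stored objects after 30 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="tiny-records-bucket",
    LifecycleConfiguration={"Rules": [{
        "ID": "expire-after-30-days",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},
        "Expiration": {"Days": 30},
    }]},
)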

You have a website that needs worldwide visibility, and you have configured it as follows: it is hosted on 30 Amazon Elastic Compute Cloud (EC2) instances across 15 regions around the world, with two instances in each region, and all instances are hosted in a public zone. Which of the following is the optimal configuration for your site to maintain availability with the least amount of downtime if one of the 15 regions loses network access for a lengthy period of time? (Select two.) A. Create a Route 53 Latency Based Routing Record set that resolves to an Elastic Load Balancer in each region and has the Evaluate Target Health flag set to true. B. Create a Route 53 failover routing policy and configure an active-passive failover. C. Create a Route 53 Failover Routing Policy and assign each resource record set a unique identifier and a relative weight. D. Create a Route 53 Geolocation Routing Policy that resolves to an Elastic Load Balancer in each region and has the Evaluate Target Health flag set to false.

AB It is best to use the latency routing policy when you have resources in multiple Amazon EC2 data centers that perform the same function and you want Amazon Route 53 to respond to DNS queries with the resources that provide the best latency. You could also use the failover routing policy (for public hosted zones only) when you want to configure an active-passive failover, in which one resource takes all traffic when it's available and the other resource takes all traffic when the first resource isn't available. Reference: http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html#routing-policy-latency
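One latency-based alias record per region would be created roughly as follows in boto3; the hosted zone IDs, domain, and ELB DNS name are placeholders:

import boto3

route53 = boto3.client("route53")

# One latency record per region; Route 53 answers each query with the
# lowest-latency healthy endpoint.
route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.example.com",
            "Type": "A",
            "SetIdentifier": "us-east-1",
            "Region": "us-east-1",
            "AliasTarget": {
                "HostedZoneId": "Z35SXDOTRQ7X7K",  # the ELB's canonical zone ID
                "DNSName": "my-elb-1234.us-east-1.elb.amazonaws.com",
                "EvaluateTargetHealth": True,  # fail away from unhealthy regions
            },
        },
    }]},
)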

A business has built a bespoke tool that runs inside a Docker container and is utilized in its workflow. Each time the container code is changed, the organization must execute manual procedures to make the container image accessible for new workflow executions. The organization wants to automate this procedure in order to reduce manual effort and to guarantee that a fresh container image is created whenever the tool code is changed. Which steps should a solutions architect perform in combination to satisfy these requirements? (Select three.) A. Configure an Amazon ECR repository for the tool. Configure an AWS CodeCommit repository containing code for the tool being deployed to the container image in Amazon ECR. B. Configure an AWS CodeDeploy application that triggers an application version update that pulls the latest tool container image from Amazon ECR, updates the container with code from the source AWS CodeCommit repository, and pushes the updated container image to Amazon ECR. C. Configure an AWS CodeBuild project that pulls the latest tool container image from Amazon ECR, updates the container with code from the source AWS CodeCommit repository, and pushes the updated container image to Amazon ECR. D. Configure an AWS CodePipeline pipeline that sources the tool code from the AWS CodeCommit repository and initiates an AWS CodeDeploy application update. E. Configure an Amazon EventBridge rule that triggers on commits to the AWS CodeCommit repository for the tool. Configure the event to trigger an update to the tool container image in Amazon ECR. Push the updated container image to Amazon ECR. F. Configure an AWS CodePipeline pipeline that sources the tool code from the AWS CodeCommit repository and initiates an AWS CodeBuild build.

ACD

A firm is using Amazon CloudFront to host a public-facing worldwide application on AWS. The application interfaces with a third-party system. A solutions architect must guarantee that data is protected both in transit and at rest. Which combination of actions satisfies these criteria? (Select three.) A. Create a public certificate for the required domain in AWS Certificate Manager and deploy it to CloudFront, an Application Load Balancer, and Amazon EC2 instances. B. Acquire a public certificate from a third-party vendor and deploy it to CloudFront, an Application Load Balancer, and Amazon EC2 instances. C. Provision Amazon EBS encrypted volumes using AWS KMS and ensure explicit encryption of data when writing to Amazon EBS. D. Provision Amazon EBS encrypted volumes using AWS KMS. E. Use SSL or encrypt data while communicating with the external system using a VPN. F. Communicate with the external system using plaintext and use the VPN to encrypt the data in transit.

ACE

Over the last several years, AnyCompany has purchased a number of businesses. The CIO of AnyCompany wishes to maintain a separation of resources for each acquired firm. Additionally, the CIO wants to impose a chargeback scheme in which each firm pays for the AWS services it utilizes. The Solutions Architect is responsible for developing an Amazon Web Services architecture that enables AnyCompany to do the following: ✑ Establish a comprehensive chargeback system to guarantee that each business pays for the resources it consumes. ✑ Pay for AWS services on behalf of all of its subsidiaries with a single invoice. ✑ Give each acquired company's developers access to their own company's resources. ✑ Prevent developers at an acquired firm from accessing other companies' resources. ✑ Use a single identity repository across all firms to verify Developers. Which of the following ways would be most appropriate to achieve these criteria? (Select two.) A. Create a multi-account strategy with an account per company. Use consolidated billing to ensure that AnyCompany needs to pay a single bill only. B. Create a multi-account strategy with a virtual private cloud (VPC) for each company. Reduce impact across companies by not creating any VPC peering links. As everything is in a single account, there will be a single invoice. Use tagging to create a detailed bill for each company. C. Create IAM users for each Developer in the account to which they require access. Create policies that allow the users access to all resources in that account. Attach the policies to the IAM user. D. Create a federated identity store against the company's Active Directory. Create IAM roles with appropriate permissions and set the trust relationships with AWS and the identity store. Use AWS STS to grant users access based on the groups they belong to in the identity store. E. Create a multi-account strategy with an account per company. For billing purposes, use a tagging solution that uses a tag to identify the company that creates each resource.

AD

You must plan a web application's migration to AWS. The application is composed of Linux web servers running a bespoke web server. You are required to store the application's logs in a persistent location. Which alternatives are available for migrating the application to AWS? (Select two.) A. Create an AWS Elastic Beanstalk application using the custom web server platform. Specify the web server executable and the application project and source files. Enable log file rotation to Amazon Simple Storage Service (S3). B. Create a Dockerfile for the application. Create an AWS OpsWorks stack consisting of a custom layer. Create custom recipes to install Docker and to deploy your Docker container using the Dockerfile. Create custom recipes to install and configure the application to publish the logs to Amazon CloudWatch Logs. C. Create a Dockerfile for the application. Create an AWS OpsWorks stack consisting of a Docker layer that uses the Dockerfile. Create custom recipes to install and configure Amazon Kinesis to publish the logs into Amazon CloudWatch. D. Create a Dockerfile for the application. Create an AWS Elastic Beanstalk application using the Docker platform and the Dockerfile. Enable logging the Docker configuration to automatically publish the application logs. Enable log file rotation to Amazon S3. E. Use VM Import/Export to import a virtual machine image of the server into AWS as an AMI. Create an Amazon Elastic Compute Cloud (EC2) instance from the AMI, and install and configure the Amazon CloudWatch Logs agent. Create a new AMI from the instance. Create an AWS Elastic Beanstalk application using the AMI platform and the new AMI.

AD

A Solutions Architect is tasked with the responsibility of developing a multi-account framework with ten existing accounts. The design must adhere to the following specifications: ✑ Consolidate all accounts into a single entity. ✑ Allow both the master and secondary accounts to have full access to the Amazon EC2 service. ✑ Reduce the effort associated with adding new secondary accounts. Which combination of steps should the solution include? (Select two.) A. Create an organization from the master account. Send invitations to the secondary accounts from the master account. Accept the invitations and create an OU. B. Create an organization from the master account. Send a join request to the master account from each secondary account. Accept the requests and create an OU. C. Create a VPC peering connection between the master account and the secondary accounts. Accept the request for the VPC peering connection. D. Create a service control policy (SCP) that enables full EC2 access, and attach the policy to the OU. E. Create a full EC2 access policy and map the policy to a role in each account. Trust every other account to assume the role.

AD There is a concept of permission boundaries vs. actual IAM policies, that is, a concept of "Allow" vs. "Grant". In terms of boundaries, we have the following three: 1. SCPs 2. User/role boundaries 3. Session boundaries (e.g., AssumeRole). In terms of actual permission granting, we have the following: 1. Identity policies 2. Resource policies

For its regulated financial services clients, an advising company is developing a secure data analytics solution. Users will submit their raw data to an Amazon S3 bucket for which they will be granted basic PutObject capabilities. Applications operating on an Amazon EMR cluster deployed in a VPC will examine the data. The company stipulates that the environment be completely disconnected from the internet. All data in transit must be encrypted using the firm's keys. Which steps should the Solutions Architect take in combination to satisfy the user's security requirements? (Select two.) A. Launch the Amazon EMR cluster in a private subnet configured to use an AWS KMS CMK for at-rest encryption. Configure a gateway VPC endpoint for Amazon S3 and an interface VPC endpoint for AWS KMS. B. Launch the Amazon EMR cluster in a private subnet configured to use an AWS KMS CMK for at-rest encryption. Configure a gateway VPC endpoint for Amazon S3 and a NAT gateway to access AWS KMS. C. Launch the Amazon EMR cluster in a private subnet configured to use an AWS CloudHSM appliance for at-rest encryption. Configure a gateway VPC endpoint for Amazon S3 and an interface VPC endpoint for CloudHSM. D. Configure the S3 endpoint policies to permit access to the necessary data buckets only. E. Configure the S3 bucket policies to permit access using an aws:sourceVpce condition to match the S3 endpoint ID.

AE
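The two endpoints from option A can be created as follows (boto3 sketch); the VPC, subnet, route table, and security group IDs are placeholders:

import boto3

ec2 = boto3.client("ec2")

# Gateway endpoint for S3: keeps bucket traffic on the AWS network.
ec2.create_vpc_endpoint(
    VpcId="vpc-0abc1234",
    ServiceName="com.amazonaws.us-east-1.s3",
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-0abc1234"],
)

# Interface endpoint for KMS: lets the EMR cluster reach KMS without a
# NAT gateway or internet gateway.
ec2.create_vpc_endpoint(
    VpcId="vpc-0abc1234",
    ServiceName="com.amazonaws.us-east-1.kms",
    VpcEndpointType="Interface",
    SubnetIds=["subnet-0abc1234"],
    SecurityGroupIds=["sg-0abc1234"],
    PrivateDnsEnabled=True,
)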

A Solutions Architect is assisting a business that is particularly concerned about its IT expenditures and seeks to establish controls that will result in predictable monthly AWS spending. Which combination of procedures will assist the organization in controlling and monitoring its monthly AWS consumption in order to reach the lowest feasible cost? (Select three.) A. Implement an IAM policy that requires users to specify a "workload" tag for cost allocation when launching Amazon EC2 instances. B. Contact AWS Support and ask that they apply limits to the account so that users are not able to launch more than a certain number of instance types. C. Purchase all upfront Reserved Instances that cover 100% of the account's expected Amazon EC2 usage. D. Place conditions in the users' IAM policies that limit the number of instances they are able to launch. E. Define "workload" as a cost allocation tag in the AWS Billing and Cost Management console. F. Set up AWS Budgets to alert and notify when a given workload is expected to exceed a defined cost.

AEF

A business is considering moving an application from on-premises to AWS. The program presently utilizes an Oracle database, and the organization is willing to accept a temporary outage of one hour during the infrastructure changeover. The database engine will be switched to MySQL as part of the migration. A Solutions Architect must assess which AWS services can be leveraged to accomplish the migration with the least amount of effort and time. Which of the following meets the criteria? A. Use AWS SCT to generate the schema scripts and apply them on the target prior to migration. Use AWS DMS to analyze the current schema and provide a recommendation for the optimal database engine. Then, use AWS DMS to migrate to the recommended engine. Use AWS SCT to identify what embedded SQL code in the application can be converted and what has to be done manually. B. Use AWS SCT to generate the schema scripts and apply them on the target prior to migration. Use AWS DMS to begin moving data from the on-premises database to AWS. After the initial copy, continue to use AWS DMS to keep the databases in sync until cutting over to the new database. Use AWS SCT to identify what embedded SQL code in the application can be converted and what has to be done manually. C. Use AWS DMS to help identify the best target deployment between installing the database engine on Amazon EC2 directly or moving to Amazon RDS. Then, use AWS DMS to migrate to the platform. Use AWS Application Discovery Service to identify what embedded SQL code in the application can be converted and what has to be done manually. D. Use AWS DMS to begin moving data from the on-premises database to AWS. After the initial copy, continue to use AWS DMS to keep the databases in sync until cutting over to the new database. Use AWS Application Discovery Service to identify what embedded SQL code in the application can be converted and what has to be done manually.

B
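The full-load-plus-CDC task at the heart of option B looks roughly like this in boto3; all ARNs and the identifier are placeholders:

import boto3

dms = boto3.client("dms")

# "full-load-and-cdc" performs the initial copy and then keeps the target
# in sync until the one-hour cutover window.
dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-aurora-mysql",
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:SRC",
    TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:TGT",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:INST",
    MigrationType="full-load-and-cdc",
    TableMappings='{"rules":[{"rule-type":"selection","rule-id":"1",'
                  '"rule-name":"1","object-locator":{"schema-name":"%",'
                  '"table-name":"%"},"rule-action":"include"}]}',
)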

A business is developing a REST API to facilitate information sharing with six of its US-based partners. The firm has developed a regional endpoint for the Amazon API Gateway. Each of the six partners will make a daily call to the API to update daily sales numbers. Following the initial deployment, the organization notices 1,000 requests per second from 500 unique IP addresses worldwide. The firm believes this traffic is being generated by a botnet and wants to safeguard its API at the lowest possible cost. Which method should the business adopt to secure its API? A. Create an Amazon CloudFront distribution with the API as the origin. Create an AWS WAF web ACL with a rule to block clients that submit more than five requests per day. Associate the web ACL with the CloudFront distribution. Configure CloudFront with an origin access identity (OAI) and associate it with the distribution. Configure API Gateway to ensure only the OAI can run the POST method. B. Create an Amazon CloudFront distribution with the API as the origin. Create an AWS WAF web ACL with a rule to block clients that submit more than five requests per day. Associate the web ACL with the CloudFront distribution. Add a custom header to the CloudFront distribution populated with an API key. Configure the API to require an API key on the POST method. C. Create an AWS WAF web ACL with a rule to allow access to the IP addresses used by the six partners. Associate the web ACL with the API. Create a resource policy with a request limit and associate it with the API. Configure the API to require an API key on the POST method. D. Create an AWS WAF web ACL with a rule to allow access to the IP addresses used by the six partners. Associate the web ACL with the API. Create a usage plan with a request limit and associate it with the API. Create an API key and add it to the usage plan.

B
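API keys (mentioned in options B, C, and D) are enforced through usage plans; a hedged boto3 sketch, with the API ID, stage, and limits as hypothetical values:

import boto3

apigw = boto3.client("apigateway")

# A usage plan throttles and meters only the keyed partners; requests
# without a valid key are rejected before they reach the backend.
plan = apigw.create_usage_plan(
    name="partner-plan",
    throttle={"rateLimit": 10.0, "burstLimit": 20},
    quota={"limit": 100, "period": "DAY"},
    apiStages=[{"apiId": "a1b2c3d4e5", "stage": "prod"}],
)

key = apigw.create_api_key(name="partner-1", enabled=True)
apigw.create_usage_plan_key(usagePlanId=plan["id"], keyId=key["id"], keyType="API_KEY")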

A business is operating a web application on On-Demand Amazon EC2 instances in Auto Scaling groups that scale dynamically based on specific KPIs. After comprehensive testing, the firm finds that the m5.2xlarge instance size is best for the workload. Application data is stored on db.r4.4xlarge Amazon RDS instances that have been verified to be optimal. The web application's traffic rises at unpredictable times throughout the day. What other cost-cutting measures should the organization adopt to further decrease expenses without jeopardizing the application's reliability? A. Double the instance count in the Auto Scaling groups and reduce the instance size to m5.large. B. Reserve capacity for the RDS database and the minimum number of EC2 instances that are constantly running. C. Reduce the RDS instance size to db.r4.xlarge and add five equivalently sized read replicas to provide reliability. D. Reserve capacity for all EC2 instances and leverage Spot Instance pricing for the RDS database.

B

A business maintains a website that allows people to upload videos. According to company policy, uploaded videos must be screened for prohibited material. A video is uploaded to Amazon S3, and a message containing the video's location is pushed to an Amazon SQS queue. A backend application retrieves the location from Amazon SQS and analyzes the video. The video processing is computationally expensive and occurs intermittently throughout the day. The website scales in response to demand. The video analysis application is limited in the number of instances it can run. Because peak demand occurs during the holidays, the organization must increase the application's instance count during this period. All instances used are Amazon EC2 T2 instances currently running On-Demand. The firm wishes to reduce the cost of the current solution. Which of the following is the MOST cost-effective solution? A. Keep the website on T2 instances. Determine the minimum number of website instances required during off-peak times and use Spot Instances to cover them while using Reserved Instances to cover peak demand. Use Amazon EC2 R4 and Amazon EC2 R5 Reserved Instances in an Auto Scaling group for the video analysis application. B. Keep the website on T2 instances. Determine the minimum number of website instances required during off-peak times and use Reserved Instances to cover them while using On-Demand Instances to cover peak demand. Use Spot Fleet for the video analysis application comprised of Amazon EC2 C4 and Amazon EC2 C5 Spot Instances. C. Migrate the website to AWS Elastic Beanstalk and Amazon EC2 C4 instances. Determine the minimum number of website instances required during off-peak times and use On-Demand Instances to cover them while using Spot capacity to cover peak demand. Use Spot Fleet for the video analysis application comprised of C4 and Amazon EC2 C5 instances. D. Migrate the website to AWS Elastic Beanstalk and Amazon EC2 R4 instances. Determine the minimum number of website instances required during off-peak times and use Reserved Instances to cover them while using On-Demand Instances to cover peak demand. Use Spot Fleet for the video analysis application comprised of R4 and Amazon EC2 R5 instances.

B

A company's data center hosts a Microsoft SQL Server database, and the company intends to move the data to Amazon Aurora MySQL. The organization has already migrated triggers, stored procedures, and other schema objects to Aurora MySQL using the AWS Schema Conversion Tool. The database now includes 1 TB of data and is growing at a rate of less than 1 MB each day. A dedicated 1Gbps AWS Direct Connect link connects the company's data center to AWS. The organization wants to move the data to Aurora MySQL and reconfigure existing applications with minimal disruption. Which option best fulfills the needs of the business? A. Shut down applications over the weekend. Create an AWS DMS replication instance and task to migrate existing data from SQL Server to Aurora MySQL. Perform application testing and migrate the data to the new database endpoint. B. Create an AWS DMS replication instance and task to migrate existing data and ongoing replication from SQL Server to Aurora MySQL. Perform application testing and migrate the data to the new database endpoint. C. Create a database snapshot of SQL Server on Amazon S3. Restore the database snapshot from Amazon S3 to Aurora MySQL. Create an AWS DMS replication instance and task for ongoing replication from SQL Server to Aurora MySQL. Perform application testing and migrate the data to the new database endpoint. D. Create a SQL Server native backup file on Amazon S3. Create an AWS DMS replication instance and task to restore the SQL Server backup file to Aurora MySQL. Create another AWS DMS task for ongoing replication from SQL Server to Aurora MySQL. Perform application testing and migrate the data to the new database endpoint.

B

A firm wants to modify its internal cloud billing approach across all of its business divisions. Currently, the cloud governance team collaborates with the heads of each business unit to exchange information on total cloud expenditure. AWS Organizations is used to manage the company's many AWS accounts for each business unit. The organization's current tagging standard covers the application, environment, and owner. The cloud governance team is looking for a centralized solution that would provide monthly reporting on each business unit's cloud expenditures. Additionally, the solution should give warnings when cloud expenditure surpasses a certain level. Which option meets these needs in the MOST cost-effective manner? A. Configure AWS Budgets in each account and configure budget alerts that are grouped by application, environment, and owner. Add each business unit to an Amazon SNS topic for each alert. Use Cost Explorer in each account to create monthly reports for each business unit. B. Configure AWS Budgets in the organization's master account and configure budget alerts that are grouped by application, environment, and owner. Add each business unit to an Amazon SNS topic for each alert. Use Cost Explorer in the organization's master account to create monthly reports for each business unit. C. Configure AWS Budgets in each account and configure budget alerts that are grouped by application, environment, and owner. Add each business unit to an Amazon SNS topic for each alert. Use the AWS Billing and Cost Management dashboard in each account to create monthly reports for each business unit. D. Enable AWS Cost and Usage Reports in the organization's master account and configure reports grouped by application, environment, and owner. Create an AWS Lambda function that processes AWS Cost and Usage Reports, sends budget alerts, and sends monthly reports to each business unit's email list.

B

A media firm has a programmatically created static web application. The firm utilizes a development process to create HTML content that is then uploaded to an Amazon S3 bucket that is served via Amazon CloudFront. A Build Account is used to house the build pipeline. A Distribution Account is used to store the S3 bucket and CloudFront distribution. The build pipeline uses an IAM role in the Build Account to upload the files to Amazon S3. The S3 bucket policy limits read access on objects to the CloudFront origin access identity (OAI). Attempts to access the application through the CloudFront URL fail with an HTTP 403 Access Denied response during testing. What should a solutions architect recommend to the organization in order to provide access to Amazon S3 objects through CloudFront? A. Modify the S3 upload process in the Build Account to add the bucket-owner-full-control ACL to the objects at upload. B. Create a new cross-account IAM role in the Distribution Account with write access to the S3 bucket. Modify the build pipeline to assume this role to upload the files to the Distribution Account. C. Modify the S3 upload process in the Build Account to set the object owner to the Distribution Account. D. Create a new IAM role in the Distribution Account with read access to the S3 bucket. Configure CloudFront to use this new role as its OAI. Modify the build pipeline to assume this role when uploading files from the Build Account.

B

A news organization maintains a 30-terabyte collection of digital news footage. These videos are archived on tape in an on-premises tape library and accessed using a Media Asset Management (MAM) system. The business intends to use a MAM function to automatically enrich the metadata for these videos and organize them into a searchable library. The business must be able to search for items in the videos based on their appearance, such as objects, landscape, or people's faces. A catalog is provided that comprises the faces of those who appear in the videos, along with a photograph of each individual. The firm wants to migrate these videos to Amazon Web Services (AWS). The organization has a high-speed AWS Direct Connect connection and wishes to migrate the video footage from its present file system to the MAM solution. How can these needs be accomplished with the LEAST amount of ongoing management overhead and with the least amount of interruption to the current system as possible? A. Set up an AWS Storage Gateway, file gateway appliance on-premises. Use the MAM solution to extract the videos from the current archive and push them into the file gateway. Use the catalog of faces to build a collection in Amazon Rekognition. Build an AWS Lambda function that invokes the Rekognition Javascript SDK to have Rekognition pull the video from the Amazon S3 files backing the file gateway, retrieve the required metadata, and push the metadata into the MAM solution. B. Set up an AWS Storage Gateway, tape gateway appliance on-premises. Use the MAM solution to extract the videos from the current archive and push them into the tape gateway. Use the catalog of faces to build a collection in Amazon Rekognition. Build an AWS Lambda function that invokes the Rekognition Javascript SDK to have Amazon Rekognition process the video in the tape gateway, retrieve the required metadata, and push the metadata into the MAM solution. C. Configure a video ingestion stream by using Amazon Kinesis Video Streams. Use the catalog of faces to build a collection in Amazon Rekognition. Stream the videos from the MAM solution into Kinesis Video Streams. Configure Amazon Rekognition to process the streamed videos. Then, use a stream consumer to retrieve the required metadata, and push the metadata into the MAM solution. Configure the stream to store the videos in Amazon S3. D. Set up an Amazon EC2 instance that runs the OpenCV libraries. Copy the videos, images, and face catalog from the on-premises library into an Amazon EBS volume mounted on this EC2 instance. Process the videos to retrieve the required metadata, and push the metadata into the MAM solution, while also copying the video files to an Amazon S3 bucket.

B
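The Rekognition face-collection workflow that options A and B both rely on, sketched with boto3; the bucket, object, and collection names are placeholders, and in practice the video search job is asynchronous:

import boto3

rek = boto3.client("rekognition")

# Build a face collection from the supplied catalog of photos ...
rek.create_collection(CollectionId="news-people")
rek.index_faces(
    CollectionId="news-people",
    Image={"S3Object": {"Bucket": "face-catalog", "Name": "jane-doe.jpg"}},
    ExternalImageId="jane-doe",
)

# ... then search a migrated video in S3 for those faces.
job = rek.start_face_search(
    CollectionId="news-people",
    Video={"S3Object": {"Bucket": "footage-bucket", "Name": "archive/clip-001.mp4"}},
)
results = rek.get_face_search(JobId=job["JobId"])  # poll until JobStatus is SUCCEEDED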

A smartphone application has grown in popularity, with usage increasing from a few hundred to millions of users. Users may take and post photographs of events occurring within a city, as well as rate and recommend them. The data access patterns are unpredictable. The current application is hosted on Amazon EC2 instances behind an Application Load Balancer (ALB). The application is experiencing bottlenecks, and costs are rising rapidly. Which modifications to the application architecture should a solutions architect make in order to save costs and increase performance? A. Create an Amazon CloudFront distribution and place the ALB behind the distribution. Store static content in Amazon S3 in an Infrequent Access storage class. B. Store static content in an Amazon S3 bucket using the Intelligent Tiering storage class. Use an Amazon CloudFront distribution in front of the S3 bucket and the ALB. C. Place AWS Global Accelerator in front of the ALB. Migrate the static content to Amazon EFS, and then run an AWS Lambda function to resize the images during the migration process. D. Move the application code to AWS Fargate containers and swap out the EC2 instances with the Fargate containers.

B
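Storing an upload in the Intelligent-Tiering class is a one-parameter change; a hedged boto3 sketch with placeholder bucket and key names:

import boto3

s3 = boto3.client("s3")

# Intelligent-Tiering shifts each object between access tiers automatically,
# which suits the application's unpredictable access patterns.
with open("img-001.jpg", "rb") as f:
    s3.put_object(
        Bucket="event-photos",
        Key="city-festival/img-001.jpg",
        Body=f,
        StorageClass="INTELLIGENT_TIERING",
    )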

An internal security assessment of a company's AWS resources discovered that numerous Amazon EC2 instances running Microsoft Windows workloads were missing critical operating system patches. A Solutions Architect has been tasked with the responsibility of rectifying current patch shortcomings and developing a procedure to guarantee that future patching needs are rapidly detected and addressed. The Solutions Architect has chosen AWS Systems Manager as the platform. To achieve corporate uptime requirements, it is critical that EC2 instance reboots do not occur concurrently on all Windows workloads. Which process will fully automate the fulfillment of these requirements? A. Add a Patch Group tag with a value of Windows Servers to all existing EC2 instances. Ensure that all Windows EC2 instances are assigned this tag. Associate the AWS-DefaultPatchBaseline to the Windows Servers patch group. Define an AWS Systems Manager maintenance window, conduct patching within it, and associate it with the Windows Servers patch group. Register instances with the maintenance window using associated subnet IDs. Assign the AWS-RunPatchBaseline document as a task within each maintenance window. B. Add a Patch Group tag with a value of Windows Servers to all existing EC2 instances. Ensure that all Windows EC2 instances are assigned this tag. Associate the AWS-WindowsPatchBaseline to the Windows Servers patch group. Create an Amazon CloudWatch Events rule configured to use a cron expression to schedule the execution of patching using the AWS Systems Manager run command. Assign the AWS-RunWindowsPatchBaseline document as a task associated with the Windows Servers patch group. Create an AWS Systems Manager State Manager document to define commands to be executed during patch execution. C. Add a Patch Group tag with a value of either Windows Servers1 or Windows Servers2 to all existing EC2 instances. Ensure that all Windows EC2 instances are assigned this tag. Associate the AWS-DefaultPatchBaseline with both Windows Servers patch groups. Define two nonoverlapping AWS Systems Manager maintenance windows, conduct patching within them, and associate each with a different patch group. Register targets with specific maintenance windows using the Patch Group tags. Assign the AWS-RunPatchBaseline document as a task within each maintenance window. D. Add a Patch Group tag with a value of either Windows Servers1 or Windows Servers2 to all existing EC2 instances. Ensure that all Windows EC2 instances are assigned this tag. Associate the AWS-WindowsPatchBaseline with both Windows Servers patch groups. Define two nonoverlapping AWS Systems Manager maintenance windows, conduct patching within them, and associate each with a different patch group. Assign the AWS-RunWindowsPatchBaseline document as a task within each maintenance window. Create an AWS Systems Manager State Manager document to define commands to be executed during patch execution.

B
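One of the two non-overlapping windows from option C, sketched in boto3 (the second would use a different schedule and target the Windows Servers2 patch group); the schedule, concurrency, and error thresholds are hypothetical:

import boto3

ssm = boto3.client("ssm")

window_id = ssm.create_maintenance_window(
    Name="patch-windows-servers1",
    Schedule="cron(0 4 ? * SUN *)",
    Duration=4,
    Cutoff=1,
    AllowUnassociatedTargets=False,
)["WindowId"]

# Register targets by their Patch Group tag ...
target_id = ssm.register_target_with_maintenance_window(
    WindowId=window_id,
    ResourceType="INSTANCE",
    Targets=[{"Key": "tag:Patch Group", "Values": ["Windows Servers1"]}],
)["WindowTargetId"]

# ... and run AWS-RunPatchBaseline against them inside the window.
ssm.register_task_with_maintenance_window(
    WindowId=window_id,
    Targets=[{"Key": "WindowTargetIds", "Values": [target_id]}],
    TaskType="RUN_COMMAND",
    TaskArn="AWS-RunPatchBaseline",
    MaxConcurrency="25%",
    MaxErrors="5%",
    TaskInvocationParameters={"RunCommand": {"Parameters": {"Operation": ["Install"]}}},
)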

In an on-premises data center, a business runs a two-tier web application. A single server runs the stateful application, which connects to a PostgreSQL database hosted on a separate server. Due to the application's anticipated rapid growth, the organization is migrating the application and database to AWS. Amazon Aurora PostgreSQL, Amazon EC2 Auto Scaling, and Elastic Load Balancing will be used in the solution. Which option will provide a consistent user experience while allowing for scalability at the application and database tiers? A. Enable Aurora Auto Scaling for Aurora Replicas. Use a Network Load Balancer with the least outstanding requests routing algorithm and sticky sessions enabled. B. Enable Aurora Auto Scaling for Aurora writers. Use an Application Load Balancer with the round robin routing algorithm and sticky sessions enabled. C. Enable Aurora Auto Scaling for Aurora Replicas. Use an Application Load Balancer with the round robin routing algorithm and sticky sessions enabled. D. Enable Aurora Auto Scaling for Aurora writers. Use a Network Load Balancer with the least outstanding requests routing algorithm and sticky sessions enabled.

B
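Aurora Replica auto scaling (the mechanism named in options A and C) is configured through Application Auto Scaling; a hedged boto3 sketch with a placeholder cluster ID and target value:

import boto3

aas = boto3.client("application-autoscaling")

# Let Aurora add or remove Replicas based on average reader CPU.
aas.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId="cluster:app-aurora-cluster",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=5,
)

aas.put_scaling_policy(
    PolicyName="reader-cpu-target",
    ServiceNamespace="rds",
    ResourceId="cluster:app-aurora-cluster",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
    },
)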

On AWS, a business is developing a new highly available web application. The application needs persistent and dependable communication between AWS application servers and an on-premises backend REST API. The backend connection between AWS and on-premises will be handled over a private virtual interface using an AWS Direct Connect connection. Amazon Route 53 will be utilized to handle the application's private DNS records for resolving the IP address for the backend REST API. Which architecture would be most likely to provide a reliable connection to the backend API? A. Implement at least two backend endpoints for the backend REST API, and use Route 53 health checks to monitor the availability of each backend endpoint and perform DNS-level failover. B. Install a second Direct Connect connection from a different network carrier and attach it to the same virtual private gateway as the first Direct Connect connection. C. Install a second cross connect for the same Direct Connect connection from the same network carrier, and join both connections to the same link aggregation group (LAG) on the same private virtual interface. D. Create an IPSec VPN connection routed over the public internet from the on-premises data center to AWS and attach it to the same virtual private gateway as the Direct Connect connection.

B

You're constructing the network architecture for an application server in an Amazon VPC. All application instances will be accessible through the Internet and from an on-premises network. The on-premises network is linked to the VPC using an AWS Direct Connect connection. How would you design routing to meet the needs outlined above? A. Configure a single routing table with a default route via the Internet gateway. Propagate a default route via BGP on the AWS Direct Connect customer router. Associate the routing table with all VPC subnets. B. Configure a single routing table with a default route via the Internet gateway. Propagate specific routes for the on-premises networks via BGP on the AWS Direct Connect customer router. Associate the routing table with all VPC subnets. C. Configure a single routing table with two default routes: one to the Internet via an Internet gateway, the other to the on-premises network via the VPN gateway. Use this routing table across all subnets in the VPC. D. Configure two routing tables: one that has a default route via the Internet gateway, and another that has a default route via the VPN gateway. Associate both routing tables with each VPC subnet.

B

A user wishes to construct a public subnet in VPC for the purpose of launching an EC2 instance. While starting the instance, the user did not pick the option to provide a public IP address. Which of the following assertions is true in this scenario? A. The instance will always have a public DNS attached to the instance by default B. The user would need to create a default route to IGW in subnet's route table and then attach an elastic IP to the instance to connect from the internet C. The user can directly attach an elastic IP to the instance D. The instance will never launch if the public IP is not assigned

B A Virtual Private Cloud (VPC) is a virtual network dedicated to the user's AWS account. A user can create a subnet within the VPC and launch instances inside that subnet. When launching an instance, the user needs to select an option that attaches a public IP to the instance. If the user has not selected the option to attach a public IP, the instance will only have a private IP when launched, and the user cannot connect to it from the internet. If the user wants an elastic IP to connect to the instance from the internet, he should create an internet gateway and assign an elastic IP to the instance. Reference: http://docs.aws.amazon.com/AmazonVPC/latest/GettingStartedGuide/LaunchInstance.html
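The IGW route plus Elastic IP combination from option B, sketched in boto3; the route table, gateway, and instance IDs are placeholders:

import boto3

ec2 = boto3.client("ec2")

# Make the subnet public by routing 0.0.0.0/0 to the internet gateway ...
ec2.create_route(
    RouteTableId="rtb-0abc1234",
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId="igw-0abc1234",
)

# ... then give the instance a static public address.
allocation = ec2.allocate_address(Domain="vpc")
ec2.associate_address(
    InstanceId="i-0123456789abcdef0",
    AllocationId=allocation["AllocationId"],
)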

For compartmentalization purposes, an organization has separated the components of a single application. At the moment, all components are hosted on a single Amazon EC2 instance. Due to security concerns, the business wants to use two distinct SSL certificates for the separate modules, even though it already uses a VPC. How can the organization accomplish this with a single instance? A. You have to launch two instances each in a separate subnet and allow VPC peering for a single IP. B. Create a VPC instance which will have multiple network interfaces with multiple elastic IP addresses. C. Create a VPC instance which will have both the ACL and the security group attached to it and have separate rules for each IP address. D. Create a VPC instance which will have multiple subnets attached to it and each will have a separate IP address.

B A Virtual Private Cloud (VPC) is a virtual network dedicated to the user's AWS account. It enables the user to launch AWS resources into a virtual network that the user has defined. With VPC the user can specify multiple private IP addresses for his instances. The number of network interfaces and private IP addresses that a user can specify for an instance depends on the instance type. With each network interface the organization can assign an EIP. This scenario helps when the user wants to host multiple websites on a single EC2 instance by using multiple SSL certificates on a single server and associating each certificate with a specific EIP address. It also helps in scenarios for operating network appliances, such as firewalls or load balancers, that have multiple private IP addresses for each network interface. Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/MultipleIP.html
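Attaching a second ENI with its own Elastic IP, as the explanation describes, looks roughly like this in boto3; the subnet, security group, and instance IDs are placeholders:

import boto3

ec2 = boto3.client("ec2")

# A second ENI for the second module and its SSL certificate.
eni_id = ec2.create_network_interface(
    SubnetId="subnet-0abc1234",
    Groups=["sg-0abc1234"],
    Description="module-b interface",
)["NetworkInterface"]["NetworkInterfaceId"]

ec2.attach_network_interface(
    NetworkInterfaceId=eni_id,
    InstanceId="i-0123456789abcdef0",  # the single shared instance
    DeviceIndex=1,
)

# Each ENI gets its own Elastic IP, so each module has a distinct address.
eip = ec2.allocate_address(Domain="vpc")
ec2.associate_address(AllocationId=eip["AllocationId"], NetworkInterfaceId=eni_id)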

MySecureData has five locations worldwide. It aims to extend its data centers such that its web server is hosted on AWS while each branch keeps its own database in the local data center. The firm wishes to connect the branch data centers securely. How can MySecureData use the AWS VPC to implement this scenario? A. Create five VPCs with the public subnet for the app server and setup the VPN gateway for each VPN to connect them individually. B. Use the AWS VPN CloudHub to communicate with multiple VPN connections. C. Use the AWS CloudGateway to communicate with multiple VPN connections. D. It is not possible to connect different data centers from a single VPC.

B A Virtual Private Cloud (VPC) is a virtual network dedicated to the user's AWS account. The user can create subnets as per the requirement within a VPC. If the user wants to connect to the VPC from his own data centre, he can set up a public and VPN-only subnet which uses hardware VPN access to connect with his data centre. If the organization has multiple VPN connections, it can provide secure communication between sites using the AWS VPN CloudHub. The VPN CloudHub operates on a simple hub-and-spoke model that the user can use with or without a VPC. This design is suitable for customers with multiple branch offices and existing internet connections who would like to implement a convenient, potentially low-cost hub-and-spoke model for primary or backup connectivity between remote offices. Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPN_CloudHub.html

Which of the following AWS Data Pipeline components defines your data management's business logic? A. Task Runner B. Pipeline definition C. AWS Direct Connect D. Amazon Simple Storage Service (Amazon S3)

B A pipeline definition specifies the business logic of your data management. Reference: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/what-is-datapipeline.html

You can analyze and process massive volumes of data with Amazon Elastic MapReduce (Amazon EMR). The cluster is managed using the open-source Hadoop platform. You've configured an application to execute Hadoop jobs. The application receives data from DynamoDB and creates a 100-terabyte temporary file. The whole process takes 30 minutes, and the job's output is saved to S3. Which of the following choices is the most cost-effective in this situation? A. Use Spot Instances to run Hadoop jobs and configure them with EBS volumes for persistent data storage. B. Use Spot Instances to run Hadoop jobs and configure them with ephemeral storage for output file storage. C. Use an on demand instance to run Hadoop jobs and configure them with EBS volumes for persistent storage. D. Use an on demand instance to run Hadoop jobs and configure them with ephemeral storage for output file storage.

B AWS EC2 Spot Instances allow the user to quote his own price for the EC2 computing capacity. The user can simply bid on the spare Amazon EC2 instances and run them whenever his bid exceeds the current Spot Price. The Spot Instance pricing model complements the On-Demand and Reserved Instance pricing models, providing potentially the most cost-effective option for obtaining compute capacity, depending on the application. The only challenge with a Spot Instance is data persistence, as the instance can be terminated whenever the spot price exceeds the bid price. In the current scenario a Hadoop job is a temporary job and does not run for a longer period. It fetches data from a persistent DynamoDB. Thus, even if the instance gets terminated there will be no data loss and the job can be re-run. As the output files are large temporary files, it will be useful to store data on ephemeral storage for cost savings. Reference: http://aws.amazon.com/ec2/purchasing-options/spot-instances/
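A transient Spot-based EMR cluster of the kind the explanation describes might be launched like this (boto3 sketch); the names, roles, release label, and instance counts are placeholders:

import boto3

emr = boto3.client("emr")

# Spot core capacity with instance-store (ephemeral) disks keeps the
# 30-minute job cheap; the job's output still lands durably in S3.
emr.run_job_flow(
    Name="dynamodb-batch-job",
    ReleaseLabel="emr-6.10.0",
    LogUri="s3://my-emr-logs/",
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
    Instances={
        "InstanceGroups": [
            {"Name": "master", "InstanceRole": "MASTER",
             "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"Name": "core", "InstanceRole": "CORE", "Market": "SPOT",
             "InstanceType": "m5.xlarge", "InstanceCount": 4},
        ],
        "KeepJobFlowAliveWhenNoSteps": False,  # terminate when the job finishes
    },
)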

A business is establishing a backup and restoration mechanism in AWS for its on-premises system. The company requires High Availability (HA) and Disaster Recovery (DR) but is willing to accept a longer recovery time in order to save money. Which of the following configuration choices best accomplishes the cost-cutting and disaster recovery objectives? A. Set up pre-configured servers and create AMIs. Use an EIP and Route 53 to quickly switch over to AWS from on-premises. B. Store the backup data on S3 and transfer data to S3 regularly using the storage gateway. C. Set up a small instance with Auto Scaling; in case of DR, start diverting all the load to AWS from on-premises. D. Replicate the on-premises DB to EC2 at regular intervals and set up a scenario similar to the pilot light.

B AWS has many solutions for Disaster Recovery (DR) and High Availability (HA). When the organization wants HA and DR but is okay with a longer recovery time, it should select the backup-and-restore option with S3. The data can be sent to S3 using Direct Connect, Storage Gateway, or the internet. The EC2 instance will pick the data up from the S3 bucket when started and set up the environment. This process takes longer but is very cost-effective due to the low pricing of S3. In all the other options, the EC2 instance might be running, or there will be AMI storage costs; thus, they are costlier options. In this scenario the organization should plan appropriate tools to take a backup, plan the retention policy for the data, and set up security for the data. Reference: http://d36cz9buwru1tt.cloudfront.net/AWS_Disaster_Recovery.pdf

A client has a website that displays all of the available bargains on the market. Generally, the site runs on five large EC2 instances. However, for the week before Thanksgiving, the load grows to a level that requires about 20 large instances. During that period the load also varies throughout the day according to office hours. Which of the following options is both cost-efficient and beneficial to the website's performance? A. Setup 10 instances to run during the pre-vacation period and scale up during office hours by launching 10 more instances using an Auto Scaling schedule. B. Keep only 10 instances running and manually launch 10 instances every day during office hours. C. During the pre-vacation period setup 20 instances to run continuously. D. During the pre-vacation period setup a scenario where the organization has 15 instances running and 5 instances that scale up and down using Auto Scaling based on a network I/O policy.

B AWS provides an on-demand, scalable infrastructure. AWS EC2 allows the user to launch On-Demand instances, and the organization should create an AMI of the running instance. When the organization is experiencing varying loads that are higher than routine traffic, and the timing of the load is not precisely known, it is recommended that the organization launch a few instances beforehand and then set up Auto Scaling with policies that scale up and down based on EC2 metrics, such as network I/O or CPU utilization. If the organization keeps all 10 additional instances as part of the Auto Scaling policy, a sudden spike in load may leave it waiting for instances to launch, which does not give optimal performance. This is why it is recommended that the organization keep an additional 5 instances running and the next 5 instances scheduled through the Auto Scaling policy for cost effectiveness.
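A rough boto3 sketch of the split the explanation describes (group name, dates, sizes, and the target value are all illustrative): a scheduled action raises the baseline for the pre-vacation period, and a target-tracking policy on a network metric handles the remainder.

    import boto3

    asg = boto3.client('autoscaling')

    # Scheduled action: raise the baseline to 15 instances before the vacation rush.
    asg.put_scheduled_update_group_action(
        AutoScalingGroupName='deals-site-asg',          # illustrative
        ScheduledActionName='pre-vacation-baseline',
        StartTime='2023-11-15T00:00:00Z',               # illustrative date
        MinSize=15, MaxSize=20, DesiredCapacity=15,
    )

    # Target-tracking policy on average network in: scales the last 5 up and down.
    asg.put_scaling_policy(
        AutoScalingGroupName='deals-site-asg',
        PolicyName='network-io-tracking',
        PolicyType='TargetTrackingScaling',
        TargetTrackingConfiguration={
            'PredefinedMetricSpecification': {'PredefinedMetricType': 'ASGAverageNetworkIn'},
            'TargetValue': 50000000.0,  # bytes per period; illustrative
        },
    )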

Your business intends to use Amazon Web Services (AWS) to host a major contribution website. You expect a significant and unpredictable volume of traffic, which will result in many database writes. You need to ensure that you do not lose any writes to the AWS-hosted database. Which service are you going to use? A. Amazon RDS with provisioned IOPS up to the anticipated peak write throughput. B. Amazon Simple Queue Service (SQS) for capturing the writes and draining the queue to write to the database. C. Amazon ElastiCache to store the writes until the writes are committed to the database. D. Amazon DynamoDB with provisioned write throughput up to the anticipated peak write throughput.

B Amazon Simple Queue Service (Amazon SQS) offers a reliable, highly scalable hosted queue for storing messages as they travel between computers. By using Amazon SQS, developers can simply move data between distributed application components performing different tasks, without losing messages or requiring each component to be always available. Amazon SQS makes it easy to build a distributed, decoupled application, working in close conjunction with Amazon Elastic Compute Cloud (Amazon EC2) and the other AWS infrastructure web services.

What can I do with Amazon SQS? Amazon SQS is a web service that gives you access to a message queue that can be used to store messages while waiting for a computer to process them. This allows you to quickly build message queuing applications that can be run on any computer on the internet. Since Amazon SQS is highly scalable and you only pay for what you use, you can start small and grow your application as you wish, with no compromise on performance or reliability. This lets you focus on building sophisticated message-based applications, without worrying about how the messages are stored and managed.

You can use Amazon SQS with software applications in various ways. For example, you can:
- Integrate Amazon SQS with other AWS infrastructure web services to make applications more reliable and flexible.
- Use Amazon SQS to create a queue of work where each message is a task that needs to be completed by a process. One or many computers can read tasks from the queue and perform them.
- Build a microservices architecture, using queues to connect your microservices.
- Keep notifications of significant events in a business process in an Amazon SQS queue. Each event can have a corresponding message in a queue, and applications that need to be aware of the event can read and process the messages.
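As a minimal boto3 sketch of the buffering pattern in option B (the queue name and message body are illustrative, and the database-commit helper is hypothetical):

    import boto3

    sqs = boto3.client('sqs')
    queue_url = sqs.create_queue(QueueName='db-write-buffer')['QueueUrl']

    # Producer: capture a write as a message instead of writing to the DB directly.
    sqs.send_message(QueueUrl=queue_url, MessageBody='{"donor": "alice", "amount": 25}')

    # Consumer: drain the queue and commit each write to the database.
    resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20)
    for msg in resp.get('Messages', []):
        # commit_to_database(msg['Body'])  # hypothetical helper that writes to the DB
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg['ReceiptHandle'])

Deleting a message only after the database commit succeeds is what makes the pattern lossless: an unacknowledged message reappears on the queue after its visibility timeout.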

A user has set up PIOPS on an EBS volume but is not receiving the highest possible throughput. Which of the following cannot be a factor impacting the EBS volume's I/O performance? A. The instance's dedicated EBS bandwidth exceeding the PIOPS B. EBS volume size C. EC2 bandwidth D. Instance type is not EBS-optimized

B If the user is not experiencing the expected IOPS or throughput that was provisioned, ensure that EC2 bandwidth is not the limiting factor, that the instance is EBS-optimized (or has 10 Gigabit network connectivity), and that the instance type's dedicated EBS bandwidth exceeds the IOPS that was provisioned. Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-io-characteristics.html
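One quick check on the instance side is whether EBS optimization is enabled; a minimal boto3 sketch (the instance ID is illustrative):

    import boto3

    ec2 = boto3.client('ec2')

    # Check whether the instance was launched with EBS optimization enabled.
    attr = ec2.describe_instance_attribute(
        InstanceId='i-0123456789abcdef0',  # illustrative
        Attribute='ebsOptimized',
    )
    print('EBS optimized:', attr['EbsOptimized']['Value'])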

True or false: You may reuse the same logical ID multiple times in a CloudFormation template to refer to resources in different areas of the template. A. True, a logical ID can be used several times to reference the resources in other parts of the template. B. False, a logical ID must be unique within the template. C. False, you can mention a resource only once and you cannot reference it in other parts of a template. D. False, you cannot reference other parts of the template.

B In AWS CloudFormation, the logical ID must be alphanumeric (A-Za-z0-9) and unique within the template. You use the logical name to reference the resource in other parts of the template. Reference: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/concept-resources.html
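A minimal sketch in Python of a template where the unique logical ID WebServer is referenced elsewhere via Ref; the resource properties and AMI ID are illustrative.

    import json
    import boto3

    # 'WebServer' is the logical ID; it must be unique within the template
    # and is what other resources point at with Ref.
    template = {
        'Resources': {
            'WebServer': {
                'Type': 'AWS::EC2::Instance',
                'Properties': {'ImageId': 'ami-12345678', 'InstanceType': 't3.micro'},
            },
            'WebServerEIP': {
                'Type': 'AWS::EC2::EIP',
                'Properties': {'InstanceId': {'Ref': 'WebServer'}},  # reference by logical ID
            },
        }
    }

    # Validation would reject a template that declared 'WebServer' twice.
    boto3.client('cloudformation').validate_template(TemplateBody=json.dumps(template))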

You create an Amazon Elastic File System (EFS) file system in your Virtual Private Cloud (VPC) and create mount targets for the file system. Determine the initial permissions that you may grant to your file system's group root. A. write-execute-modify B. read-execute C. read-write-modify D. read-write

B In Amazon EFS, when a file system and mount targets are created in your VPC, you can mount the remote file system locally on your Amazon Elastic Compute Cloud (EC2) instance. You can grant permissions to the users of your file system. The initial permissions allowed for Amazon EFS are: read-write-execute permissions for the owner root, read-execute permissions for the group root, and read-execute permissions for others. Reference: http://docs.aws.amazon.com/efs/latest/ug/accessing-fs-nfs-permissions.html

A business is considering migrating multiple software testing lab environments. Each lab's test runs are managed using a variety of bespoke tools. The labs execute software tests on immutable infrastructure, and the findings are saved in a highly available SQL database cluster. While recreating the proprietary tools in their entirety is beyond the scope of the migration project, the organization wants to optimize workloads during the move. Which application migration technique satisfies this requirement? A. Re-host B. Re-platform C. Re-factor/re-architect D. Retire

B Reference: https://aws.amazon.com/blogs/enterprise-strategy/6-strategies-for-migrating-applications-to-the-cloud/

A corporation has a policy requiring that all Amazon EC2 instances running databases be on the same subnets in the same shared VPC. Administrators are required to adhere to security compliance rules and are not permitted to log in directly to the shared account. In AWS Organizations, all corporate accounts are members of the same organization. As the firm expands, the number of accounts will rise swiftly. A solutions architect creates a resource share in the shared account using AWS Resource Access Manager. Which arrangement is the MOST operationally efficient in meeting these requirements? A. Add the VPC to the resource share. Add the account IDs as principals B. Add all subnets within the VPC to the resource share. Add the account IDs as principals C. Add all subnets within the VPC to the resource share. Add the organization as a principal D. Add the VPC to the resource share. Add the organization as a principal

B Reference: https://aws.amazon.com/blogs/networking-and-content-delivery/vpc-sharing-a-new-approach-to-multiple-accounts-and-vpc-management/

A financial institution with many departments wants to migrate from on-premises to the AWS Cloud. The organization must continue to use an on-premises Active Directory (AD) solution for centralized access management. Each department should be permitted to establish AWS accounts with preconfigured networking and access to a limited number of authorized services, but departments are not granted account administrator rights. What should a solutions architect do to meet these security requirements? A. Configure AWS Identity and Access Management (IAM) with a SAML identity provider (IdP) linked to the on-premises Active Directory, and create a role to grant access. Configure AWS Organizations with SCPs and create new member accounts. Use AWS CloudFormation templates to configure the member account networking. B. Deploy an AWS Control Tower landing zone. Create an AD Connector linked to the on-premises Active Directory. Change the identity source in AWS Single Sign-On to use Active Directory. Allow department administrators to use Account Factory to create new member accounts and networking. Grant the departments AWS power user permissions on the created accounts. C. Deploy an Amazon Cloud Directory. Create a two-way trust relationship with the on-premises Active Directory, and create a role to grant access. Set up an AWS Service Catalog to use AWS CloudFormation templates to create the new member accounts and networking. Use IAM roles to allow access to approved AWS services. D. Configure AWS Directory Service for Microsoft Active Directory with AWS Single Sign-On. Join the service to the on-premises Active Directory. Use AWS CloudFormation to create new member accounts and networking. Use IAM roles to allow access to approved AWS services.

B Reference: https://d1.awsstatic.com/whitepapers/aws-overview.pdf (46)

Each day, a corporation obtains a continuous supply of ten million data records from one hundred thousand sources. These records are stored in a MySQL database hosted by Amazon RDS. A query must return the 30-day daily average of a data source. There are twice as many reads as writes. The gathered data is queried for a single source ID at a time. How can the Solutions Architect improve the solution's reliability and cost effectiveness? A. Use Amazon Aurora with MySQL in a Multi-AZ mode. Use four additional read replicas. B. Use Amazon DynamoDB with the source ID as the partition key and the timestamp as the sort key. Use a Time to Live (TTL) to delete data after 30 days. C. Use Amazon DynamoDB with the source ID as the partition key. Use a different table each day. D. Ingest data into Amazon Kinesis using a retention period of 30 days. Use AWS Lambda to write data records to Amazon ElastiCache for read access.

B Reference: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html

A Solutions Architect must use a blue/green deployment process to update an application environment in AWS Elastic Beanstalk. The Solutions Architect creates an environment identical to the one currently serving the application and deploys the application to the new environment. What is the next step in completing the update? A. Redirect to the new environment using Amazon Route 53 B. Select the Swap Environment URLs option C. Replace the Auto Scaling launch configuration D. Update the DNS records to point to the green environment

B Reference: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.CNAMESwap.html

A company runs an application on Amazon EC2 instances that several developers need to access in order to make changes. The firm intends to apply certain security best practices for instance access. Which of the following proposals will not contribute to improving security in this manner? A. Disable password-based login for all users. All users should use their own keys to connect to the instance securely. B. Create an IAM policy allowing only IAM users to connect to the EC2 instances with their own SSH key. C. Create a procedure to revoke the access rights of an individual user when they no longer need to connect to the EC2 instance for application configuration. D. Apply the latest OS patches and always keep the OS updated.

B Since AWS is a public cloud, any application hosted on EC2 is prone to hacker attacks. It therefore becomes extremely important for a user to set up a proper security mechanism on the EC2 instances. A few of the security measures are listed below: always keep the OS updated with the latest patches; create separate users within the OS if they need to connect to the EC2 instances, create their keys, and disable their passwords; create a procedure by which the admin can revoke a user's access when the business work on the EC2 instance is completed; lock down unnecessary ports; audit any proprietary applications that the user may be running on the EC2 instance; and provide temporary escalated privileges, such as sudo, for users who need to perform occasional privileged tasks. IAM is useful when users are required to work with AWS resources and actions, such as launching an instance. It is not useful in this case because it does not manage who can connect via RDP or SSH to an instance. Reference: http://aws.amazon.com/articles/1233/

A business is attempting to create a virtual private cloud (VPC) with auto scaling. Which of the following configuration procedures is not necessary to configure AWS VPC with Auto Scaling? A. Configure the Auto Scaling group with the VPC ID in which instances will be launched. B. Configure the Auto Scaling Launch configuration with multiple subnets of the VPC to enable the Multi AZ feature. C. Configure the Auto Scaling Launch configuration which does not allow assigning a public IP to instances. D. Configure the Auto Scaling Launch configuration with the VPC security group.

B The Amazon Virtual Private Cloud (Amazon VPC) allows the user to define a virtual networking environment in a private, isolated section of the Amazon Web Services (AWS) cloud, over which the user has complete control. Within this virtual private cloud, the user can launch AWS resources such as an Auto Scaling group. Before creating the Auto Scaling group, it is recommended that the user create the Launch configuration. Since it is a VPC, it is recommended to select the parameter that does not assign a public IP to the instances. The user should also set the VPC security group on the Launch configuration and select the subnets where the instances will be launched in the Auto Scaling group. HA is provided as the subnets may be part of separate AZs. Reference: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/autoscalingsubnets.html
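As a rough boto3 sketch of the setup the explanation describes (the AMI ID, security group ID, and subnet IDs are placeholders): no public IP assignment, a VPC security group on the launch configuration, and subnets in separate AZs on the group.

    import boto3

    asg = boto3.client('autoscaling')

    # Launch configuration: VPC security group, no public IP assignment.
    asg.create_launch_configuration(
        LaunchConfigurationName='vpc-lc',
        ImageId='ami-12345678',          # illustrative
        InstanceType='t3.micro',
        SecurityGroups=['sg-0abc1234'],  # VPC security group ID, illustrative
        AssociatePublicIpAddress=False,
    )

    # Auto Scaling group: subnets in different AZs provide the HA.
    asg.create_auto_scaling_group(
        AutoScalingGroupName='vpc-asg',
        LaunchConfigurationName='vpc-lc',
        MinSize=2, MaxSize=4,
        VPCZoneIdentifier='subnet-aaaa1111,subnet-bbbb2222',  # illustrative
    )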

Is it possible to utilize Provisioned IOPS on RDS instances deployed in a VPC? A. Yes, they can be used only with Oracle based instances. B. Yes, they can be used for all RDS instances. C. No D. Yes, they can be used only with MySQL based instances.

B The basic building block of Amazon RDS is the DB instance. DB instance storage comes in three types: Magnetic, General Purpose (SSD), and Provisioned IOPS (SSD). When you buy a server, you get CPU, memory, storage, and IOPS, all bundled together. With Amazon RDS, these are split apart so that you can scale them independently. So, for example, if you need more CPU, less IOPS, or more storage, you can easily allocate them. Reference: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/RDSFAQ.PIOPS.html

Which of the following should be done prior to utilizing AWS Direct Connect to connect to Amazon Virtual Private Cloud (Amazon VPC)? A. Provide a public Autonomous System Number (ASN) to identify your network on the Internet. B. Create a virtual private gateway and attach it to your Virtual Private Cloud (VPC). C. Allocate a private IP address to your network in the 122.x.x.x range. D. Provide a public IP address for each Border Gateway Protocol (BGP) session.

B To connect to Amazon Virtual Private Cloud (Amazon VPC) by using AWS Direct Connect, you must first do the following: provide a private Autonomous System Number (ASN) to identify your network on the Internet (Amazon then allocates a private IP address in the 169.x.x.x range to you), and create a virtual private gateway and attach it to your VPC. Reference: http://docs.aws.amazon.com/directconnect/latest/UserGuide/Welcome.html

Which of the following commands, when used with the AWS CLI for AWS CloudFormation, produces a description of the given resource in the specified stack? A. describe-stack-events B. describe-stack-resource C. create-stack-resource D. describe-stack-returns

B aws cloudformation describe-stack-resource: returns a description of the specified resource in the specified stack. For deleted stacks, describe-stack-resource returns resource information for up to 90 days after the stack has been deleted. Reference: http://docs.aws.amazon.com/cli/latest/reference/cloudformation/describe-stack-resource.html
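The same operation is available from boto3; a minimal sketch with an illustrative stack name and logical resource ID:

    import boto3

    cfn = boto3.client('cloudformation')

    # Returns a description of one resource in one stack, keyed by its logical ID.
    resp = cfn.describe_stack_resource(
        StackName='my-stack',           # illustrative
        LogicalResourceId='WebServer',  # illustrative
    )
    detail = resp['StackResourceDetail']
    print(detail['ResourceType'], detail['ResourceStatus'])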

A big organization is transferring its entire IT infrastructure to AWS. Each business unit within the organization has its own AWS account that provides development and test environments. New accounts will be required shortly to accommodate production demands. Finance needs a centralized payment system but must also keep insight into each group's expenditure in order to distribute costs. The security team needs a centralized solution for monitoring and controlling IAM use throughout the whole organization. Which of the following combinations of alternatives meets the company's requirements with the LEAST amount of effort? (Select two.) A. Use a collection of parameterized AWS CloudFormation templates defining common IAM permissions that are launched into each account. Require all new and existing accounts to launch the appropriate stacks to enforce the least privilege model. B. Use AWS Organizations to create a new organization from a chosen payer account and define an organizational unit hierarchy. Invite the existing accounts to join the organization and create new accounts using Organizations. C. Require each business unit to use its own AWS accounts. Tag each AWS account appropriately and enable Cost Explorer to administer chargebacks. D. Enable all features of AWS Organizations and establish appropriate service control policies that filter IAM permissions for sub-accounts. E. Consolidate all of the company's AWS accounts into a single AWS account. Use tags for billing purposes and IAM's Access Advisor feature to enforce the least privilege model.

BD Reference: https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/ce-what-is.html

A business owns 50 AWS accounts that are members of an AWS Organization. Each account comprises numerous virtual private clouds (VPCs). The firm wishes to deploy AWS Transit Gateway to link the VPCs in each member account. The organization wants to automate the process of creating a new VPC and a transit gateway attachment whenever a new member account is created. Which combination of actions will satisfy these criteria? (Select three.) A. From the management account, share the transit gateway with member accounts by using AWS Resource Access Manager. B. From the management account, share the transit gateway with member accounts by using an AWS Organizations SCP. C. Launch an AWS CloudFormation stack set from the management account that automatically creates a new VPC and a VPC transit gateway attachment in a member account. Associate the attachment with the transit gateway in the management account by using the transit gateway ID. D. Launch an AWS CloudFormation stack set from the management account that automatically creates a new VPC and a peering transit gateway attachment in a member account. Share the attachment with the transit gateway in the management account by using a transit gateway service-linked role. E. From the management account, share the transit gateway with member accounts by using AWS Service Catalog.

BDE

A business makes use of many AWS accounts. DNS records for Amazon Route 53 are maintained in a private hosted zone in Account A. Account B hosts the company's applications and databases. A solutions architect will create a new VPC and deploy a two-tier application. To ease the setup, a CNAME record set, db.example.com, was created in the Route 53 private hosted zone for the Amazon RDS endpoint. The application failed to start during deployment. Troubleshooting showed that db.example.com, the database server, is not reachable from the Amazon EC2 instance. The solutions architect validated that the Route 53 record set had been built appropriately. Which measures should the solutions architect take in combination to remedy this issue? (Select two.) A. Deploy the database on a separate EC2 instance in the new VPC. Create a record set for the instance's private IP in the private hosted zone. B. Use SSH to connect to the application tier EC2 instance. Add an RDS endpoint IP address to the /etc/resolv.conf file. C. Create an authorization to associate the private hosted zone in Account A with the new VPC in Account B. D. Create a private hosted zone for the example.com domain in Account B. Configure Route 53 replication between AWS accounts. E. Associate a new VPC in Account B with a hosted zone in Account A. Delete the association authorization in Account A.

BE

You've installed a web application under the domain name example.com that targets a worldwide audience across many AWS Regions. You choose to utilize Route 53's latency-based routing to deliver web requests to users from the region with the lowest latency. To ensure business continuity in the case of server interruption, you configure weighted record sets associated with two web servers in distinct Availability Zones per region. When doing a disaster recovery test, you observe that when all web servers in one of the regions are disabled, Route 53 does not instantly redirect all users to the other region. What may be going on? (Select two.) A. Latency resource record sets cannot be used in combination with weighted resource record sets. B. You did not set up an HTTP health check for one or more of the weighted resource record sets associated with the disabled web servers. C. The value of the weight associated with the latency alias resource record set in the region with the disabled servers is higher than the weight for the other region. D. One of the two working web servers in the other region did not pass its HTTP health check. E. You did not set "Evaluate Target Health" to "Yes" on the latency alias resource record set associated with example.com in the region where you disabled the servers.

BE How Health Checks Work in Complex Amazon Route 53 Configurations: checking the health of resources in complex configurations works much the same way as in simple configurations. However, in complex configurations, you use a combination of alias resource record sets (including weighted alias, latency alias, and failover alias) and non-alias resource record sets to build a decision tree that gives you greater control over how Amazon Route 53 responds to requests. For more information, see How Health Checks Work in Simple Amazon Route 53 Configurations.

For example, you might use latency alias resource record sets to select a region close to a user and use weighted resource record sets for two or more resources within each region to protect against the failure of a single endpoint or an Availability Zone. The following diagram shows this configuration. Here's how Amazon EC2 and Amazon Route 53 are configured:
- You have Amazon EC2 instances in two regions, us-east-1 and ap-southeast-2. You want Amazon Route 53 to respond to queries by using the resource record sets in the region that provides the lowest latency for your customers, so you create a latency alias resource record set for each region. (You create the latency alias resource record sets after you create resource record sets for the individual Amazon EC2 instances.)
- Within each region, you have two Amazon EC2 instances. You create a weighted resource record set for each instance. The name and the type are the same for both of the weighted resource record sets in each region. When you have multiple resources in a region, you can create weighted or failover resource record sets for your resources. You can also create even more complex configurations by creating weighted alias or failover alias resource record sets that, in turn, refer to multiple resources.
- Each weighted resource record set has an associated health check. The IP address for each health check matches the IP address for the corresponding resource record set. This isn't required, but it's the most common configuration.
- For both latency alias resource record sets, you set the value of Evaluate Target Health to Yes. You use the Evaluate Target Health setting for each latency alias resource record set to make Amazon Route 53 evaluate the health of the alias targets (the weighted resource record sets) and respond accordingly. (Diagram: https://i.imgur.com/8KTvvfX.png)

The preceding diagram illustrates the following sequence of events:
1. Amazon Route 53 receives a query for example.com. Based on the latency for the user making the request, Amazon Route 53 selects the latency alias resource record set for the us-east-1 region.
2. Amazon Route 53 selects a weighted resource record set based on weight. Evaluate Target Health is Yes for the latency alias resource record set, so Amazon Route 53 checks the health of the selected weighted resource record set.
3. The health check failed, so Amazon Route 53 chooses another weighted resource record set based on weight and checks its health. That resource record set also is unhealthy.
4. Amazon Route 53 backs out of that branch of the tree, looks for the latency alias resource record set with the next-best latency, and chooses the resource record set for ap-southeast-2.
5. Amazon Route 53 again selects a resource record set based on weight, and then checks the health of the selected resource record set. The health check passed, so Amazon Route 53 returns the applicable value in response to the query.

What Happens When You Associate a Health Check with an Alias Resource Record Set? You can associate a health check with an alias resource record set instead of or in addition to setting the value of Evaluate Target Health to Yes. However, it's generally more useful if Amazon Route 53 responds to queries based on the health of the underlying resources (the HTTP servers, database servers, and other resources that your alias resource record sets refer to). For example, suppose the following configuration: you assign a health check to a latency alias resource record set for which the alias target is a group of weighted resource record sets, and you set the value of Evaluate Target Health to Yes for the latency alias resource record set. In this configuration, both of the following must be true before Amazon Route 53 will return the applicable value for a weighted resource record set: the health check associated with the latency alias resource record set must pass, and at least one weighted resource record set must be considered healthy, either because it's associated with a health check that passes or because it's not associated with a health check. In the latter case, Amazon Route 53 always considers the weighted resource record set healthy. (Diagram: https://i.imgur.com/M8uGY56.png) If the health check for the latency alias resource record set fails, Amazon Route 53 stops responding to queries using any of the weighted resource record sets in the alias target, even if they're all healthy. Amazon Route 53 doesn't know the status of the weighted resource record sets because it never looks past the failed health check on the alias resource record set.

What Happens When You Omit Health Checks? In a complex configuration, it's important to associate health checks with all of the non-alias resource record sets. Let's return to the preceding example, but assume that a health check is missing on one of the weighted resource record sets in the us-east-1 region (diagram: https://i.imgur.com/CjJjvMG.png). Here's what happens when you omit a health check on a non-alias resource record set in this configuration: Amazon Route 53 receives a query for example.com. Based on the latency for the user making the request, Amazon Route 53 selects the latency alias resource record set for the us-east-1 region. Amazon Route 53 looks up the alias target for the latency alias resource record set, and checks the status of the corresponding health checks. The health check for one weighted resource record set failed, so that resource record set is omitted from consideration. The other weighted resource record set in the alias target for the us-east-1 region has no health check. The corresponding resource might or might not be healthy, but without a health check, Amazon Route 53 has no way to know. Amazon Route 53 assumes that the resource is healthy and returns the applicable value in response to the query.

What Happens When You Set Evaluate Target Health to No? In general, you also want to set Evaluate Target Health to Yes for all of the alias resource record sets. In the following example, all of the weighted resource record sets have associated health checks, but Evaluate Target Health is set to No for the latency alias resource record set for the us-east-1 region (diagram: https://i.imgur.com/UmDd2kM.png). Here's what happens when you set Evaluate Target Health to No for an alias resource record set in this configuration: Amazon Route 53 receives a query for example.com. Based on the latency for the user making the request, Amazon Route 53 selects the latency alias resource record set for the us-east-1 region. Amazon Route 53 determines what the alias target is for the latency alias resource record set, and checks the corresponding health checks. They're both failing. Because the value of Evaluate Target Health is No for the latency alias resource record set for the us-east-1 region, Amazon Route 53 must choose one resource record set in this branch instead of backing out of the branch and looking for a healthy resource record set in the ap-southeast-2 region.
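As a sketch of the fix implied by answer E, assuming boto3 and illustrative hosted zone IDs and names, this creates a latency alias record with Evaluate Target Health enabled:

    import boto3

    r53 = boto3.client('route53')

    # Latency alias record for us-east-1 with Evaluate Target Health turned on.
    r53.change_resource_record_sets(
        HostedZoneId='Z1EXAMPLE',  # zone for example.com, illustrative
        ChangeBatch={
            'Changes': [{
                'Action': 'UPSERT',
                'ResourceRecordSet': {
                    'Name': 'example.com.',
                    'Type': 'A',
                    'SetIdentifier': 'us-east-1-latency',
                    'Region': 'us-east-1',
                    'AliasTarget': {
                        'HostedZoneId': 'Z2EXAMPLE',                  # zone of the alias target, illustrative
                        'DNSName': 'www-us-east-1.example.com.',      # illustrative
                        # Without this, Route 53 never backs out of an unhealthy branch.
                        'EvaluateTargetHealth': True,
                    },
                },
            }]
        },
    )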

A Solutions Architect is tasked with designing a system that will gather and store data from 2,000 internet-connected sensors. Each sensor generates one kilobyte of data every second. Within a few seconds of being submitted to the system, the data must be available for processing, and it must be retained indefinitely for analysis. Which option is the MOST cost-effective in terms of data collection and storage? A. Put each record in Amazon Kinesis Data Streams. Use an AWS Lambda function to write each record to an object in Amazon S3 with a prefix that organizes the records by hour and hashes the record's key. Analyze recent data from Kinesis Data Streams and historical data from Amazon S3. B. Put each record in Amazon Kinesis Data Streams. Set up Amazon Kinesis Data Firehose to read records from the stream and group them into objects in Amazon S3. Analyze recent data from Kinesis Data Streams and historical data from Amazon S3. C. Put each record into an Amazon DynamoDB table. Analyze the recent data by querying the table. Use an AWS Lambda function connected to a DynamoDB stream to group records together, write them into objects in Amazon S3, and then delete the record from the DynamoDB table. Analyze recent data from the DynamoDB table and historical data from Amazon S3. D. Put each record into an object in Amazon S3 with a prefix that organizes the records by hour and hashes the record's key. Use S3 lifecycle management to transition objects to S3 Infrequent Access storage to reduce storage costs. Analyze recent and historical data by accessing the data in Amazon S3.

C

A big real estate firm is considering the cost-effective addition of a location-based alert to its existing mobile application. Currently, the application's backend architecture is hosted on AWS. Users who sign up for this service will get smartphone notifications about real-estate offers in their vicinity. To be relevant, alerts must be sent within a few minutes. The present mobile application is used by 5 million people in the United States. Which of the following architectures would you recommend to the client? A. The mobile application will submit its location to a web service endpoint utilizing Elastic Load Balancing and EC2 instances; DynamoDB will be used to store and retrieve relevant offers; EC2 instances will communicate with mobile carriers/device providers to push alerts back to the mobile application. B. Use AWS Direct Connect or VPN to establish connectivity with mobile carriers. EC2 instances will receive the mobile application's location through the carrier connection; RDS will be used to store the relevant offers. EC2 instances will communicate with mobile carriers to push alerts back to the mobile application. C. The mobile application will send the device location using SQS. EC2 instances will retrieve the relevant offers from DynamoDB. AWS Mobile Push will be used to send offers to the mobile application. D. The mobile application will send the device location using AWS Mobile Push. EC2 instances will retrieve the relevant offers from DynamoDB. EC2 instances will communicate with mobile carriers/device providers to push alerts back to the mobile application.

C

A business has chosen to acquire Amazon EC2 Reserved Instances. A solutions architect is charged with designing a solution that restricts Reserved Instance purchases to the master account in AWS Organizations. Purchases of Reserved Instances should be prohibited for current and future member accounts. Which solution will satisfy these criteria? A. Create an SCP with the Deny effect on the ec2:PurchaseReservedInstancesOffering action. Attach the SCP to the root of the organization. B. Create a new organizational unit (OU). Move all current member accounts to the new OU. Create an SCP with the Deny effect on the ec2:PurchaseReservedInstancesOffering action. Attach the SCP to the new OU. C. Create an AWS Config rule event that triggers automation that will terminate any Reserved Instances launched by member accounts. D. Create two new organizational units (OUs): OU1 and OU2. Move all member accounts to OU2 and the master account to OU1. Create an SCP with the Allow effect on the ec2:PurchaseReservedInstancesOffering action. Attach the SCP to OU1.

C

A business has developed a web application for securely uploading images and videos to an Amazon S3 bucket. Only authorized users are permitted to upload content. The application creates a presigned URL that may be used to upload files through a web browser interface. Most customers experience sluggish upload speeds for files bigger than 100 MB. What can a Solutions Architect do to optimize these uploads while guaranteeing that only authorized users may add content? A. Set up an Amazon API Gateway with an edge-optimized API endpoint that has a resource as an S3 service proxy. Configure the PUT method for this resource to expose the S3 PutObject operation. Secure the API Gateway using a COGNITO_USER_POOLS authorizer. Have the browser interface use API Gateway instead of the presigned URL to upload objects. B. Set up an Amazon API Gateway with a regional API endpoint that has a resource as an S3 service proxy. Configure the PUT method for this resource to expose the S3 PutObject operation. Secure the API Gateway using an AWS Lambda authorizer. Have the browser interface use API Gateway instead of the presigned URL to upload API objects. C. Enable an S3 Transfer Acceleration endpoint on the S3 bucket. Use the endpoint when generating the presigned URL. Have the browser interface upload the objects to this URL using the S3 multipart upload API. D. Configure an Amazon CloudFront distribution for the destination S3 bucket. Enable PUT and POST methods for the CloudFront cache behavior. Update the CloudFront origin to use an origin access identity (OAI). Give the OAI user s3:PutObject permissions in the bucket policy. Have the browser interface upload objects using the CloudFront distribution.

C

A business maintains a media catalog that includes metadata for each item in the catalog. An AWS Lambda-based application extracts several types of metadata from the media objects. The extracted metadata is saved in an Amazon ElastiCache for Redis cluster using a set of rules. Extraction is carried out in batches and takes around 40 minutes to finish. The update process is manually initiated whenever the metadata extraction rules are modified. The corporation wishes to speed up the extraction of metadata from its media catalog. To do this, a solutions architect split the single metadata-extraction Lambda function into a Lambda function for each category of metadata. Which additional measures should the solutions architect take to meet the requirements? A. Create an AWS Step Functions workflow to run the Lambda functions in parallel. Create another Step Functions workflow that retrieves a list of media items and executes a metadata extraction workflow for each one. B. Create an AWS Batch compute environment for each Lambda function. Configure an AWS Batch job queue for the compute environment. Create a Lambda function to retrieve a list of media items and write each item to the job queue. C. Create an AWS Step Functions workflow to run the Lambda functions in parallel. Create a Lambda function to retrieve a list of media items and write each item to an Amazon SQS queue. Configure the SQS queue as an input to the Step Functions workflow. D. Create a Lambda function to retrieve a list of media items and write each item to an Amazon SQS queue. Subscribe the metadata extraction Lambda functions to the SQS queue with a large batch size.

C

A business uses Amazon ECS to manage its containerized batch jobs. The jobs are scheduled by uploading a container image, a task definition, and the necessary data to an Amazon S3 bucket. Container images may be customized for each job. Because it is critical to run jobs as quickly as possible, uploading job artifacts to the S3 bucket triggers the job to run immediately. At times, no jobs may be running at all; however, jobs of any kind may be submitted to the IT Operations team without prior notification. Job definitions include the CPU and memory resources required for the job. Which solution enables batch jobs to run as quickly as possible once they are scheduled? A. Schedule the jobs on an Amazon ECS cluster using the Amazon EC2 launch type. Use Service Auto Scaling to increase or decrease the number of running tasks to suit the number of running jobs. B. Schedule the jobs directly on EC2 instances. Use Reserved Instances for the baseline minimum load, and use On-Demand Instances in an Auto Scaling group to scale up the platform based on demand. C. Schedule the jobs on an Amazon ECS cluster using the Fargate launch type. Use Service Auto Scaling to increase or decrease the number of running tasks to suit the number of running jobs. D. Schedule the jobs on an Amazon ECS cluster using the Fargate launch type. Use Spot Instances in an Auto Scaling group to scale the platform based on demand. Use Service Auto Scaling to increase or decrease the number of running tasks to suit the number of running jobs.

C

A business with many accounts is presently configured in a manner that violates the following security governance policies: (1) prevent any Amazon EC2 instance from being accessed on port 22; (2) ensure that resources have billing and application tags; (3) encrypt all Amazon EBS volumes. A solutions architect wants to implement preventative and detective controls, as well as notifications about a particular resource, when policy deviations occur. Which solution should the solutions architect implement? A. Create an AWS CodeCommit repository containing policy-compliant AWS CloudFormation templates. Create an AWS Service Catalog portfolio. Import the CloudFormation templates by attaching the CodeCommit repository to the portfolio. Restrict users across all accounts to items from the AWS Service Catalog portfolio. Use AWS Config managed rules to detect deviations from the policies. Configure an Amazon CloudWatch Events rule for deviations, and associate a CloudWatch alarm to send notifications when the TriggeredRules metric is greater than zero. B. Use AWS Service Catalog to build a portfolio with products that are in compliance with the governance policies in a central account. Restrict users across all accounts to AWS Service Catalog products. Share a compliant portfolio to other accounts. Use AWS Config managed rules to detect deviations from the policies. Configure an Amazon CloudWatch Events rule to send a notification when a deviation occurs. C. Implement policy-compliant AWS CloudFormation templates for each account, and ensure that all provisioning is completed by CloudFormation. Configure Amazon Inspector to perform regular checks against resources. Perform policy validation and write the assessment output to Amazon CloudWatch Logs. Create a CloudWatch Logs metric filter to increment a metric when a deviation occurs. Configure a CloudWatch alarm to send notifications when the configured metric is greater than zero. D. Restrict users and enforce least privilege access using AWS IAM. Consolidate all AWS CloudTrail logs into a single account. Send the CloudTrail logs to Amazon Elasticsearch Service (Amazon ES). Implement monitoring, alerting, and reporting using the Kibana dashboard in Amazon ES and with Amazon SNS.

C

A client of AWS has a public blogging website. Each month, the site's users submit two million blog entries. The typical blog post is 200 KB in size. Access to blog posts drops off six months after publication, and users rarely access a blog post one year after publication. Additionally, blog posts receive frequent updates for the first three months after publication, but stop receiving updates after six months. The client wants to use CloudFront to reduce page load times for its users. Which of the following would you suggest to the client? A. Duplicate entries into two different buckets and create two separate CloudFront distributions where S3 access is restricted only to the CloudFront identity B. Create a CloudFront distribution with the "US Europe" price class for US/Europe users and a different CloudFront distribution with the "All Edge Locations" price class for the remaining users. C. Create a CloudFront distribution with S3 access restricted only to the CloudFront identity and partition the blog entry's location in S3 according to the month it was uploaded to be used with CloudFront behaviors. D. Create a CloudFront distribution with Restrict Viewer Access Forward Query string set to true and minimum TTL of 0.

C

A corporation has created a new version of a popular video game and wants to make it freely accessible to the public. The new release package is roughly 5 GB. The organization distributes previous versions using a Linux-based, publicly accessible FTP server located on-premises. The business anticipates that people worldwide will download the latest update. The firm is looking for a solution that maximizes download speed while keeping transfer costs low, regardless of the user's location. Which solution will satisfy these criteria? A. Store the game files on Amazon EBS volumes mounted on Amazon EC2 instances within an Auto Scaling group. Configure an FTP service on the EC2 instances. Use an Application Load Balancer in front of the Auto Scaling group. Publish the game download URL for users to download the package. B. Store the game files on Amazon EFS volumes that are attached to Amazon EC2 instances within an Auto Scaling group. Configure an FTP service on each of the EC2 instances. Use an Application Load Balancer in front of the Auto Scaling group. Publish the game download URL for users to download the package. C. Configure Amazon Route 53 and an Amazon S3 bucket for website hosting. Upload the game files to the S3 bucket. Use Amazon CloudFront for the website. Publish the game download URL for users to download the package. D. Configure Amazon Route 53 and an Amazon S3 bucket for website hosting. Upload the game files to the S3 bucket. Set Requester Pays for the S3 bucket. Publish the game download URL for users to download the package.

C

A firm developed an application using AWS Lambda and AWS CloudFormation. The web application's most recent production release included a bug that resulted in a several-minute outage. The solutions architect must adjust the deployment methodology to accommodate a canary release. Which solution will satisfy these criteria? A. Create an alias for every new deployed version of the Lambda function. Use the AWS CLI update-alias command with the routing-config parameter to distribute the load. B. Deploy the application into a new CloudFormation stack. Use an Amazon Route 53 weighted routing policy to distribute the load. C. Create a version for every new deployed Lambda function. Use the AWS CLI update-function-configuration command with the routing-config parameter to distribute the load. D. Configure AWS CodeDeploy and use CodeDeployDefault.OneAtATime in the Deployment configuration to distribute the load.

C

An ecommerce website hosted on AWS uses an Amazon RDS for MySQL DB instance with General Purpose SSD storage. Based on expected demand, the developers selected a suitable instance type and provisioned 100 GB of storage with a reasonable amount of free space. The website operated normally for a few weeks until a marketing campaign was launched. On the second day of the campaign, users complained of long wait times and timeouts. Amazon CloudWatch metrics showed that both reads and writes to the DB instance were experiencing long response times. CloudWatch metrics show that 40% to 50% of the CPU and memory are being used, while adequate free storage space is still available. There is no sign of database connection problems in the application server logs. What could be the root cause of the issue with the marketing campaign? A. It exhausted the I/O credit balance due to provisioning low disk storage during the setup phase. B. It caused the data in the tables to change frequently, requiring indexes to be rebuilt to optimize queries. C. It exhausted the maximum number of allowed connections to the database instance. D. It exhausted the network bandwidth available to the RDS for MySQL DB instance.

C

Every hour, a corporation runs a batch analysis on its primary transactional database, which runs on an RDS MySQL instance, in order to feed its central Data Warehouse, which runs on Redshift. The batch applications are very slow to execute. Once the batch is complete, the new data must be propagated to the top management dashboard. The dashboard is generated by another on-premises system that is currently initiated in response to a manually received email indicating that an update is necessary. Because the on-premises system is managed by another team, it cannot be modified. How would you optimize this scenario to address performance issues while automating the process as much as possible? A. Replace RDS with Redshift for the batch analysis and SNS to notify the on-premises system to update the dashboard B. Replace RDS with Redshift for the batch analysis and SQS to send a message to the on-premises system to update the dashboard C. Create an RDS Read Replica for the batch analysis and SNS to notify the on-premises system to update the dashboard D. Create an RDS Read Replica for the batch analysis and SQS to send a message to the on-premises system to update the dashboard.

C

An administrator uses Amazon CloudFormation to deploy a three-tier web application that consists of a web tier and an application tier and makes use of Amazon DynamoDB for storage. When constructing the CloudFormation template, which of the following would give the application instances access to the DynamoDB tables without disclosing API credentials? A. Create an Identity and Access Management Role that has the required permissions to read and write from the required DynamoDB table and associate the Role to the application instances by referencing an instance profile. B. Use the Parameters section in the CloudFormation template to have the user input Access and Secret Keys from an already created IAM user that has the permissions required to read and write from the required DynamoDB table. C. Create an Identity and Access Management Role that has the required permissions to read and write from the required DynamoDB table and reference the Role in the instance profile property of the application instance. D. Create an Identity and Access Management user in the CloudFormation template that has permissions to read and write from the required DynamoDB table, use the GetAtt function to retrieve the Access and Secret keys, and pass them to the application instance through user-data.

C

Your firm is looking to develop an order fulfillment process for selling a customized device that takes an average of three to four days to build, with some orders lasting up to six months. On your first day, you anticipate receiving ten orders per day. After six months, 1,000 orders per day are possible, and 10,000 orders per day after twelve months. Orders are verified for consistency before being transferred to your manufacturing facility for production, quality control, packing, and payment processing. Employees may force the process to repeat a step if the product does not satisfy quality requirements at any stage of the process. Customers are notified through email of the progress of their purchases and any important concerns, such as payment failure. Your website is hosted on AWS Elastic Beanstalk, and customer data and orders are stored on an RDS MySQL instance. How can you execute the order fulfillment procedure while maintaining the reliability of email delivery? A. Add a business process management application to your Elastic Beanstalk app servers and re-use the RDS database for tracking order status; use one of the Elastic Beanstalk instances to send emails to customers. B. Use SWF with an Auto Scaling group of activity workers and a decider instance in another Auto Scaling group with min/max=1. Use the decider instance to send emails to customers. C. Use SWF with an Auto Scaling group of activity workers and a decider instance in another Auto Scaling group with min/max=1. Use SES to send emails to customers. D. Use an SQS queue to manage all process tasks. Use an Auto Scaling group of EC2 instances that poll the tasks and execute them. Use SES to send emails to customers.

C
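The selected option relies on Amazon SES for reliable email delivery. A minimal boto3 sketch (the addresses and message are illustrative, and the sender address must be verified in SES):

    import boto3

    ses = boto3.client('ses')

    # Notify a customer of an order status change; addresses are illustrative.
    ses.send_email(
        Source='orders@example.com',
        Destination={'ToAddresses': ['customer@example.com']},
        Message={
            'Subject': {'Data': 'Order update'},
            'Body': {'Text': {'Data': 'Your order has passed quality control.'}},
        },
    )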

Your organization has made a significant quantity of aerial image data available on S3. Previously, in your on-premises environment, you employed a dedicated set of servers to process this data and communicated with the servers through RabbitMQ, an open-source messaging system. After processing, the data would be archived on tape and shipped elsewhere. Your boss advised you to maintain the present design and to cut costs by using AWS archive storage and messaging services. Which of the following is correct? A. Use SQS for passing job messages; use CloudWatch alarms to terminate EC2 worker instances when they become idle. Once data is processed, change the storage class of the S3 objects to Reduced Redundancy Storage. B. Setup Auto Scaled workers triggered by queue depth that use Spot instances to process messages in SQS. Once data is processed, change the storage class of the S3 objects to Reduced Redundancy Storage. C. Setup Auto Scaled workers triggered by queue depth that use Spot instances to process messages in SQS. Once data is processed, change the storage class of the S3 objects to Glacier. D. Use SNS to pass job messages; use CloudWatch alarms to terminate Spot worker instances when they become idle. Once data is processed, change the storage class of the S3 objects to Glacier.

C

A user has constructed a virtual private network (VPC) using the CIDR 20.0.0.0/16. In this VPC, the user has built a single subnet with the CIDR 20.0.0.0/16. The user is attempting to build another subnet for CIDR 20.0.0.1/24 using the same VPC. What is the outcome of this scenario? A. The VPC will modify the first subnet CIDR automatically to allow the second subnet IP range B. The second subnet will be created C. It will throw a CIDR overlaps error D. It is not possible to create a subnet with the same CIDR as VPC

C A Virtual Private Cloud (VPC) is a virtual network dedicated to the user's AWS account. A user can create a subnet within a VPC and launch instances inside that subnet. The user can create a subnet the same size as the VPC. However, he cannot create any other subnet, since the CIDR of the second subnet would conflict with the first subnet. Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html
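A small boto3 sketch of the scenario; the second create_subnet call fails because its CIDR overlaps the first subnet (the error code in the comment is my assumption of what the API returns):

    import boto3
    from botocore.exceptions import ClientError

    ec2 = boto3.client('ec2')

    vpc_id = ec2.create_vpc(CidrBlock='20.0.0.0/16')['Vpc']['VpcId']

    # First subnet consumes the entire VPC range.
    ec2.create_subnet(VpcId=vpc_id, CidrBlock='20.0.0.0/16')

    try:
        # Any further subnet must overlap, so this call is rejected.
        ec2.create_subnet(VpcId=vpc_id, CidrBlock='20.0.0.0/24')
    except ClientError as err:
        print(err.response['Error']['Code'])  # expected: InvalidSubnet.Conflict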

When a user authenticates using Amazon Cognito, their credentials are bootstrapped in a multi-step process. Amazon Cognito supports two distinct authentication flows for public providers. Which of the following correctly names these two flows? A. Authenticated and non-authenticated B. Public and private C. Enhanced and basic D. Single step and multistep

C A user authenticating with Amazon Cognito will go through a multi-step process to bootstrap their credentials. Amazon Cognito has two different flows for authentication with public providers: enhanced and basic. Reference: http://docs.aws.amazon.com/cognito/devguide/identity/concepts/authentication-flow/

The CFO of a business wants to give one of his employees access to only the AWS usage report page. Which of the following IAM policy statements grants the user access to the AWS usage report page? A. "Effect": "Allow", "Action": ["Describe"], "Resource": "Billing" B. "Effect": "Allow", "Action": ["aws-portal:ViewBilling"], "Resource": "*" C. "Effect": "Allow", "Action": ["aws-portal:ViewUsage"], "Resource": "*" D. "Effect": "Allow", "Action": ["AccountUsage"], "Resource": "*"

C AWS Identity and Access Management is a web service which allows organizations to manage users and user permissions for various AWS services. If the CFO wants to allow only AWS usage report page access, the policy for that IAM user will be as given below:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["aws-portal:ViewUsage"],
      "Resource": "*"
    }
  ]
}
Reference: http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/billing-permissions-ref.html
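For completeness, a sketch, assuming boto3 and illustrative user and policy names, of attaching this statement to the employee as an inline policy:

    import json
    import boto3

    iam = boto3.client('iam')

    policy = {
        'Version': '2012-10-17',
        'Statement': [{
            'Effect': 'Allow',
            'Action': ['aws-portal:ViewUsage'],
            'Resource': '*',
        }],
    }

    # Attach as an inline policy on the employee's IAM user (names are illustrative).
    iam.put_user_policy(
        UserName='finance-analyst',
        PolicyName='usage-report-only',
        PolicyDocument=json.dumps(policy),
    )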

In June, one of the AWS account owners suffered a major setback when his account was compromised and the intruder destroyed all of his data. This had a significant impact on the business. Which of the following precautions would not have helped prevent this action? A. Setup MFA for each user as well as for the root account user. B. Take a backup of the critical data to an offsite / on-premises location. C. Create an AMI and a snapshot of the data at regular intervals and keep copies in separate regions. D. Do not share the AWS access and secret access keys with others and do not store them inside programs; instead use IAM roles.

C AWS security follows the shared security model where the user is as responsible as Amazon. If the user wants secure access to AWS while hosting applications on EC2, the first security rule to follow is to enable MFA for all users; this adds an extra security layer. Second, the user should never give his access or secret access keys to anyone, nor store them inside programs; the better solution is to use IAM roles. For the organization's critical data, the user should keep an offsite / on-premises backup, which helps recover critical data in case of a security breach. It is recommended to have AWS AMIs and snapshots, and to keep copies in other regions, as they help in a DR scenario. However, in the case of a security breach of the account they may not be very helpful, as the hacker can delete them too. Therefore, creating an AMI and a snapshot of the data at regular intervals and keeping a copy in separate regions would not have helped prevent this action.

Multiple IAM users have been created by the owner of an AWS account. One of these IAM users, John, has access to CloudWatch but not to EC2 services. John has configured an alarm action that terminates EC2 instances when their CPU usage falls below a predefined threshold. What happens if the CPU utilization of an EC2 instance falls below the threshold John has specified? A. CloudWatch will stop the instance when the action is executed B. Nothing will happen. John cannot set an alarm on EC2 since he does not have the permission. C. Nothing will happen. John can set up the action, but it will not be executed because he does not have EC2 access through IAM policies. D. Nothing will happen because it is not possible to stop the instance using the CloudWatch alarm

C Amazon CloudWatch alarms watch a single metric over a time period that the user specifies, and perform one or more actions based on the value of the metric relative to a given threshold over a number of time periods. The user can set up an action that stops the instances when their CPU utilization is below a certain threshold for a certain period of time. The EC2 action can either stop or terminate the instance. If the IAM user has read/write permissions for Amazon CloudWatch but not for Amazon EC2, he can still create an alarm; however, the stop or terminate actions will not be performed on the Amazon EC2 instance. Reference: http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/UsingAlarmActions.html
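As an illustrative boto3 sketch of such an alarm (the instance ID and region are placeholders, not from the question), the EC2 action is attached through the alarm's AlarmActions:

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Act when average CPU stays below 10% for two consecutive 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName="stop-idle-instance",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=10.0,
    ComparisonOperator="LessThanThreshold",
    # The built-in EC2 stop action; the alarm can still be created without EC2
    # permissions, but the action will not run, which is the scenario above.
    AlarmActions=["arn:aws:automate:us-east-1:ec2:stop"],
)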

True or False: In Amazon ElastiCache Redis replication groups, you may change the roles of the cache nodes within the replication group for performance tuning purposes, with the primary and one of the replicas exchanging roles. A. True, however, you get lower performance. B. FALSE C. TRUE D. False, you must recreate the replication group to improve performance tuning.

C In Amazon ElastiCache, a replication group is a collection of Redis Cache Clusters, with one primary read-write cluster and up to five secondary, read-only clusters, which are called read replicas. You can change the roles of the cache clusters within the replication group, with the primary cluster and one of the replicas exchanging roles. You might decide to do this for performance tuning reasons. Reference: http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/Replication.Redis.Groups.html
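A minimal boto3 sketch of such a role change; the replication group and cluster IDs are placeholders:

import boto3

elasticache = boto3.client("elasticache")

# Promote one of the read replicas to primary; the old primary becomes a replica.
elasticache.modify_replication_group(
    ReplicationGroupId="my-redis-group",
    PrimaryClusterId="my-redis-group-002",
    ApplyImmediately=True,
)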

A MySQL RDS instance was created with PIOPS. Which of the following statements will help the user understand the benefits of PIOPS? A. The user can achieve additional dedicated capacity for EBS I/O with an enhanced RDS option B. It uses a standard EBS volume with an optimized configuration stack C. It uses optimized EBS volumes and an optimized configuration stack D. It provides dedicated network bandwidth between EBS and RDS

C RDS DB instance storage comes in two types: standard and provisioned IOPS. Standard storage is allocated on the Amazon EBS volumes and connected to the user's DB instance. Provisioned IOPS uses optimized EBS volumes and an optimized configuration stack. It provides additional, dedicated capacity for EBS I/O. Reference: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html

A firm is migrating a mission-critical application to AWS. It is a three-tier web application built on top of an Oracle database. Encryption of data in transit and at rest is required. The database currently stores 12 terabytes of data. Internal network connectivity to the source Oracle database is permitted, and the organization wants to reduce operating expenses wherever feasible by using AWS Managed Services. The web and application layers have been completely migrated. The database comprises a small number of tables and a straightforward structure based on primary keys; however, it contains a large number of Binary Large Object (BLOB) fields. Due to license constraints, the company cannot use the database's native replication capabilities. Which database migration option will have the LEAST effect on the availability of the application? A. Provision an Amazon RDS for Oracle instance. Host the RDS database within a virtual private cloud (VPC) subnet with internet access, and set up the RDS database as an encrypted Read Replica of the source database. Use SSL to encrypt the connection between the two databases. Monitor the replication performance by watching the RDS ReplicaLag metric. During the application maintenance window, shut down the on-premises database and switch over the application connection to the RDS instance when there is no more replication lag. Promote the Read Replica into a standalone database instance. B. Provision an Amazon EC2 instance and install the same Oracle database software. Create a backup of the source database using the supported tools. During the application maintenance window, restore the backup into the Oracle database running in the EC2 instance. Set up an Amazon RDS for Oracle instance, and create an import job between the databases hosted in AWS. Shut down the source database and switch over the database connections to the RDS instance when the job is complete. C. Use AWS DMS to load and replicate the dataset between the on-premises Oracle database and the replication instance hosted on AWS. Provision an Amazon RDS for Oracle instance with Transparent Data Encryption (TDE) enabled and configure it as a target for the replication instance. Create a customer-managed AWS KMS master key to set it as the encryption key for the replication instance. Use AWS DMS tasks to load the data into the target RDS instance. During the application maintenance window and after the load tasks reach the ongoing replication phase, switch the database connections to the new database. D. Create a compressed full database backup of the on-premises Oracle database during an application maintenance window. While the backup is being performed, provision a 10 Gbps AWS Direct Connect connection to increase the transfer speed of the database backup files to Amazon S3, and shorten the maintenance window period. Use SSL/TLS to copy the files over the Direct Connect connection. When the backup files are successfully copied, start the maintenance window, and use any of the Amazon RDS supported tools to import the data into a newly provisioned Amazon RDS for Oracle instance with encryption enabled. Wait until the data is fully loaded and switch over the database connections to the new database. Delete the Direct Connect connection to cut unnecessary charges.

C Reference: https://aws.amazon.com/blogs/apn/oracle-database-encryption-options-on-amazon-rds/ https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.Oracle.Options.AdvSecurity.htm (DMS in transit encryption) https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.html
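As a sketch of the DMS portion of option C (the endpoint and replication instance ARNs are placeholders for resources created beforehand), a full-load-plus-ongoing-replication task might look like this in boto3:

import boto3

dms = boto3.client("dms")

# Full load followed by change data capture (CDC), so the source database stays
# available until the cutover during the maintenance window.
dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-rds-oracle",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SOURCE",
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TARGET",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:INSTANCE",
    MigrationType="full-load-and-cdc",
    TableMappings='{"rules": [{"rule-type": "selection", "rule-id": "1", "rule-name": "1", "object-locator": {"schema-name": "%", "table-name": "%"}, "rule-action": "include"}]}',
)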

A business has an application that creates a weather forecast every 15 minutes with a resolution of 1 billion unique locations, each around 20 bytes in size (20 gigabytes per forecast). The forecast data is accessed worldwide roughly 5 million times per hour (1,400 requests per second), and up to ten times more often during severe weather events. Each update overwrites the forecast data. Users of the current weather forecast application expect responses to queries in less than two seconds. Which architecture satisfies the specified request rate and response time requirements? A. Store forecast locations in an Amazon ES cluster. Use an Amazon CloudFront distribution targeting an Amazon API Gateway endpoint with AWS Lambda functions responding to queries as the origin. Enable API caching on the API Gateway stage with a cache-control timeout set for 15 minutes. B. Store forecast locations in an Amazon EFS volume. Create an Amazon CloudFront distribution that targets an Elastic Load Balancing group of an Auto Scaling fleet of Amazon EC2 instances that have mounted the Amazon EFS volume. Set the cache-control timeout for 15 minutes in the CloudFront distribution. C. Store forecast locations in an Amazon ES cluster. Use an Amazon CloudFront distribution targeting an API Gateway endpoint with AWS Lambda functions responding to queries as the origin. Create a Lambda@Edge function that caches the data locally at edge locations for 15 minutes. D. Store forecast locations in Amazon S3 as individual objects. Create an Amazon CloudFront distribution targeting an Elastic Load Balancing group of an Auto Scaling fleet of EC2 instances, querying the origin of the S3 object. Set the cache-control timeout for 15 minutes in the CloudFront distribution.

C Reference: https://aws.amazon.com/blogs/networking-and-content-delivery/lambdaedge-design-best-practices/

Two Amazon EC2 instances are used by an organization: ✑ The first runs an order processing and inventory management application. ✑ The second runs the queueing system. At various times of the year, several thousand orders are placed each second. When the queueing system went down, several orders were lost. Additionally, the organization's inventory application shows inaccurate item counts due to duplicate processing of certain orders. What should be done to ensure that the applications can handle the growing volume of orders? A. Put the ordering and inventory applications into their own AWS Lambda functions. Have the ordering application write the messages into an Amazon SQS FIFO queue. B. Put the ordering and inventory applications into their own Amazon ECS containers, and create an Auto Scaling group for each application. Then, deploy the message queuing server in multiple Availability Zones. C. Put the ordering and inventory applications into their own Amazon EC2 instances, and create an Auto Scaling group for each application. Use Amazon SQS standard queues for the incoming orders, and implement idempotency in the inventory application. D. Put the ordering and inventory applications into their own Amazon EC2 instances. Write the incoming orders to an Amazon Kinesis data stream. Configure AWS Lambda to poll the stream and update the inventory application.

C Reference: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/standard-queues.html
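Option C hinges on idempotency because standard queues deliver at least once. A rough sketch of an idempotent consumer, deduplicating with a conditional DynamoDB write; the queue URL and table name are placeholders:

import boto3

sqs = boto3.client("sqs")
dynamodb = boto3.client("dynamodb")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"

messages = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10).get("Messages", [])
for msg in messages:
    order_id = msg["Body"]  # assume the message body carries the order ID
    try:
        # The conditional write succeeds only the first time an order ID is seen,
        # so a duplicate delivery can never update the inventory twice.
        dynamodb.put_item(
            TableName="ProcessedOrders",
            Item={"OrderId": {"S": order_id}},
            ConditionExpression="attribute_not_exists(OrderId)",
        )
        # ...update inventory here, exactly once per order...
    except dynamodb.exceptions.ConditionalCheckFailedException:
        pass  # duplicate delivery; already processed
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])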

On Amazon EC2 instances, a three-tier web application is running. Cron daemons are used to trigger scripts that collect and transfer web server, application, and database logs to a centralized location every hour. Occasionally, scaling events or unexpected outages caused instances to terminate before the most recent logs were collected, resulting in the loss of log data. Which of the following is the MOST reliable method of collecting and preserving log files? A. Update the cron jobs to run every 5 minutes instead of every hour to reduce the possibility of log messages being lost in an outage. B. Use Amazon CloudWatch Events to trigger Amazon Systems Manager Run Command to invoke the log collection scripts more frequently to reduce the possibility of log messages being lost in an outage. C. Use the Amazon CloudWatch Logs agent to stream log messages directly to CloudWatch Logs. Configure the agent with a batch count of 1 to reduce the possibility of log messages being lost in an outage. D. Use Amazon CloudWatch Events to trigger AWS Lambda to SSH into each running instance and invoke the log collection scripts more frequently to reduce the possibility of log messages being lost in an outage.

C Reference: https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AgentReference.html

A user wishes to start numerous Amazon EC2 instances identical to the currently operating instance. Which of the following settings is not duplicated by Amazon EC2 when the user selects the option "Launch more like this" in the launch wizard? A. Termination protection B. Tenancy setting C. Storage D. Shutdown behavior

C The Amazon EC2 console provides a "Launch more like this" wizard option that enables the user to use a current instance as a template for launching other instances. This option automatically populates the Amazon EC2 launch wizard with certain configuration details from the selected instance. The following configuration details are copied from the selected instance into the launch wizard:
- AMI ID
- Instance type
- Availability Zone, or the VPC and subnet in which the selected instance is located
- Public IPv4 address. If the selected instance currently has a public IPv4 address, the new instance receives a public IPv4 address, regardless of the selected instance's default public IPv4 address setting. For more information about public IPv4 addresses, see Public IPv4 Addresses and External DNS Hostnames.
- Placement group, if applicable
- IAM role associated with the instance, if applicable
- Shutdown behavior setting (stop or terminate)
- Termination protection setting (true or false)
- CloudWatch monitoring (enabled or disabled)
- Amazon EBS-optimization setting (true or false)
- Tenancy setting, if launching into a VPC (shared or dedicated)
- Kernel ID and RAM disk ID, if applicable
- User data, if specified
- Tags associated with the instance, if applicable
- Security groups associated with the instance
The following configuration details are not copied from the selected instance; instead, the wizard applies their default settings or behavior:
- (VPC only) Number of network interfaces: the default is one network interface, which is the primary network interface (eth0).
- Storage: the default storage configuration is determined by the AMI and the instance type.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/launching-instance.html

A business intends to host an application on the AWS Virtual Private Cloud. The business needs dedicated instances. However, an AWS specialist advised the business against using dedicated instances with a VPC due to a few constraints inherent in the architecture. Which of the following assertions is not a restriction on the use of dedicated instances in a VPC? A. All instances launched with this VPC will always be dedicated instances and the user cannot use a default tenancy model for them. B. It does not support AWS RDS with a dedicated tenancy VPC. C. The user cannot use Reserved Instances with a dedicated tenancy model. D. The EBS volume will not be on the same tenant hardware as the EC2 instance even though the user has configured dedicated tenancy.

C The Amazon Virtual Private Cloud (Amazon VPC) allows the user to define a virtual networking environment in a private, isolated section of the Amazon Web Services (AWS) cloud. The user has complete control over the virtual networking environment. Dedicated instances are Amazon EC2 instances that run in a Virtual Private Cloud (VPC) on hardware that is dedicated to a single customer. The client's dedicated instances are physically isolated at the host hardware level from instances that are not dedicated instances as well as from instances that belong to other AWS accounts. All instances launched with the dedicated tenancy model of VPC will always be dedicated instances. Dedicated tenancy has a limitation that it may not support a few services, such as RDS. Even the EBS will not be on dedicated hardware. However, the user can save some cost as well as reserve some capacity by using a Reserved Instance model with dedicated tenancy. Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/dedicated-instance.html
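For reference, dedicated tenancy is fixed when the VPC is created; a one-call boto3 sketch (the CIDR is a placeholder):

import boto3

ec2 = boto3.client("ec2")

# Every instance launched into this VPC runs on single-tenant hardware,
# regardless of the tenancy requested at launch time.
ec2.create_vpc(CidrBlock="10.0.0.0/16", InstanceTenancy="dedicated")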

An Auto Scaling group is operating at the desired capacity of five instances, and a CloudWatch alarm triggers it to increase capacity by one. A five-minute cooldown period follows. After two minutes, CloudWatch sends another trigger reducing the desired capacity by one. How many instances will there be at the end of four minutes? A. 4 B. 5 C. 6 D. 7

C The cooldown period is the time between the end of one scaling activity (a launch or a termination) and the start of another. During the cooldown period, Auto Scaling does not allow the desired capacity of the Auto Scaling group to be changed by any other CloudWatch alarm. Thus, in this case the trigger from the second alarm will have no effect. Reference: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/AS_Concepts.html#healthcheck
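The cooldown window itself is configurable on the group; a brief boto3 sketch with a placeholder group name:

import boto3

autoscaling = boto3.client("autoscaling")

# A 300-second default cooldown: triggers that fire inside this window, like the
# scale-in alarm two minutes after the scale-out above, are ignored.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="my-asg",
    DefaultCooldown=300,
)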

You have been tasked with creating multiple AWS Data Pipeline schedules for different pipeline activities. Which of the following would accomplish this task? A. Creating multiple pipeline definition files B. Defining multiple pipeline definitions in your schedule objects file and associating the desired schedule to the correct activity via its schedule field C. Defining multiple schedule objects in your pipeline definition file and associating the desired schedule to the correct activity via its schedule field D. Defining multiple schedule objects in the schedule field

C To define multiple schedules for different activities in the same pipeline, in AWS Data Pipeline, you should define multiple schedule objects in your pipeline definition file and associate the desired schedule to the correct activity via its schedule field. As an example of this, it could allow you to define a pipeline in which log files are stored in Amazon S3 each hour to drive generation of an aggregate report once a day. Reference: https://aws.amazon.com/datapipeline/faqs/
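A condensed boto3 sketch of such a definition, with two schedule objects each referenced through an activity's schedule field (names are placeholders, and the runner and Default objects a real pipeline also needs are omitted):

import boto3

dp = boto3.client("datapipeline")

objects = [
    {"id": "HourlySchedule", "name": "HourlySchedule", "fields": [
        {"key": "type", "stringValue": "Schedule"},
        {"key": "startAt", "stringValue": "FIRST_ACTIVATION_DATE_TIME"},
        {"key": "period", "stringValue": "1 hour"},
    ]},
    {"id": "DailySchedule", "name": "DailySchedule", "fields": [
        {"key": "type", "stringValue": "Schedule"},
        {"key": "startAt", "stringValue": "FIRST_ACTIVATION_DATE_TIME"},
        {"key": "period", "stringValue": "1 day"},
    ]},
    # Each activity selects its own schedule through the "schedule" reference.
    {"id": "StoreHourlyLogs", "name": "StoreHourlyLogs", "fields": [
        {"key": "type", "stringValue": "ShellCommandActivity"},
        {"key": "command", "stringValue": "echo store hourly logs"},
        {"key": "schedule", "refValue": "HourlySchedule"},
    ]},
    {"id": "DailyReport", "name": "DailyReport", "fields": [
        {"key": "type", "stringValue": "ShellCommandActivity"},
        {"key": "command", "stringValue": "echo generate daily report"},
        {"key": "schedule", "refValue": "DailySchedule"},
    ]},
]

pipeline_id = dp.create_pipeline(name="multi-schedule-demo", uniqueId="demo-1")["pipelineId"]
dp.put_pipeline_definition(pipelineId=pipeline_id, pipelineObjects=objects)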

Determine which of the following statements is accurate regarding the use of an IAM role to grant permissions to applications running on Amazon EC2 instances. A. When AWS credentials are rotated, developers have to update only the root Amazon EC2 instance that uses their credentials. B. When AWS credentials are rotated, developers have to update only the Amazon EC2 instance on which the password policy was applied and which uses their credentials. C. When AWS credentials are rotated, you don't have to manage credentials and you don't have to worry about long-term security risks. D. When AWS credentials are rotated, you must manage credentials and you should consider precautions for long-term security risks.

C Using IAM roles to grant permissions to applications that run on EC2 instances requires a bit of extra configuration. Because role credentials are temporary and rotated automatically, you don't have to manage credentials, and you don't have to worry about long-term security risks. Reference: http://docs.aws.amazon.com/IAM/latest/UserGuide/role-usecase-ec2app.html
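In practice, this is what makes the role-based approach attractive: SDK code carries no keys at all. A short sketch that would run on an instance with an attached role:

import boto3

# No access keys anywhere in the code or on disk: on an EC2 instance with an
# IAM role attached, boto3 pulls temporary credentials from the instance
# metadata service and refreshes them automatically as they rotate.
s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])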

Can you put an IfExists condition at the end of a Null condition in an IAM policy? A. Yes, you can add an IfExists condition at the end of a Null condition but not in all Regions. B. Yes, you can add an IfExists condition at the end of a Null condition depending on the condition. C. No, you cannot add an IfExists condition at the end of a Null condition. D. Yes, you can add an IfExists condition at the end of a Null condition.

C Within an IAM policy, IfExists can be added to the end of any condition operator except the Null condition. It can be used to indicate that conditional comparison needs to happen if the policy key is present in the context of a request; otherwise, it can be ignored. Reference: http://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html
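A sketch showing both operators side by side in one policy (the policy name and condition values are illustrative only): StringEqualsIfExists tolerates a missing key, while Null tests for the key's presence itself, which is why appending IfExists to it makes no sense.

import boto3
import json

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "ec2:RunInstances",
        "Resource": "*",
        "Condition": {
            # Applies only when ec2:InstanceType is present in the request context.
            "StringEqualsIfExists": {"ec2:InstanceType": ["t2.micro", "t2.small"]},
            # Requires aws:TokenIssueTime to be absent (i.e., long-term credentials).
            "Null": {"aws:TokenIssueTime": "true"},
        },
    }],
}

iam.create_policy(PolicyName="ifexists-demo", PolicyDocument=json.dumps(policy))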

A firm has multiple lines of business (LOBs), all of which report to the parent corporation. The company's solutions architect has been tasked with developing a solution that meets the following requirements: ✑ Produce a single AWS invoice for all of the LOBs' AWS accounts. ✑ The invoice should break out the expenses for each LOB account. ✑ Allow for the restriction of services and features in LOB accounts in accordance with the company's governance policy. ✑ Each LOB account should retain full administrator access, regardless of the governance model. Which measures should the solutions architect take in combination to satisfy these requirements? (Select two.) A. Use AWS Organizations to create an organization in the parent account for each LOB. Then, invite each LOB account to the appropriate organization. B. Use AWS Organizations to create a single organization in the parent account. Then, invite each LOB's AWS account to join the organization. C. Implement service quotas to define the services and features that are permitted and apply the quotas to each LOB as appropriate. D. Create an SCP that allows only approved services and features, then apply the policy to the LOB accounts. Enable consolidated billing in the parent account's billing console and link the LOB accounts.

CD

You're designing the Internet connectivity for your virtual private cloud. The web servers must be accessible over the Internet, and the application's architecture must be highly available. Which options are worth considering? (Select two.) A. Configure a NAT instance in your VPC. Create a default route via the NAT instance and associate it with all subnets. Configure a DNS A record that points to the NAT instance public IP address. B. Configure a CloudFront distribution and configure the origin to point to the private IP addresses of your Web servers. Configure a Route53 CNAME record to your CloudFront distribution. C. Place all your web servers behind ELB. Configure a Route53 CNAME to point to the ELB DNS name. D. Assign EIPs to all web servers. Configure a Route53 record set with all EIPs, with health checks and DNS failover. E. Configure ELB with an EIP. Place all your Web servers behind ELB. Configure a Route53 A record that points to the EIP.

CD

On AWS, a business hosts an IoT platform. IoT sensors located around the enterprise send data to the company's Node.js API servers, which are hosted on Amazon EC2 instances behind an Application Load Balancer. The data is saved on a 4 TB General Purpose SSD volume in an Amazon RDS MySQL DB instance. The number of sensors the firm has deployed in the field has grown over time and is expected to continue to grow dramatically. The API servers are consistently overloaded, and RDS metrics show high write latency. Which of the following changes, taken together, will permanently resolve the issues and allow for future growth as additional sensors are provisioned, while keeping the platform cost-effective? (Select two.) A. Resize the MySQL General Purpose SSD storage to 6 TB to improve the volume's IOPS B. Re-architect the database tier to use Amazon Aurora instead of an RDS MySQL DB instance and add read replicas C. Leverage Amazon Kinesis Data Streams and AWS Lambda to ingest and process the raw data D. Use AWS X-Ray to analyze and debug application issues and add more API servers to match the load E. Re-architect the database tier to use Amazon DynamoDB instead of an RDS MySQL DB instance

CE

A business makes use of multiple AWS accounts. The firm has a shared services account as well as several other accounts for various projects. A team has a virtual private cloud (VPC) in a project account. The team wants to link this VPC to the corporate network through an AWS Direct Connect gateway in the shared services account. The team wants to automate the association of the virtual private gateway with the Direct Connect gateway by using a previously tested AWS Lambda function during the deployment of its VPC networking stack. The Lambda function code can assume a role by using AWS Security Token Service (AWS STS). The team is deploying its infrastructure using AWS CloudFormation. Which combination of actions will satisfy these criteria? (Select three.) A. Deploy the Lambda function to the project account. Update the Lambda function's IAM role with the directconnect:* permission. B. Create a cross-account IAM role in the shared services account that grants the Lambda function the directconnect:* permission. Add the sts:AssumeRole permission to the IAM role that is associated with the Lambda function in the shared services account. C. Add a custom resource to the CloudFormation networking stack that references the Lambda function in the project account. D. Deploy the Lambda function that is performing the association to the shared services account. Update the Lambda function's IAM role with the directconnect:* permission. E. Create a cross-account IAM role in the shared services account that grants the sts:AssumeRole permission to the Lambda function with the directconnect:* permission acting as a resource. Add the sts:AssumeRole permission with this cross-account IAM role as a resource to the IAM role that belongs to the Lambda function in the project account. F. Add a custom resource to the CloudFormation networking stack that references the Lambda function in the shared services account.

CEF

A large organization has several business units. Each business unit has multiple AWS accounts for a variety of purposes. The company's CIO recognizes that each business unit has data that would benefit from being shared with other units. Approximately 10 PB of data must be shared with users across 1,000 AWS accounts. Because the data is confidential, some of it should be restricted to people with specific job functions. Some of the data is used to improve the throughput of compute-intensive workloads, such as simulations. The number of accounts changes constantly as a result of new initiatives, acquisitions, and divestitures. A Solutions Architect has been asked to design a system that enables the company's workers to share data for use in AWS. Which approach will provide scalable, secure data sharing? A. Store the data in a single Amazon S3 bucket. Create an IAM role for every combination of job type and business unit that allows for appropriate read/write access based on object prefixes in the S3 bucket. The roles should have trust policies that allow the business unit's AWS accounts to assume their roles. Use IAM in each business unit's AWS account to prevent them from assuming roles for a different job type. Users get credentials to access the data by using AssumeRole from their business unit's AWS account. Users can then use those credentials with an S3 client. B. Store the data in a single Amazon S3 bucket. Write a bucket policy that uses conditions to grant read and write access where appropriate, based on each user's business unit and job type. Determine the business unit with the AWS account accessing the bucket and the job type with a prefix in the IAM user's name. Users can access data by using IAM credentials from their business unit's AWS account with an S3 client. C. Store the data in a series of Amazon S3 buckets. Create an application running in Amazon EC2 that is integrated with the company's identity provider (IdP) that authenticates users and allows them to download or upload data through the application. The application uses the business unit and job type information in the IdP to control what users can upload and download through the application. The users can access the data through the application's API. D. Store the data in a series of Amazon S3 buckets. Create an AWS STS token vending machine that is integrated with the company's identity provider (IdP). When a user logs in, have the token vending machine attach an IAM policy that assumes the role that limits the user's access and/or upload only the data the user is authorized to access. Users can get credentials by authenticating to the token vending machine's website or API and then use those credentials with an S3 client.

D
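The mechanics behind a token vending machine are plain STS calls; a hedged sketch with placeholder ARNs and names:

import boto3

sts = boto3.client("sts")

# The vending machine assumes a narrowly scoped role on the user's behalf and
# hands back only the temporary credentials.
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/DataReaderForUserScope",
    RoleSessionName="user-session",
    DurationSeconds=3600,
)["Credentials"]

# The user's S3 client works only within the assumed role's permissions.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)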

A business maintains a publicly accessible application that exposes a Java-based web service through a RESTful API. The service is hosted on Apache Tomcat on a single server in a data center and runs at a constant 30% CPU utilization. With the introduction of a new product, API usage is projected to increase tenfold. The company requires a seamless migration of the application to AWS and needs it to scale to meet demand. The organization has already decided to reroute traffic using Amazon Route 53 and CNAME records. How can these requirements be met with the LEAST amount of effort? A. Use AWS Elastic Beanstalk to deploy the Java web service and enable Auto Scaling. Then switch the application to use the new web service. B. Lift and shift the Apache server to the cloud using AWS SMS. Then switch the application to direct web service traffic to the new instance. C. Create a Docker image and migrate the image to Amazon ECS. Then change the application code to direct web service queries to the ECS container. D. Modify the application to call the web service via Amazon API Gateway. Then create a new AWS Lambda Java function to run the Java web service code. After testing, change API Gateway to use the Lambda function.

D

A business currently stores data on Amazon EBS and Amazon RDS. The organization aims to implement a pilot light disaster recovery strategy in a separate AWS Region. The company's RTO is six hours and its RPO is twenty-four hours. Which option would meet the requirements at the lowest cost? A. Use AWS Lambda to create daily EBS and RDS snapshots, and copy them to the disaster recovery region. Use Amazon Route 53 with an active-passive failover configuration. Use Amazon EC2 in an Auto Scaling group with the capacity set to 0 in the disaster recovery region. B. Use AWS Lambda to create daily EBS and RDS snapshots, and copy them to the disaster recovery region. Use Amazon Route 53 with an active-active failover configuration. Use Amazon EC2 in an Auto Scaling group configured in the same way as in the primary region. C. Use Amazon ECS to handle long-running tasks to create daily EBS and RDS snapshots, and copy them to the disaster recovery region. Use Amazon Route 53 with an active-passive failover configuration. Use Amazon EC2 in an Auto Scaling group with the capacity set to 0 in the disaster recovery region. D. Use EBS and RDS cross-region snapshot copy capability to create snapshots in the disaster recovery region. Use Amazon Route 53 with an active-active failover configuration. Use Amazon EC2 in an Auto Scaling group with the capacity set to 0 in the disaster recovery region.

D

A corporation manages several accounts using AWS Organizations and a single OU named Production. Each account is a member of the Production OU. Administrators control access to restricted services by using deny list SCPs at the organization's root. The corporation recently acquired a new business unit and invited the new unit's existing AWS account to join the organization. Once the new business unit's administrators were onboarded, they discovered that they were unable to update existing AWS Config rules to meet the company's standards. Which solution allows administrators to make adjustments while still enforcing existing policies without incurring additional long-term maintenance costs? A. Remove the organization's root SCPs that limit access to AWS Config. Create AWS Service Catalog products for the company's standard AWS Config rules and deploy them throughout the organization, including the new account. B. Create a temporary OU named Onboarding for the new account. Apply an SCP to the Onboarding OU to allow AWS Config actions. Move the new account to the Production OU when adjustments to AWS Config are complete. C. Convert the organization's root SCPs from deny list SCPs to allow list SCPs to allow the required services only. Temporarily apply an SCP to the organization's root that allows AWS Config actions for principals only in the new account. D. Create a temporary OU named Onboarding for the new account. Apply an SCP to the Onboarding OU to allow AWS Config actions. Move the organization's root SCP to the Production OU. Move the new account to the Production OU when adjustments to AWS Config are complete.

D

A corporation uses Amazon SQS and AWS Lambda to operate an ordering system, with each order received as a JSON message. The firm recently held a marketing event that caused a tenfold rise in order volume. With this rise, the ordering system began to exhibit the following undesirable behaviors: ✑ Lambda errors during order processing result in queue backlogs. ✑ The same orders have been processed multiple times. A Solutions Architect has been tasked with resolving the existing ordering system issues and adding the following resilience features: ✑ Retain problematic orders for analysis. ✑ Send a notification if the number of errors exceeds a predefined threshold. How should the Solutions Architect meet these requirements? A. Receive multiple messages with each Lambda invocation, add error handling to message processing code and delete messages after processing, increase the visibility timeout for the messages, create a dead letter queue for messages that could not be processed, create an Amazon CloudWatch alarm on Lambda errors for notification. B. Receive single messages with each Lambda invocation, put additional Lambda workers to poll the queue, delete messages after processing, increase the message timer for the messages, use Amazon CloudWatch Logs for messages that could not be processed, create a CloudWatch alarm on Lambda errors for notification. C. Receive multiple messages with each Lambda invocation, use long polling when receiving the messages, log the errors from the message processing code using Amazon CloudWatch Logs, create a dead letter queue with AWS Lambda to capture failed invocations, create CloudWatch events on Lambda errors for notification. D. Receive multiple messages with each Lambda invocation, add error handling to message processing code and delete messages after processing, increase the visibility timeout for the messages, create a delay queue for messages that could not be processed, create an Amazon CloudWatch metric on Lambda errors for notification.

D

A corporation wants to migrate a 30 TB Oracle data warehouse from on premises to Amazon Redshift. The organization used the AWS Schema Conversion Tool (AWS SCT) to convert the schema of the existing data warehouse to an Amazon Redshift schema. Additionally, the organization used a migration assessment report to identify manual tasks that needed to be completed. The organization must move the data to the new Amazon Redshift cluster within a two-week data freeze window. The only network link between the on-premises data warehouse and AWS is a 50 Mbps internet connection. Which migration approach satisfies these criteria? A. Create an AWS Database Migration Service (AWS DMS) replication instance. Authorize the public IP address of the replication instance to reach the data warehouse through the corporate firewall. Create a migration task to run at the beginning of the data freeze period. B. Install the AWS SCT extraction agents on the on-premises servers. Define the extract, upload, and copy tasks to send the data to an Amazon S3 bucket. Copy the data into the Amazon Redshift cluster. Run the tasks at the beginning of the data freeze period. C. Install the AWS SCT extraction agents on the on-premises servers. Create a Site-to-Site VPN connection. Create an AWS Database Migration Service (AWS DMS) replication instance that is the appropriate size. Authorize the IP address of the replication instance to be able to access the on-premises data warehouse through the VPN connection. D. Create a job in AWS Snowball Edge to import data into Amazon S3. Install AWS SCT extraction agents on the on-premises servers. Define the local and AWS Database Migration Service (AWS DMS) tasks to send the data to the Snowball Edge device. When the Snowball Edge device is returned to AWS and the data is available in Amazon S3, run the AWS DMS subtask to copy the data to Amazon Redshift.

D

A firm is developing a voting system for a popular television program; viewers will watch the performances and then vote for their favorite artist on the show's website. The site is expected to receive millions of visits within a short period once the event concludes. Visitors will first log in using their Amazon.com credentials and then vote. After voting closes, the website will display the vote totals. The firm needs to design the site so that it can handle the surge in traffic while maintaining good performance, but also wants to keep costs low. Which of the following design patterns should they employ? A. Use CloudFront and an Elastic Load Balancer in front of an auto-scaled set of web servers. The web servers will first call the Login with Amazon service to authenticate the user, then process the user's vote and store the result in a Multi-AZ Relational Database Service instance. B. Use CloudFront and the static website hosting feature of S3 with the JavaScript SDK to call the Login with Amazon service to authenticate the user, and use IAM roles to gain permissions to a DynamoDB table to store the user's vote. C. Use CloudFront and an Elastic Load Balancer in front of an auto-scaled set of web servers. The web servers will first call the Login with Amazon service to authenticate the user; the web servers will then process the user's vote and store the result in a DynamoDB table, using IAM roles for EC2 instances to gain permissions to the DynamoDB table. D. Use CloudFront and an Elastic Load Balancer in front of an auto-scaled set of web servers. The web servers will first call the Login with Amazon service to authenticate the user; the web servers will then process the user's vote and store the result in an SQS queue, using IAM roles for EC2 instances to gain permissions to the SQS queue. A set of application servers will then retrieve the items from the queue and store the result in a DynamoDB table.

D

A multimedia corporation must cost-effectively deliver video-on-demand (VOD) content to its members. The video files range in size from 1 to 15 GB and are watched frequently during the first six months after production, after which access drops significantly. The firm requires that all video files remain instantly accessible to members. There are already around 30,000 files, and the business estimates that number will double over time. Which approach is the MOST cost-effective for distributing the company's on-demand content? A. Store the video files in an Amazon S3 bucket using S3 Intelligent-Tiering. Use Amazon CloudFront to deliver the content with the S3 bucket as the origin. B. Use AWS Elemental MediaConvert and store the adaptive bitrate video files in Amazon S3. Configure an AWS Elemental MediaPackage endpoint to deliver the content from Amazon S3. C. Store the video files in Amazon Elastic File System (Amazon EFS) Standard. Enable EFS lifecycle management to move the video files to EFS Infrequent Access after 6 months. Create an Amazon EC2 Auto Scaling group behind an Elastic Load Balancer to deliver the content from Amazon EFS. D. Store the video files in Amazon S3 Standard. Create S3 Lifecycle rules to move the video files to S3 Standard-Infrequent Access (S3 Standard-IA) after 6 months and to S3 Glacier Deep Archive after 1 year. Use Amazon CloudFront to deliver the content with the S3 bucket as the origin.

D

A retailer's application saves invoice files in an Amazon S3 bucket and information about the files in an Amazon DynamoDB table. The application runs in both us-east-1 and eu-west-1. The S3 bucket and DynamoDB table are located in us-east-1. The firm wants to protect its data and maintain access from either Region. Which of the following options satisfies these criteria? A. Create a DynamoDB global table to replicate data between us-east-1 and eu-west-1. Enable continuous backup on the DynamoDB table in us-east-1. Enable versioning on the S3 bucket. B. Create an AWS Lambda function triggered by Amazon CloudWatch Events to make regular backups of the DynamoDB table. Set up S3 cross-region replication from us-east-1 to eu-west-1. Set up MFA delete on the S3 bucket in us-east-1. C. Create a DynamoDB global table to replicate data between us-east-1 and eu-west-1. Enable versioning on the S3 bucket. Implement strict ACLs on the S3 bucket. D. Create a DynamoDB global table to replicate data between us-east-1 and eu-west-1. Enable continuous backup on the DynamoDB table in us-east-1. Set up S3 cross-region replication from us-east-1 to eu-west-1.

D

A software as a service (SaaS) provider offers a cloud-based document management system to private legal firms and the public sector. A local government customer recently issued a directive prohibiting the storage of highly confidential data outside the country. The company's CIO has engaged a solutions architect to verify that the application can adapt to this new requirement. The CIO also wants to establish an appropriate backup strategy for these documents, since they are not currently backed up. Which solution satisfies these criteria? A. Tag documents that are not highly confidential as regular in Amazon S3. Create individual S3 buckets for each user. Upload objects to each user's bucket. Set S3 bucket replication from these buckets to a central S3 bucket in a different AWS account and AWS Region. Configure an AWS Lambda function triggered by scheduled events in Amazon CloudWatch to delete objects that are tagged as secret in the S3 backup bucket. B. Tag documents as either regular or secret in Amazon S3. Create an individual S3 backup bucket in the same AWS account and AWS Region. Create a cross- region S3 bucket in a separate AWS account. Set proper IAM roles to allow cross-region permissions to the S3 buckets. Configure an AWS Lambda function triggered by Amazon CloudWatch scheduled events to copy objects that are tagged as secret to the S3 backup bucket and objects tagged as normal to the cross-region S3 bucket. C. Tag documents as either regular or secret in Amazon S3. Create an individual S3 backup bucket in the same AWS account and AWS Region. Use S3 selective cross-region replication based on object tags to move regular documents to an S3 bucket in a different AWS Region. Configure an AWS Lambda function that triggers when new S3 objects are created in the main bucket to replicate only documents tagged as secret into the S3 bucket in the same AWS Region. D. Tag highly confidential documents as secret in Amazon S3. Create an individual S3 backup bucket in the same AWS account and AWS Region. Use S3 selective cross-region replication based on object tags to move regular documents to a different AWS Region. Create an Amazon CloudWatch Events rule for new S3 objects tagged as secret to trigger an AWS Lambda function to replicate them into a separate bucket in the same AWS Region.

D

A solutions architect is using an AWS CloudFormation template to develop infrastructure as code for a two-tier web application. The web frontend will be deployed as an Auto Scaling group on Amazon EC2 instances. The backend database will be an Amazon RDS for MySQL DB instance. The database password will be rotated every 60 days. How can the solutions architect handle the application's database credentials most securely? A. Provide the database password as a parameter in the CloudFormation template. Create an initialization script in the Auto Scaling group's launch configuration UserData property to reference the password parameter using the Ref intrinsic function. Store the password on the EC2 instances. Reference the parameter for the value of the MasterUserPassword property in the AWS::RDS::DBInstance resource using the Ref intrinsic function. B. Create a new AWS Secrets Manager secret resource in the CloudFormation template to be used as the database password. Configure the application to retrieve the password from Secrets Manager when needed. Reference the secret resource for the value of the MasterUserPassword property in the AWS::RDS::DBInstance resource using a dynamic reference. C. Create a new AWS Secrets Manager secret resource in the CloudFormation template to be used as the database password. Create an initialization script in the Auto Scaling group's launch configuration UserData property to reference the secret resource using the Ref intrinsic function. Reference the secret resource for the value of the MasterUserPassword property in the AWS::RDS::DBInstance resource using the Ref intrinsic function. D. Create a new AWS Systems Manager Parameter Store parameter in the CloudFormation template to be used as the database password. Create an initialization script in the Auto Scaling group's launch configuration UserData property to reference the parameter. Reference the parameter for the value of the MasterUserPassword property in the AWS::RDS::DBInstance resource using the Fn::GetAtt intrinsic function.

D
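For context, option B describes CloudFormation's dynamic reference syntax for Secrets Manager, which keeps the password out of the template entirely; a minimal JSON template fragment with placeholder names:

"DBSecret": {
  "Type": "AWS::SecretsManager::Secret",
  "Properties": {
    "GenerateSecretString": {
      "SecretStringTemplate": "{\"username\": \"admin\"}",
      "GenerateStringKey": "password",
      "ExcludeCharacters": "\"@/\\"
    }
  }
},
"Database": {
  "Type": "AWS::RDS::DBInstance",
  "Properties": {
    "Engine": "mysql",
    "DBInstanceClass": "db.t3.micro",
    "AllocatedStorage": "20",
    "MasterUsername": "admin",
    "MasterUserPassword": {"Fn::Sub": "{{resolve:secretsmanager:${DBSecret}:SecretString:password}}"}
  }
}

The secret is generated and resolved at deployment time, so nothing sensitive lands in the template, the parameters, or the instances.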

A user attempts to create a PIOPS EBS volume with 4,000 IOPS and a size of 100 GB. AWS does not allow the user to create this volume. What might be the underlying reason for this? A. PIOPS is supported for EBS volumes larger than 500 GB B. The maximum IOPS supported by EBS is 3000 C. The ratio between IOPS and the EBS volume is higher than 30 D. The ratio between IOPS and the EBS volume is lower than 50

D

During a company's multi-year data center migration from several proprietary data centers to AWS, a hybrid network architecture must be implemented. Currently, the data centers are connected by private fiber. NAT cannot be used due to the nature of legacy applications. Many applications will need connectivity to other applications in both the data centers and AWS throughout the migration period. Which option provides a secure and highly available hybrid network architecture that enables high bandwidth and multi-region deployment post-migration? A. Use AWS Direct Connect to each data center from different ISPs, and configure routing to fail over to the other data center's Direct Connect if one fails. Ensure that no VPC CIDR blocks overlap one another or the on-premises network. B. Use multiple hardware VPN connections to AWS from the on-premises data center. Route different subnet traffic through different VPN connections. Ensure that no VPC CIDR blocks overlap one another or the on-premises network. C. Use a software VPN with clustering both in AWS and the on-premises data center, and route traffic through the cluster. Ensure that no VPC CIDR blocks overlap one another or the on-premises network. D. Use AWS Direct Connect and a VPN as backup, and configure both to use the same virtual private gateway and BGP. Ensure that no VPC CIDR blocks overlap one another or the on-premises network.

D

In an on-premises data center, a company runs 103 line-of-business apps on virtual machines. Many of the programs are simple PHP, Java, or Ruby web applications that are no longer maintained and get minimal traffic. Which migration strategy should be utilized to transfer these apps to AWS with the LEAST expensive infrastructure? A. Deploy the applications to single-instance AWS Elastic Beanstalk environments without a load balancer. B. Use AWS SMS to create AMIs for each virtual machine and run them in Amazon EC2. C. Convert each application to a Docker image and deploy to a small Amazon ECS cluster behind an Application Load Balancer. D. Use VM Import/Export to create AMIs for each virtual machine and run them in single-instance AWS Elastic Beanstalk environments by configuring a custom image.

D

A business is operating a large application on premises. Microsoft .NET serves as the web server platform, and Apache Cassandra serves as the database. The organization wants to migrate this application to AWS to increase the reliability of the service. The IT staff also wants to reduce the time spent on capacity management and infrastructure maintenance. The development team is willing and able to make code modifications to support the migration. Which design will be the EASIEST to manage after the migration? A. Migrate the web servers to Amazon EC2 instances in an Auto Scaling group that is running .NET. Migrate the existing Cassandra database to Amazon Aurora with multiple read replicas, and run both in a Multi-AZ mode. B. Migrate the web servers to an AWS Elastic Beanstalk environment that is running the .NET platform in a Multi-AZ Auto Scaling configuration. Migrate the Cassandra database to Amazon EC2 instances that are running in a Multi-AZ configuration. C. Migrate the web servers to an AWS Elastic Beanstalk environment that is running the .NET platform in a Multi-AZ Auto Scaling configuration. Migrate the existing Cassandra database to Amazon DynamoDB. D. Migrate the web servers to Amazon EC2 instances in an Auto Scaling group that is running .NET. Migrate the existing Cassandra database to Amazon DynamoDB.

D

A user has constructed a virtual private network (VPC) using the CIDR 20.0.0.0/16. By accident, the user built a subnet with CIDR 20.0.0.0/16. The user is attempting to establish a new subnet inside the CIDR 20.0.1.0/24 range. How does the user go about creating a second subnet? A. The user can modify the first subnet CIDR with AWS CLI B. The user can modify the first subnet CIDR from the console C. There is no need to update the subnet as VPC automatically adjusts the CIDR of the first subnet based on the second subnet's CIDR D. It is not possible to create a second subnet with overlapping IP CIDR without deleting the first subnet.

D A Virtual Private Cloud (VPC) is a virtual network dedicated to the user's AWS account. A user can create a subnet within the VPC and launch instances inside the subnet. The user can create a subnet the same size as the VPC; however, he cannot then create any other subnet, since the CIDR of the second subnet would conflict with the first. The user cannot modify the CIDR of a subnet once it is created. Thus, in this case, if required, the user has to delete the first subnet and create new subnets. Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html
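A short boto3 sketch of the working layout, carving two non-overlapping /24 subnets out of the /16 (the CIDRs mirror the question):

import boto3

ec2 = boto3.client("ec2")

vpc_id = ec2.create_vpc(CidrBlock="20.0.0.0/16")["Vpc"]["VpcId"]

# A 20.0.0.0/16 subnet would consume the entire VPC range and block any further
# subnets; smaller, non-overlapping blocks leave room for more.
ec2.create_subnet(VpcId=vpc_id, CidrBlock="20.0.0.0/24")
ec2.create_subnet(VpcId=vpc_id, CidrBlock="20.0.1.0/24")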

You're experimenting with creating stacks in CloudFormation using JSON templates in order to understand them better. You've set up approximately five or six stacks and are now wondering whether you're being charged for them. What is AWS's policy on charging for stack resources? A. You are not charged for the stack resources if they are not taking any traffic. B. You are charged for the stack resources for the time they were operating (but not if you deleted the stack within 30 minutes) C. You are charged for the stack resources for the time they were operating (but not if you deleted the stack within 60 minutes) D. You are charged for the stack resources for the time they were operating (even if you deleted the stack right away)

D A stack is a collection of AWS resources that you can manage as a single unit. In other words, you can create, update, or delete a collection of resources by creating, updating, or deleting stacks. All the resources in a stack are defined by the stack's AWS CloudFormation template. A stack, for instance, can include all the resources required to run a web application, such as a web server, a database, and networking rules. If you no longer require that web application, you can simply delete the stack, and all of its related resources are deleted. You are charged for the stack resources for the time they were operating (even if you deleted the stack right away). Reference: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacks.html

Is it possible to connect a Direct Connect connection straight to the Internet? A. Yes, this can be done if you pay for it. B. Yes, this can be done only for certain regions. C. Yes D. No

D AWS Direct Connect is a network service that provides an alternative to using the Internet to utilize AWS cloud service. Hence, a Direct Connect link cannot be connected to the Internet directly. Reference: http://aws.amazon.com/directconnect/faqs/

Which system is used during the boot process by Amazon Machine Images that use paravirtual (PV) virtualization? A. PV-BOOT B. PV-AMI C. PV-WORM D. PV-GRUB

D Amazon Machine Images that use paravirtual (PV) virtualization use a system called PV-GRUB during the boot process. PV-GRUB is a paravirtual boot loader that runs a patched version of GNU GRUB 0.97. When you start an instance, PV-GRUB starts the boot process and then chain loads the kernel specified by your image's menu.lst file. Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/UserProvidedKernels.html

Which of the following statements concerning Auto Scaling is NOT true? A. Auto Scaling can launch instances in different AZs. B. Auto Scaling can work with CloudWatch. C. Auto Scaling can launch an instance at a specific time. D. Auto Scaling can launch instances in different regions.

D Auto Scaling provides an option to scale up and scale down based on certain conditions or triggers from CloudWatch. A user can configure Auto Scaling to launch instances across AZs, but it cannot span regions. Reference: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/as-dg.pdf
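A brief boto3 sketch: the group lists Availability Zones, all of which must belong to the client's single region (names are placeholders, and the launch configuration is assumed to already exist):

import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="multi-az-asg",
    LaunchConfigurationName="my-launch-config",
    MinSize=2,
    MaxSize=6,
    # Multiple AZs are fine; they all sit inside the region the client targets,
    # so one group can never span regions.
    AvailabilityZones=["us-east-1a", "us-east-1b"],
)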

A user is setting PIOPS for MySQL RDS. What should be the minimum amount of database storage that the user should provide? A. 1 TB B. 50 GB C. 5 GB D. 100 GB

D If the user is trying to enable PIOPS with MySQL RDS, the minimum size of storage should be 100 GB. Reference: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PIOPS.html
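A boto3 sketch tying the PIOPS settings together (identifiers and the password are placeholders; 4,000 IOPS on the 100 GB minimum):

import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="mydb",
    Engine="mysql",
    DBInstanceClass="db.m5.large",
    MasterUsername="admin",
    MasterUserPassword="change-me",  # placeholder; prefer Secrets Manager in practice
    StorageType="io1",               # Provisioned IOPS storage
    Iops=4000,
    AllocatedStorage=100,            # GiB; the PIOPS minimum for MySQL per the answer above
)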

Which of the following commands, when used in conjunction with the Amazon ElastiCache CLI, allows you to examine all ElastiCache instance events over the last 24 hours? A. elasticache-events --duration 24 B. elasticache-events --duration 1440 C. elasticache-describe-events --duration 24 D. elasticache describe-events --source-type cache-cluster --duration 1440

D In Amazon ElastiCache, the command "aws elasticache describe-events --source-type cache-cluster --duration 1440" is used to list the cache-cluster events for the past 24 hours (1440 minutes). Reference: http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/ECEvents.Viewing.html

You're developing a cross-platform web application on Amazon Web Services. The application will be hosted on Amazon EC2 instances and accessed from PCs, tablet computers, and smartphones. Supported access platforms are Windows, macOS, iOS, and Android. Different platform types need distinct sticky session and SSL certificate configurations. Which of the following describes the most cost-effective and high-performance architecture? A. Set up a hybrid architecture to handle session state and SSL certificates on premises, with separate EC2 instance groups running the web applications for the different platform types in a VPC. B. Set up one ELB for all platforms to distribute load among multiple instances under it. Each EC2 instance implements all functionality for a particular platform. C. Set up two ELBs. The first ELB handles SSL certificates for all platforms, and the second ELB handles session stickiness for all platforms. For each ELB, run separate EC2 instance groups to handle the web application for each platform. D. Assign multiple ELBs to an EC2 instance or group of EC2 instances running the common components of the web application, one ELB for each platform type. Session stickiness and SSL termination are done at the ELBs.

D One ELB cannot handle different SSL certificates, but since we are using sticky sessions, they must be handled at the ELB level. SSL could be handled on the EC2 instances only with a TCP-configured ELB, and ELB supports sticky sessions only in HTTP/HTTPS configurations. The Elastic Load Balancer implements session stickiness on an HTTP/HTTPS listener by utilizing an HTTP cookie. If SSL traffic is not terminated on the Elastic Load Balancer and is instead terminated on the back-end instance, the Elastic Load Balancer has no visibility into the HTTP headers and therefore cannot set or read any of the HTTP headers being passed back and forth. Reference: http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-sticky-sessions.html

To track the call activity of an IVR (Interactive Voice Response) system, you require permanent and durable storage. The average length of a call is 2-3 minutes. Each tracked call may be active or terminated. Each minute, an external application must be informed of the list of currently active calls. Normally, there are a few calls each second, but once a month there is a brief period of up to 1,000 calls per second. Because the system is available 24 hours a day, any downtime should be avoided. Historical data is periodically archived to files. Cost efficiency is a priority for this project. Which database solution would be the most cost-effective in this scenario? A. Use DynamoDB with a "Calls" table and a Global Secondary Index on a "State" attribute that can equal to "active" or "terminated". In this way the Global Secondary Index can be used for all items in the table. B. Use RDS Multi-AZ with a "CALLS" table and an indexed "STATE" field that can be equal to "ACTIVE" or "TERMINATED". In this way the SQL query is optimized by the use of the Index. C. Use RDS Multi-AZ with two tables, one for "ACTIVE_CALLS" and one for "TERMINATED_CALLS". In this way the "ACTIVE_CALLS" table is always small and effective to access. D. Use DynamoDB with a "Calls" table and a Global Secondary Index on an "IsActive" attribute that is present for active calls only. In this way the Global Secondary Index is sparse and more effective.

D Q: Can a global secondary index key be defined on non-unique attributes? Yes. Unlike the primary key on a table, a GSI does not require the indexed attributes to be unique. Q: Are GSI key attributes required in all items of a DynamoDB table? No. GSIs are sparse indexes. Unlike the requirement of having a primary key, an item in a DynamoDB table does not have to contain any of the GSI keys. If a GSI key has both hash and range elements, and a table item omits either of them, then that item will not be indexed by the corresponding GSI. In such cases, a GSI can be very useful in efficiently locating items that have an uncommon attribute. Reference: https://aws.amazon.com/dynamodb/faqs/
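A minimal boto3 sketch of querying such a sparse index; the table name "Calls", index name "IsActive-index", and the "CallId" attribute are hypothetical:

import boto3

dynamodb = boto3.client("dynamodb")

# Only items that carry the "IsActive" attribute appear in the sparse GSI,
# so this query touches nothing but the currently active calls.
response = dynamodb.query(
    TableName="Calls",
    IndexName="IsActive-index",
    KeyConditionExpression="IsActive = :active",
    ExpressionAttributeValues={":active": {"S": "true"}},
)
for item in response["Items"]:
    print(item["CallId"]["S"])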

AWS Organizations enables a business to centrally manage hundreds of AWS accounts. Recently, the firm began allowing product teams to build and administer their own S3 access points under their own accounts. The S3 access points are only accessible inside VPCs; they are not accessible through the Internet. What is the MOST effective method of enforcing this requirement? A. Set the S3 access point resource policy to deny the s3:CreateAccessPoint action unless the s3:AccessPointNetworkOrigin condition key evaluates to VPC. B. Create an SCP at the root level in the organization to deny the s3:CreateAccessPoint action unless the s3:AccessPointNetworkOrigin condition key evaluates to VPC. C. Use AWS CloudFormation StackSets to create a new IAM policy in each AWS account that allows the s3:CreateAccessPoint action only if the s3:AccessPointNetworkOrigin condition key evaluates to VPC. D. Set the S3 bucket policy to deny the s3:CreateAccessPoint action unless the s3:AccessPointNetworkOrigin condition key evaluates to VPC.

D Reference: https://aws.amazon.com/blogs/storage/managing-amazon-s3-access-with-vpc-endpoints-and-s3-access-points/
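Whichever policy type carries it, the enforcement hinges on the s3:AccessPointNetworkOrigin condition key. A sketch of the deny statement, built here as a Python dict for illustration; principal and resource scoping would depend on where the policy is attached:

import json

# Deny creation of any access point whose network origin is not "VPC".
deny_non_vpc_access_points = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyNonVpcAccessPoints",
        "Effect": "Deny",
        "Action": "s3:CreateAccessPoint",
        "Resource": "*",
        "Condition": {
            "StringNotEquals": {"s3:AccessPointNetworkOrigin": "VPC"}
        },
    }],
}
print(json.dumps(deny_non_vpc_access_points, indent=2))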

A business wants to migrate a web application to AWS. Because the application stores session information locally on each web server, auto scaling will be problematic. As part of the migration, the application will be redesigned to decouple session data from the web servers. The business requires low latency, scalability, and availability. Which service will satisfy these needs for the most cost-effective storage of session information? A. Amazon ElastiCache with the Memcached engine B. Amazon S3 C. Amazon RDS MySQL D. Amazon ElastiCache with the Redis engine

D Reference: https://aws.amazon.com/caching/session-management/ https://aws.amazon.com/elasticache/redis-vs-memcached/
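A sketch of externalized session state with the Redis engine, using the redis-py client; the ElastiCache endpoint and session payload are placeholders:

import json
import redis  # redis-py client: pip install redis

# Hypothetical ElastiCache for Redis endpoint.
r = redis.Redis(host="my-sessions.abc123.0001.use1.cache.amazonaws.com", port=6379)

def save_session(session_id, data, ttl_seconds=1800):
    # SETEX writes the value and its time-to-live atomically,
    # so abandoned sessions expire on their own.
    r.setex("session:" + session_id, ttl_seconds, json.dumps(data))

def load_session(session_id):
    raw = r.get("session:" + session_id)
    return json.loads(raw) if raw else None

save_session("abc123", {"user_id": 42, "cart": ["sku-1"]})

Because every web server reads and writes the same Redis endpoint, any instance can serve any request, which is what makes auto scaling safe.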

A business uses an on-demand Amazon EC2 C5 compute-optimized instance to run a memory-intensive analytics application. The application is used regularly, and demand for the application doubles during working hours. Currently, the application scales based on CPU utilization. When scaling in happens, a lifecycle hook is used because the instance must clean up the application state for four minutes before terminating. Because users reported poor performance during working hours, scheduled scaling actions were created to add extra instances during working hours. The Solutions Architect has been tasked with lowering the application's cost. Which option is the MOST cost-effective? A. Use the existing launch configuration that uses C5 instances, and update the application AMI to include the Amazon CloudWatch agent. Change the Auto Scaling policies to scale based on memory utilization. Use Reserved Instances for the number of instances required after working hours, and use Spot Instances to cover the increased demand during working hours. B. Update the existing launch configuration to use R5 instances, and update the application AMI to include SSM Agent. Change the Auto Scaling policies to scale based on memory utilization. Use Reserved Instances for the number of instances required after working hours, and use Spot Instances with On-Demand Instances to cover the increased demand during working hours. C. Use the existing launch configuration that uses C5 instances, and update the application AMI to include SSM Agent. Leave the Auto Scaling policies to scale based on CPU utilization. Use scheduled Reserved Instances for the number of instances required after working hours, and use Spot Instances to cover the increased demand during working hours. D. Create a new launch configuration using R5 instances, and update the application AMI to include the Amazon CloudWatch agent. Change the Auto Scaling policies to scale based on memory utilization. Use Reserved Instances for the number of instances required after working hours, and use Standard Reserved Instances with On-Demand Instances to cover the increased demand during working hours.

D Reference: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring_ec2.html

A bank is developing an online customer care portal that will enable clients to communicate with customer service representatives. In the event of a regional calamity, the portal is obliged to maintain a 15-minute RPO or RTO. Banking rules mandate that all customer service chat transcripts be retained for at least seven years on durable storage, that chat dialogues be encrypted in-flight, and that transcripts be encrypted at rest. Data at rest must be encrypted using a key that the Data Loss Prevention team owns, rotates, and revokes. Which design satisfies these criteria? A. The chat application logs each chat message into Amazon CloudWatch Logs. A scheduled AWS Lambda function invokes a CloudWatch Logs CreateExportTask every 5 minutes to export chat transcripts to Amazon S3. The S3 bucket is configured for cross-region replication to the backup region. Separate AWS KMS keys are specified for the CloudWatch Logs group and the S3 bucket. B. The chat application logs each chat message into two different Amazon CloudWatch Logs groups in two different regions, with the same AWS KMS key applied. Both CloudWatch Logs groups are configured to export logs into an Amazon Glacier vault with a 7-year vault lock policy with a KMS key specified. C. The chat application logs each chat message into Amazon CloudWatch Logs. A subscription filter on the CloudWatch Logs group feeds into an Amazon Kinesis Data Firehose which streams the chat messages into an Amazon S3 bucket in the backup region. Separate AWS KMS keys are specified for the CloudWatch Logs group and the Kinesis Data Firehose. D. The chat application logs each chat message into Amazon CloudWatch Logs. The CloudWatch Logs group is configured to export logs into an Amazon Glacier vault with a 7-year vault lock policy. Glacier cross-region replication mirrors chat archives to the backup region. Separate AWS KMS keys are specified for the CloudWatch Logs group and the Amazon Glacier vault.

D Reference: https://docs.aws.amazon.com/AmazonS3/latest/dev/replication.html

A Solutions Architect wishes to restrict access to a new Amazon API Gateway endpoint to AWS users or roles with the appropriate permissions. The Solutions Architect needs a complete picture of each request in order to assess its latency and build service maps. How can the Solutions Architect implement the API Gateway's access control and request inspection requirements? A. For the API Gateway method, set the authorization to AWS_IAM. Then, give the IAM user or role execute-api:Invoke permission on the REST API resource. Enable the API caller to sign requests with AWS Signature when accessing the endpoint. Use AWS X-Ray to trace and analyze user requests to API Gateway. B. For the API Gateway resource, set CORS to enabled and only return the company's domain in Access-Control-Allow-Origin headers. Then, give the IAM user or role execute-api:Invoke permission on the REST API resource. Use Amazon CloudWatch to trace and analyze user requests to API Gateway. C. Create an AWS Lambda function as the custom authorizer, ask the API client to pass the key and secret when making the call, and then use Lambda to validate the key/secret pair against the IAM system. Use AWS X-Ray to trace and analyze user requests to API Gateway. D. Create a client certificate for API Gateway. Distribute the certificate to the AWS users and roles that need to access the endpoint. Enable the API caller to pass the client certificate when accessing the endpoint. Use Amazon CloudWatch to trace and analyze user requests to API Gateway.

D Reference: https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-cors.html

What is the maximum length of a certificate ID in AWS IAM? A. 1024 characters B. 512 characters C. 64 characters D. 128 characters

D The maximum length for a certificate ID is 128 characters. Reference: http://docs.aws.amazon.com/IAM/latest/UserGuide/LimitationsOnEntities.html

A user used the wizard to build a VPC using the CIDR 20.0.0.0/16. The user has created a public subnet (CIDR 20.0.0.0/24) and a VPN-only subnet (CIDR 20.0.1.0/24) in order to connect to the user's data center through the VPN gateway (vgw-123456). CIDR 172.28.0.0/12 is assigned to the user's data center. Additionally, the user has configured a NAT instance (i-123456) to enable traffic from the VPN subnet to the internet. Which of the following options is not a valid entry in this scenario's main route table? A. Destination: 20.0.0.0/16 and Target: local B. Destination: 0.0.0.0/0 and Target: i-123456 C. Destination: 172.28.0.0/12 and Target: vgw-123456 D. Destination: 20.0.1.0/24 and Target: i-123456

D The user can create subnets as required within a VPC. If the user wants to connect the VPC to his own data center, he can set up a public and a VPN-only subnet that uses hardware VPN access to connect with the data center. When the user configures this setup with the wizard, it creates a virtual private gateway to route all traffic of the VPN subnet. If the user has set up a NAT instance to route all the internet requests, then all requests to the internet should be routed to it, and all requests to the organization's data center will be routed to the VPN gateway. Here are the valid entries for the main route table in this scenario: Destination: 0.0.0.0/0 & Target: i-123456 (to route all internet traffic to the NAT instance); Destination: 172.28.0.0/12 & Target: vgw-123456 (to route all the organization's data center traffic to the VPN gateway); Destination: 20.0.0.0/16 & Target: local (to allow local routing in the VPC). Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario3.html
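The valid entries map directly onto EC2 CreateRoute calls. A boto3 sketch using the scenario's placeholder IDs; the route table ID is hypothetical:

import boto3

ec2 = boto3.client("ec2")
route_table_id = "rtb-example"  # placeholder for the VPC's main route table

# Internet-bound traffic goes to the NAT instance. The local 20.0.0.0/16
# route is created automatically with the VPC and cannot be added manually.
ec2.create_route(RouteTableId=route_table_id,
                 DestinationCidrBlock="0.0.0.0/0",
                 InstanceId="i-123456")

# Data-center traffic goes to the virtual private gateway.
ec2.create_route(RouteTableId=route_table_id,
                 DestinationCidrBlock="172.28.0.0/12",
                 GatewayId="vgw-123456")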

You're creating a smartphone application for picture sharing. All images will be stored in a single Amazon S3 bucket. Users will be able to post images immediately from their mobile device to Amazon S3, as well as view and download their own images straight from Amazon S3. You want to set security in such a way that it can manage possibly millions of users securely. When a new user registers on your photo-sharing mobile application, what should your server-side program do? A. Create an IAM user. Update the bucket policy with appropriate permissions for the IAM user. Generate an access key and secret key for the IAM user, store them in the mobile app and use these credentials to access Amazon S3. B. Create an IAM user. Assign appropriate permissions to the IAM user. Generate an access key and secret key for the IAM user, store them in the mobile app and use these credentials to access Amazon S3. C. Create a set of long-term credentials using AWS Security Token Service with appropriate permissions. Store these credentials in the mobile app and use them to access Amazon S3. D. Record the user's information in Amazon RDS and create a role in IAM with appropriate permissions. When the user uses their mobile app, create temporary credentials using the AWS Security Token Service "AssumeRole" function. Store these credentials in the mobile app's memory and use them to access Amazon S3. Generate new credentials the next time the user runs the mobile app. E. Record the user's information in Amazon DynamoDB. When the user uses their mobile app, create temporary credentials using AWS Security Token Service with appropriate permissions. Store these credentials in the mobile app's memory and use them to access Amazon S3. Generate new credentials the next time the user runs the mobile app.

D We can use either RDS or DynamoDB; however, among the given answers, an IAM role is mentioned only with RDS, so answer D is correct. The question was explicitly focused on security, so IAM roles with RDS is the best choice.
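A minimal boto3 sketch of the AssumeRole flow described in option D; the role ARN, session name, bucket, and prefix are placeholders:

import boto3

sts = boto3.client("sts")

# Exchange the app's server-side identity for short-lived credentials
# scoped to a single role; nothing long-term ships to the device.
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/MobileAppS3Role",
    RoleSessionName="user-42",
    DurationSeconds=3600,
)["Credentials"]

# Use the temporary credentials for S3; they expire automatically.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
s3.list_objects_v2(Bucket="photo-share-bucket", Prefix="users/42/")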

Can multiple load balancers be configured under a single Auto Scaling group? A. No B. Yes, but only if it is configured with Amazon Redshift. C. Yes, provided the ELB is configured with Amazon AppStream. D. Yes

D Yes, you can configure more than one load balancer with an Auto Scaling group. Auto Scaling integrates with Elastic Load Balancing to enable you to attach one or more load balancers to an existing Auto Scaling group. After you attach the load balancer, it automatically registers the instances in the group and distributes incoming traffic across the instances. Reference: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/AS_Concepts.html
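A boto3 sketch of attaching multiple load balancers to one group; the group name, ELB names, and target group ARN are placeholders:

import boto3

autoscaling = boto3.client("autoscaling")

# Attach two Classic Load Balancers to a single Auto Scaling group.
autoscaling.attach_load_balancers(
    AutoScalingGroupName="web-asg",
    LoadBalancerNames=["public-elb", "internal-elb"],
)

# For Application/Network Load Balancers, attach target groups instead.
autoscaling.attach_load_balancer_target_groups(
    AutoScalingGroupName="web-asg",
    TargetGroupARNs=[
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/abc123"
    ],
)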

On Amazon EC2 instances, a business is launching a new online application. Separate AWS accounts are used for development and production workloads. Only automated configuration tools are permitted to access the production account in accordance with the company's security rules. The security team of the organization wants to be notified immediately if any manual access to the production AWS account or EC2 instances happens. Which combination of activities in the production environment should a solutions architect perform to achieve these requirements? (Select three.) A. Turn on AWS CloudTrail logs in the application's primary AWS Region. Use Amazon Athena to query the logs for AwsConsoleSignin events. B. Configure Amazon Simple Email Service (Amazon SES) to send email to the security team when an alarm is activated. C. Deploy EC2 instances in an Auto Scaling group. Configure the launch template to deploy instances without key pairs. Configure Amazon CloudWatch Logs to capture system access logs. Create an Amazon CloudWatch alarm that is based on the logs to detect when a user logs in to an EC2 instance. D. Configure an Amazon Simple Notification Service (Amazon SNS) topic to send a message to the security team when an alarm is activated. E. Turn on AWS CloudTrail logs for all AWS Regions. Configure Amazon CloudWatch alarms to provide an alert when an AwsConsoleSignin event is detected. F. Deploy EC2 instances in an Auto Scaling group. Configure the launch template to delete the key pair after launch. Configure Amazon CloudWatch Logs for the system access logs. Create an Amazon CloudWatch dashboard to show user logins over time.

DEF

To authenticate users' access to the AWS environment, a solutions architect created a SAML 2.0 federated identity solution in conjunction with their company's on-premises identity provider (IdP). Access to the AWS environment is granted after the solutions architect verifies credentials using the federated identity web portal. However, when test users try to enter the AWS environment using the federated identity web portal, they are unable to do so. Which elements should the solutions architect verify to guarantee correct configuration of identity federation? (Select three.) A. The IAM user's permissions policy has allowed the use of SAML federation for that user. B. The trust policy of the IAM roles created for the federated users or federated groups has set the SAML provider as the principal. C. Test users are not in the AWSFederatedUsers group in the company's IdP. D. The web portal calls the AWS STS AssumeRoleWithSAML API with the ARN of the SAML provider, the ARN of the IAM role, and the SAML assertion from the IdP. E. The on-premises IdP's DNS hostname is reachable from the AWS environment VPCs. F. The company's IdP defines SAML assertions that properly map users or groups in the company to IAM roles with appropriate permissions.

DEF

Currently, a three-tier e-commerce web application is installed on-premises and will be transferred to AWS for increased scalability and flexibility. At the moment, the web server uses a network distributed file system to transfer read-only data. The application server layer makes use of an IP multicast-based clustering technique for discovery and shared session state. The database layer scales by using shared-storage clustering and many read slaves. Weekly off-site tape backups are performed on all servers and the distributed file system directory. Which AWS storage and database architecture best fulfills your application's requirements? A. Web servers: store read-only data in S3, and copy from S3 to root volume at boot time. App servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with multi-AZ deployment and one or more read replicas. Backup: web servers, app servers, and database backed up weekly to Glacier using snapshots. B. Web servers: store read-only data in an EC2 NFS server; mount to each web server at boot time. App servers: share state using a combination of DynamoDB and IP multicast. Database: use RDS with multi-AZ deployment and one or more Read Replicas. Backup: web and app servers backed up weekly via AMIs, database backed up via DB snapshots. C. Web servers: store read-only data in S3, and copy from S3 to root volume at boot time. App servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with multi-AZ deployment and one or more Read Replicas. Backup: web and app servers backed up weekly via AMIs, database backed up via DB snapshots. D. Web servers: store read-only data in S3, and copy from S3 to root volume at boot time. App servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with multi-AZ deployment. Backup: web and app servers backed up weekly via AMIs, database backed up via DB snapshots.

C Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. In case of an infrastructure failure (for example, instance hardware failure, storage failure, or network disruption), Amazon RDS performs an automatic failover to the standby, so that you can resume database operations as soon as the failover is complete. Since the endpoint for your DB Instance remains the same after a failover, your application can resume database operation without the need for manual administrative intervention.
Enhanced Durability: Multi-AZ deployments for the MySQL, Oracle, and PostgreSQL engines utilize synchronous physical replication to keep data on the standby up-to-date with the primary. Multi-AZ deployments for the SQL Server engine use synchronous logical replication to achieve the same result, employing SQL Server-native Mirroring technology. Both approaches safeguard your data in the event of a DB Instance failure or loss of an Availability Zone. If a storage volume on your primary fails in a Multi-AZ deployment, Amazon RDS automatically initiates a failover to the up-to-date standby. Compare this to a Single-AZ deployment: in case of a Single-AZ database failure, a user-initiated point-in-time restore operation will be required. This operation can take several hours to complete, and any data updates that occurred after the latest restorable time (typically within the last five minutes) will not be available. Amazon Aurora employs a highly durable, SSD-backed virtualized storage layer purpose-built for database workloads. Amazon Aurora automatically replicates your volume six ways, across three Availability Zones. Amazon Aurora storage is fault-tolerant, transparently handling the loss of up to two copies of data without affecting database write availability and up to three copies without affecting read availability. Amazon Aurora storage is also self-healing: data blocks and disks are continuously scanned for errors and replaced automatically.
Increased Availability: You also benefit from enhanced database availability when running Multi-AZ deployments. If an Availability Zone failure or DB Instance failure occurs, your availability impact is limited to the time automatic failover takes to complete: typically under one minute for Amazon Aurora and one to two minutes for other database engines (see the RDS FAQ for details). The availability benefits of Multi-AZ deployments also extend to planned maintenance and backups. In the case of system upgrades like OS patching or DB Instance scaling, these operations are applied first on the standby, prior to the automatic failover. As a result, your availability impact is, again, only the time required for automatic failover to complete. Unlike Single-AZ deployments, I/O activity is not suspended on your primary during backup for Multi-AZ deployments for the MySQL, Oracle, and PostgreSQL engines, because the backup is taken from the standby. However, note that you may still experience elevated latencies for a few minutes during backups for Multi-AZ deployments.
On instance failure in Amazon Aurora deployments, Amazon RDS uses RDS Multi-AZ technology to automate failover to one of up to 15 Amazon Aurora Replicas you have created in any of three Availability Zones. If no Amazon Aurora Replicas have been provisioned, in the case of a failure, Amazon RDS will attempt to create a new Amazon Aurora DB instance for you automatically.
No Administrative Intervention: DB Instance failover is fully automatic and requires no administrative intervention. Amazon RDS monitors the health of your primary and standbys, and initiates a failover automatically in response to a variety of failure conditions.
Failover conditions: Amazon RDS detects and automatically recovers from the most common failure scenarios for Multi-AZ deployments so that you can resume database operations as quickly as possible without administrative intervention. Amazon RDS automatically performs a failover in the event of any of the following: loss of availability in the primary Availability Zone; loss of network connectivity to the primary; compute unit failure on the primary; storage failure on the primary. Note: When operations such as DB Instance scaling or system upgrades like OS patching are initiated for Multi-AZ deployments, for enhanced availability, they are applied first on the standby prior to an automatic failover. As a result, your availability impact is limited only to the time required for automatic failover to complete. Note that Amazon RDS Multi-AZ deployments do not fail over automatically in response to database operations such as long-running queries, deadlocks, or database corruption errors.

A business was under pressure to transition its on-premises environment to AWS within a short period of time. It migrated Microsoft SQL Servers and Microsoft Windows Servers using the virtual machine import/export service and rebuilt other applications as cloud-native. The team used both Amazon EC2 and Amazon RDS to host databases. Each team within the organization was responsible for migrating its applications, and they established separate accounts to ensure resource isolation. The firm did not have much time to evaluate expenses, but now seeks recommendations on how to reduce its AWS spending. Which cost-cutting measures should a Solutions Architect take? A. Enable AWS Business Support and review AWS Trusted Advisor's cost checks. Create Amazon EC2 Auto Scaling groups for applications that experience fluctuating demand. Save AWS Simple Monthly Calculator reports in Amazon S3 for trend analysis. Create a master account under Organizations and have teams join for consolidated billing. B. Enable Cost Explorer and AWS Business Support. Reserve Amazon EC2 and Amazon RDS DB instances. Use Amazon CloudWatch and AWS Trusted Advisor for monitoring and to receive cost-savings suggestions. Create a master account under Organizations and have teams join for consolidated billing. C. Create an AWS Lambda function that changes the instance size based on Amazon CloudWatch alarms. Reserve instances based on AWS Simple Monthly Calculator suggestions. Have an AWS Well-Architected framework review and apply recommendations. Create a master account under Organizations and have teams join for consolidated billing. D. Create a budget and monitor for costs exceeding the budget. Create Amazon EC2 Auto Scaling groups for applications that experience fluctuating demand. Create an AWS Lambda function that changes instance sizes based on Amazon CloudWatch alarms. Have each team upload their bill to an Amazon S3 bucket for analysis of team spending. Use Spot Instances on nightly batch processing jobs.

A

What is the indication that an object has been successfully stored in Amazon S3? A. A HTTP 200 result code and MD5 checksum, taken together, indicate that the operation was successful. B. Amazon S3 is engineered for 99.999999999% durability. Therefore there is no need to confirm that data was inserted. C. A success code is inserted into the S3 object metadata. D. Each S3 account has a special bucket named _s3_logs. Success codes are written to this bucket with a timestamp and checksum.

A

Choose the correct set of options. These are the initial configuration settings of the default security group: A. Allow no inbound traffic, allow all outbound traffic, and allow instances associated with this security group to talk to each other B. Allow all inbound traffic, allow no outbound traffic, and allow instances associated with this security group to talk to each other C. Allow no inbound traffic, allow all outbound traffic, and do NOT allow instances associated with this security group to talk to each other D. Allow all inbound traffic, allow all outbound traffic, and do NOT allow instances associated with this security group to talk to each other

A A default security group is named default, and it has an ID assigned by AWS. The following are the initial settings for each default security group: allow inbound traffic only from other instances associated with the default security group; allow all outbound traffic from the instance. The default security group specifies itself as a source security group in its inbound rules. This is what allows instances associated with the default security group to communicate with other instances associated with the default security group. Reference: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security-groups.html#default-security-group

AWS Direct Connect links your internal network to an AWS Direct Connect location over which of the following cable types? A. Single-mode fiber-optic cable B. Multi-mode fiber-optic cable C. Shielded balanced copper cable D. Twisted-pair cable

A AWS Direct Connect links your internal network to an AWS Direct Connect location over a standard 1 gigabit or 10 gigabit Ethernet single mode fiber-optic cable. Reference: http://docs.aws.amazon.com/directconnect/latest/UserGuide/Welcome.html

You've deployed a sizable quantity of network gear on AWS and now need to monitor it all. You determine that CloudWatch is the best fit for your requirements, but you are unclear about CloudWatch's pricing structure and limits. Which of the following claims about CloudWatch's limits is TRUE? A. You get 10 CloudWatch metrics, 10 alarms, 1,000,000 API requests, and 1,000 Amazon SNS email notifications per customer per month for free. B. You get 100 CloudWatch metrics, 100 alarms, 10,000,000 API requests, and 10,000 Amazon SNS email notifications per customer per month for free. C. You get 10 CloudWatch metrics, 10 alarms, 1,000 API requests, and 100 Amazon SNS email notifications per customer per month for free. D. You get 100 CloudWatch metrics, 100 alarms, 1,000,000 API requests, and 1,000 Amazon SNS email notifications per customer per month for free.

A Amazon CloudWatch monitors your Amazon Web Services (AWS) resources and the applications you run on AWS in real-time. You can use CloudWatch to collect and track metrics, which are the variables you want to measure for your resources and applications. CloudWatch has the following limits: You get 10 CloudWatch metrics, 10 alarms, 1,000,000 API requests, and 1,000 Amazon SNS email notifications per customer per month for free. You can assign up to 10 dimensions per metric. You can create up to 5000 alarms per AWS account. Metric data is kept for 2 weeks. The size of a PutMetricData request is limited to 8KB for HTTP GET requests and 40KB for HTTP POST requests. You can include a maximum of 20 MetricDatum items in one PutMetricData request. A MetricDatum can contain a single value or a StatisticSet representing many values. Reference: http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/cloudwatch_limits.html
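A boto3 sketch of publishing a custom metric within the stated limits; the namespace, metric, and dimension values are placeholders:

import boto3

cloudwatch = boto3.client("cloudwatch")

# One PutMetricData call may carry at most 20 MetricDatum items,
# and the request payload is size-limited as described above.
cloudwatch.put_metric_data(
    Namespace="Custom/NetworkGear",
    MetricData=[{
        "MetricName": "ActiveConnections",
        "Dimensions": [{"Name": "DeviceId", "Value": "router-01"}],
        "Value": 128.0,
        "Unit": "Count",
    }],
)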

What is a quiet push notification in Amazon Cognito? A. It is a push message that is received by your application on a user's device that will not be seen by the user. B. It is a push message that is received by your application on a user's device that will return the user's geolocation. C. It is a push message that is received by your application on a user's device that will not be heard by the user. D. It is a push message that is received by your application on a user's device that will return the user's authentication credentials

A Amazon Cognito uses the Amazon Simple Notification Service (SNS) to send silent push notifications to devices. A silent push notification is a push message that is received by your application on a user's device that will not be seen by the user. Reference: http://aws.amazon.com/cognito/faqs

AWS is used by an enterprise to run a scalable online application. To scale the application, the business has implemented ELB and Auto Scaling. Which of the following statements is NOT required when the business plans to run the web application in a VPC? A. The ELB and all the instances should be in the same subnet. B. Configure the security group rules and network ACLs to allow traffic to be routed between the subnets in the VPC. C. The internet-facing ELB should have a route table associated with the internet gateway. D. The internet-facing ELB should be only in a public subnet.

A Amazon Virtual Private Cloud (Amazon VPC) allows the user to define a virtual networking environment in a private, isolated section of the Amazon Web Services (AWS) cloud. The user has complete control over the virtual networking environment. Within this virtual private cloud, the user can launch AWS resources, such as an ELB and EC2 instances. There are two ELBs available with VPC: internet-facing and internal (private) ELB. An internet-facing ELB is required to be in a public subnet. After the user creates the public subnet, he should associate the route table of the public subnet with the internet gateway to enable the load balancer in the subnet to connect with the internet. The ELB and instances can be in separate subnets. However, to allow communication between the instances and the ELB, the user must configure the security group rules and network ACLs to allow traffic to be routed between the subnets in his VPC. Reference: http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/CreateVPCForELB.html

Which of the following statements concerning delegating authorization to perform API calls is NOT accurate in the context of IAM roles for Amazon EC2? A. You cannot create an IAM role. B. You can have the application retrieve a set of temporary credentials and use them. C. You can specify the role when you launch your instances. D. You can define which accounts or AWS services can assume the role.

A Amazon designed IAM roles so that your applications can securely make API requests from your instances, without requiring you to manage the security credentials that the applications use. Instead of creating and distributing your AWS credentials, you can delegate permission to make API requests using IAM roles as follows: Create an IAM role. Define which accounts or AWS services can assume the role. Define which API actions and resources the application can use after assuming the role. Specify the role when you launch your instances. Have the application retrieve a set of temporary credentials and use them. Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
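On an instance with an attached role, the application retrieves those temporary credentials from the instance metadata service; SDKs do this automatically, but a stdlib-only sketch makes the mechanism visible (IMDSv1 shown for brevity; production code should use IMDSv2 session tokens or let the SDK handle it):

import json
import urllib.request

# Base path where the instance metadata service exposes the role's
# auto-rotated temporary credentials.
BASE = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

role_name = urllib.request.urlopen(BASE).read().decode()
creds = json.loads(urllib.request.urlopen(BASE + role_name).read())
print(creds["AccessKeyId"], creds["Expiration"])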

With the AWS Simple Notification Service, a user has attempted to enable detailed CloudWatch monitoring. Which of the following statements helps the user understand detailed monitoring? A. SNS cannot provide data every minute B. SNS will send data every minute after configuration C. There is no need to enable it, since SNS provides data every minute D. AWS CloudWatch does not support monitoring for SNS

A CloudWatch is used to monitor AWS as well as the custom services. It provides either basic or detailed monitoring for the supported AWS products. In basic monitoring, a service sends data points to CloudWatch every five minutes, while in detailed monitoring a service sends data points to CloudWatch every minute. The AWS SNS service sends data every 5 minutes. Thus, it supports only the basic monitoring. The user cannot enable detailed monitoring with SNS. Reference: http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/supported_services.html

A user is setting PIOPS for MySQL RDS. What should be the user's minimum provisioned PIOPS? A. 1000 B. 200 C. 2000 D. 500

A If a user is trying to enable PIOPS with MySQL RDS, the minimum size of storage should be 100 GB and the minimum PIOPS should be 1000. Reference: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PIOPS.html

Each resource declaration in an AWS CloudFormation template comprises the following: A. a logical ID, a resource type, and resource properties B. a variable resource name and resource attributes C. an IP address and resource entities D. a physical ID, a resource file, and resource data

A In AWS CloudFormation, each resource declaration includes three parts: a logical ID that is unique within the template, a resource type, and resource properties. Reference: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/concept-resources.html
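A minimal template sketch showing the three parts, expressed here as a Python dict for illustration; the logical ID and bucket name are illustrative:

import json

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "MyBucket": {                    # logical ID, unique within the template
            "Type": "AWS::S3::Bucket",   # resource type
            "Properties": {              # resource properties
                "BucketName": "example-unique-bucket-name"
            },
        }
    },
}
print(json.dumps(template, indent=2))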

How can you ensure that your AWS CloudFormation template is operationally sound? A. To check the operational validity, you need to attempt to create the stack. B. There is no way to check the operational validity of your AWS CloudFormation template. C. To check the operational validity, you need a sandbox or test area for AWS CloudFormation stacks. D. To check the operational validity, you need to use the aws cloudformation validate-template command.

A In AWS CloudFormation, to check the operational validity, you need to attempt to create the stack. There is no sandbox or test area for AWS CloudFormation stacks, so you are charged for the resources you create during testing. Reference: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-validate-template.html
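A boto3 sketch of the distinction: validate-template only checks syntax, while operational validity is proven by actually attempting stack creation. The stack name is a placeholder, and the created resources incur charges:

import json
import boto3

cloudformation = boto3.client("cloudformation")

template_body = json.dumps({
    "Resources": {"TestBucket": {"Type": "AWS::S3::Bucket"}}
})

# Syntax check only; this does not prove the stack will build.
cloudformation.validate_template(TemplateBody=template_body)

# The real test: create the stack (real resources, real charges).
cloudformation.create_stack(StackName="validation-test",
                            TemplateBody=template_body)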

"The data is ultimately consistent" in DynamoDB indicates that__________. A. a read request immediately after a write operation might not show the latest change. B. a read request immediately after a write operation shows the latest change. C. a write request immediately after a read operation might cause data loss. D. a read request immediately after a write operation might cause data loss.

A In DynamoDB, it takes time for the update to propagate to all copies. The data is eventually consistent, meaning that a read request immediately after a write operation might not show the latest change. Reference: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/APISummary.html
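A boto3 sketch contrasting the two read modes; table and key names are placeholders:

import boto3

dynamodb = boto3.client("dynamodb")

# Default read: eventually consistent, so it may miss a write
# that completed only moments ago.
dynamodb.get_item(TableName="Calls", Key={"CallId": {"S": "abc-123"}})

# Strongly consistent read: reflects all prior successful writes,
# at double the read-capacity cost.
dynamodb.get_item(
    TableName="Calls",
    Key={"CallId": {"S": "abc-123"}},
    ConsistentRead=True,
)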

Which status in AWS CloudFormation denotes a failure state? A. ROLLBACK_IN_PROGRESS B. DELETE_IN_PROGRESS C. UPDATE_COMPLETE_CLEANUP_IN_PROGRESS D. REVIEW_IN_PROGRESS

A ROLLBACK_IN_PROGRESS means an ongoing removal of one or more stacks after a failed stack creation or after an explicitly canceled stack creation. DELETE_IN_PROGRESS means an ongoing removal of one or more stacks. REVIEW_IN_PROGRESS means an ongoing creation of one or more stacks with an expected StackId but without any templates or resources. UPDATE_COMPLETE_CLEANUP_IN_PROGRESS means an ongoing removal of old resources for one or more stacks after a successful stack update. Reference: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-describing-stacks.html

A company intends to employ a NoSQL database to meet its scalable data storage requirements. The business wishes to securely host an application in an AWS VPC. What course of action should the company take? A. The organization should set up its own NoSQL cluster on AWS instances and configure route tables and subnets. B. The organization should only use DynamoDB because by default it is always a part of the default subnet provided by AWS. C. The organization should use DynamoDB while creating a table within the public subnet. D. The organization should use DynamoDB while creating a table within a private subnet.

A The Amazon Virtual Private Cloud (Amazon VPC) allows the user to define a virtual networking environment in a private, isolated section of the Amazon Web Services (AWS) cloud. The user has complete control over the virtual networking environment. Currently, VPC does not support DynamoDB. Thus, if the user wants to implement a VPC, he has to set up his own NoSQL DB within the VPC. Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Introduction.html

Amazon ElastiCache supports which of the following cache engines? A. Amazon ElastiCache supports Memcached and Redis. B. Amazon ElastiCache supports Redis and WinCache. C. Amazon ElastiCache supports Memcached and Hazelcast. D. Amazon ElastiCache supports Memcached only.

A The cache engines supported by Amazon ElastiCache are Memcached and Redis. Reference: http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/SelectEngine.html

By adding a second VPN connection, you may create redundant VPN connections and customer gateways on your network. Which of the following will guarantee that this works properly? A. The customer gateway IP address for the second VPN connection must be publicly accessible. B. The virtual gateway IP address for the second VPN connection must be publicly accessible. C. The customer gateway IP address for the second VPN connection must use dynamic routes. D. The customer gateway IP address for the second VPN connection must be privately accessible and be the same public IP address that you are using for the first VPN connection.

A To establish redundant VPN connections and customer gateways on your network, you would need to set up a second VPN connection. However, you must ensure that the customer gateway IP address for the second VPN connection is publicly accessible. Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_VPN.html

Which stack state in AWS CloudFormation rejects UpdateStack calls? A. UPDATE_ROLLBACK_FAILED B. UPDATE_ROLLBACK_COMPLETE C. UPDATE_COMPLETE D. CREATE_COMPLETE

A When a stack is in the UPDATE_ROLLBACK_FAILED state, you can continue rolling it back to return it to a working state (to UPDATE_ROLLBACK_COMPLETE). You cannot update a stack that is in the UPDATE_ROLLBACK_FAILED state. However, if you can continue to roll it back, you can return the stack to its original settings and try to update it again. Reference: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-continueupdaterollback.html
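A short boto3 sketch of the recovery path; the stack name is a placeholder:

import boto3

cloudformation = boto3.client("cloudformation")

# A stack stuck in UPDATE_ROLLBACK_FAILED rejects UpdateStack calls;
# resume the rollback to reach UPDATE_ROLLBACK_COMPLETE, then update again.
cloudformation.continue_update_rollback(StackName="my-stack")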

When creating an AMI or starting a new instance on Amazon Elastic Compute Cloud, you may define storage volumes in addition to the root device volume using______. A. block device mapping B. object mapping C. batch storage mapping D. datacenter mapping

A When creating an AMI or launching a new instance, you can assign more than one block storage device to it. These additional volumes are specified through a mechanism known as block device mapping. Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/block-device-mapping-concepts.html
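A boto3 sketch of a block device mapping at launch time; the AMI ID and device name are placeholders:

import boto3

ec2 = boto3.client("ec2")

# Launch an instance with an extra 100 GiB gp2 volume beyond the root device.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    BlockDeviceMappings=[{
        "DeviceName": "/dev/sdf",
        "Ebs": {"VolumeSize": 100, "VolumeType": "gp2",
                "DeleteOnTermination": True},
    }],
)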

Using the VPC wizard, a user created a VPC with the CIDR 20.0.0.0/16. To connect to the user's data center, the user has built public and VPN-only subnets, as well as hardware VPN access. The user has not yet launched any instances or modified or removed any configurations. He wants to delete this VPC from the console. Is it possible for the user to delete the VPC from the console? A. Yes, the user can detach the virtual private gateway and then use the VPC console to delete the VPC. B. No, since the NAT instance is running, the user cannot delete the VPC. C. Yes, the user can use the CLI to delete the VPC, which will detach the virtual private gateway automatically. D. No, the VPC console needs to be accessed using an administrator account to delete the VPC.

A You can delete your VPC at any time (for example, if you decide it's too small). However, you must terminate all instances in the VPC first. When you delete a VPC using the VPC console, Amazon deletes all its components, such as subnets, security groups, network ACLs, route tables, Internet gateways, VPC peering connections, and DHCP options. If you have a VPN connection, you don't have to delete it or the other components related to the VPN (such as the customer gateway and virtual private gateway). Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html#VPC_Deleting

When using AWS CloudHSM (Hardware Security Module), is it necessary for your application to reside in the same VPC as the CloudHSM instance? A. No, but the server or instance on which your application and the HSM client are running must have network (IP) reachability to the HSM. B. Yes, always C. No, but they must reside in the same Availability Zone. D. No, but it should reside in the same Availability Zone as the DB instance.

A Your application does not need to reside in the same VPC as the CloudHSM instance. However, the server or instance on which your application and the HSM client is running must have network (IP) reachability to the HSM. You can establish network connectivity in a variety of ways, including operating your application in the same VPC, with VPC peering, with a VPN connection, or with Direct Connect. Reference: https://aws.amazon.com/cloudhsm/faqs/

A business uses Amazon Simple Storage Service (Amazon S3) to store data. Data at rest must be encrypted according to the company's security policy. Which of the following strategies can accomplish this? (Select three.) A. Use Amazon S3 server-side encryption with AWS Key Management Service managed keys. B. Use Amazon S3 server-side encryption with customer-provided keys. C. Use Amazon S3 server-side encryption with an EC2 key pair. D. Use Amazon S3 bucket policies to restrict access to the data at rest. E. Encrypt the data on the client side before ingesting to Amazon S3 using their own master key. F. Use SSL to encrypt the data while in transit to Amazon S3.

ABE Reference: http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html
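A boto3 sketch of the two server-side options (A and B); the bucket, key, and the 256-bit SSE-C key are placeholders. Option E (client-side encryption) would encrypt the body before the upload ever leaves the client:

import boto3

s3 = boto3.client("s3")

# Option A: server-side encryption with an AWS KMS managed key.
s3.put_object(Bucket="secure-bucket", Key="report.csv", Body=b"...",
              ServerSideEncryption="aws:kms")

# Option B: server-side encryption with a customer-provided key (SSE-C);
# the key must be supplied again on every GET, and S3 never stores it.
s3.put_object(Bucket="secure-bucket", Key="report.csv", Body=b"...",
              SSECustomerAlgorithm="AES256",
              SSECustomerKey=b"0" * 32)  # placeholder 256-bit key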

A business has containerized its application services and deployed them on various Amazon EC2 instances with public IP addresses. An Apache Kafka cluster has been installed on the EC2 instances. A PostgreSQL database has been migrated to Amazon RDS for PostgreSQL. The firm anticipates a big rise in order volume on its platform after launching a new version of its main product. Which improvements to the present design will result in lower operating costs and better support for the product's release? A. Create an EC2 Auto Scaling group behind an Application Load Balancer. Create additional read replicas for the DB instance. Create Amazon Kinesis data streams and configure the application services to use the data streams. Store and serve static content directly from Amazon S3. B. Create an EC2 Auto Scaling group behind an Application Load Balancer. Deploy the DB instance in Multi-AZ mode and enable storage auto scaling. Create Amazon Kinesis data streams and configure the application services to use the data streams. Store and serve static content directly from Amazon S3. C. Deploy the application on a Kubernetes cluster created on the EC2 instances behind an Application Load Balancer. Deploy the DB instance in Multi-AZ mode and enable storage auto scaling. Create an Amazon Managed Streaming for Apache Kafka cluster and configure the application services to use the cluster. Store static content in Amazon S3 behind an Amazon CloudFront distribution. D. Deploy the application on Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate and enable auto scaling behind an Application Load Balancer. Create additional read replicas for the DB instance. Create an Amazon Managed Streaming for Apache Kafka cluster and configure the application services to use the cluster. Store static content in Amazon S3 behind an Amazon CloudFront distribution.

B

A business wishes to transition its on-premises data analytics infrastructure to AWS. Two simple Node.js apps comprise the environment. One of the apps gathers and stores sensor data in a MySQL database. The other program generates reports from the data. When the aggregation tasks are executed, some of the load jobs fail to execute properly. The company's data loading problem must be resolved. Additionally, the firm requires that the move take place without causing any disruptions or modifications to the company's clients. What actions should a solutions architect take to ensure that these criteria are met? A. Set up an Amazon Aurora MySQL database as a replication target for the on-premises database. Create an Aurora Replica for the Aurora MySQL database, and move the aggregation jobs to run against the Aurora Replica. Set up collection endpoints as AWS Lambda functions behind a Network Load Balancer (NLB), and use Amazon RDS Proxy to write to the Aurora MySQL database. When the databases are synced, disable the replication job and restart the Aurora Replica as the primary instance. Point the collector DNS record to the NLB. B. Set up an Amazon Aurora MySQL database. Use AWS Database Migration Service (AWS DMS) to perform continuous data replication from the on-premises database to Aurora. Move the aggregation jobs to run against the Aurora MySQL database. Set up collection endpoints behind an Application Load Balancer (ALB) as Amazon EC2 instances in an Auto Scaling group. When the databases are synced, point the collector DNS record to the ALB. Disable the AWS DMS sync task after the cutover from on premises to AWS. C. Set up an Amazon Aurora MySQL database. Use AWS Database Migration Service (AWS DMS) to perform continuous data replication from the on-premises database to Aurora. Create an Aurora Replica for the Aurora MySQL database, and move the aggregation jobs to run against the Aurora Replica. Set up collection endpoints as AWS Lambda functions behind an Application Load Balancer (ALB), and use Amazon RDS Proxy to write to the Aurora MySQL database. When the databases are synced, point the collector DNS record to the ALB. Disable the AWS DMS sync task after the cutover from on premises to AWS. D. Set up an Amazon Aurora MySQL database. Create an Aurora Replica for the Aurora MySQL database, and move the aggregation jobs to run against the Aurora Replica. Set up collection endpoints as an Amazon Kinesis data stream. Use Amazon Kinesis Data Firehose to replicate the data to the Aurora MySQL database. When the databases are synced, disable the replication job and restart the Aurora Replica as the primary instance. Point the collector DNS record to the Kinesis data stream.

B

Due to authorization problems on an Amazon S3 bucket, a corporation suffered a compromise of highly sensitive personal information. To further limit access, the Information Security team has tightened the bucket policy. Additionally, the following needs must be satisfied in order to be better prepared for future attacks: ✑ Determine which external IP addresses are attempting to access the bucket objects. ✑ Receive notifications when the bucket's security policy is modified. ✑ Automatically remediate policy changes. Which tactics should the Solutions Architect use? A. Use Amazon CloudWatch Logs with CloudWatch filters to identify remote IP addresses. Use CloudWatch Events rules with AWS Lambda to automatically remediate S3 bucket policy changes. Use Amazon SES with CloudWatch Events rules for alerts. B. Use Amazon Athena with S3 access logs to identify remote IP addresses. Use AWS Config rules with AWS Systems Manager Automation to automatically remediate S3 bucket policy changes. Use Amazon SNS with AWS Config rules for alerts. C. Use S3 access logs with Amazon Elasticsearch Service and Kibana to identify remote IP addresses. Use an Amazon Inspector assessment template to automatically remediate S3 bucket policy changes. Use Amazon SNS for alerts. D. Use Amazon Macie with an S3 bucket to identify access patterns and remote IP addresses. Use AWS Lambda with Macie to automatically remediate S3 bucket policy changes. Use Macie automatic alerting capabilities for alerts.

B

On-premises storage on a Windows file server is used by a business. Daily, the firm generates 5 GB of fresh data. The firm relocated a portion of its Windows-based workload to AWS and requires data to be accessible through a cloud file system. Between the on-premises network and AWS, the organization has already built an AWS Direct Connect link. Which data transfer approach should a business employ? A. Use the file gateway option in AWS Storage Gateway to replace the existing Windows file server, and point the existing file share to the new file gateway B. Use AWS DataSync to schedule a daily task to replicate data between the on-premises Windows file server and Amazon FSx C. Use AWS Data Pipeline to schedule a daily task to replicate data between the on-premises Windows file server and Amazon Elastic File System (Amazon EFS) D. Use AWS DataSync to schedule a daily task to replicate data between the on-premises Windows file server and Amazon Elastic File System (Amazon EFS)

B

What does AWS mean by elasticity? A. The ability to scale computing resources up easily, with minimal friction and down with latency. B. The ability to scale computing resources up and down easily, with minimal friction. C. The ability to provision cloud computing resources in expectation of future demand. D. The ability to recover from business continuity events with minimal friction.

B

In AWS CloudFormation, what is a circular dependency? A. When Nested Stacks depend on each other. B. When Resources form a DependsOn loop. C. When a Template references an earlier version of itself. D. When a Template references a region, which references the original Template.

B To resolve a dependency error, add a DependsOn attribute to resources that depend on other resources in your template. In some cases, you must explicitly declare dependencies so that AWS CloudFormation can create or delete resources in the correct order. For example, if you create an Elastic IP and a VPC with an Internet gateway in the same stack, the Elastic IP must depend on the Internet gateway attachment. For additional information, see the DependsOn attribute. Reference: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/troubleshooting.html#troubleshooting-errors-dependency-error

MapMySite is configuring an AWS virtual private cloud (VPC) for a web application. The firm has chosen to utilize an AWS RDS instance rather than its own database instance to meet its high availability and disaster recovery needs. Additionally, the business wishes to safeguard RDS access. How should an RDS-enabled web application be configured? A. Create a VPC with one public and one private subnet. Launch an application instance in the public subnet while RDS is launched in the private subnet. B. Setup a public and two private subnets in different AZs within a VPC and create a subnet group. Launch RDS with that subnet group. C. Create a network interface and attach two subnets to it. Attach that network interface with RDS while launching a DB instance. D. Create two separate VPCs and launch a Web app in one VPC and RDS in a separate VPC and connect them with VPC peering.

B A Virtual Private Cloud (VPC) is a virtual network dedicated to the user's AWS account. It enables the user to launch AWS resources, such as RDS into a virtual network that the user has defined. Subnets are segments of a VPC's IP address range that the user can designate to a group of VPC resources based on the security and operational needs. A DB subnet group is a collection of subnets (generally private) that a user can create in a VPC and assign to the RDS DB instances. A DB subnet group allows the user to specify a particular VPC when creating the DB instances. Each DB subnet group should have subnets in at least two Availability Zones in a given region. Reference: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.html
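A boto3 sketch of option B; the subnet IDs, instance identifier, and credentials are placeholders:

import boto3

rds = boto3.client("rds")

# DB subnet group spanning private subnets in two Availability Zones.
rds.create_db_subnet_group(
    DBSubnetGroupName="web-app-db-subnets",
    DBSubnetGroupDescription="Private subnets for the RDS instance",
    SubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],
)

# Launch the DB instance into that subnet group, Multi-AZ and private.
rds.create_db_instance(
    DBInstanceIdentifier="web-app-db",
    DBInstanceClass="db.t3.medium",
    Engine="mysql",
    MasterUsername="admin",
    MasterUserPassword="change-me",
    AllocatedStorage=100,
    DBSubnetGroupName="web-app-db-subnets",
    MultiAZ=True,
    PubliclyAccessible=False,
)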

As one of the AWS resource types, AWS ________ supports ________ environments. A. Elastic Beanstalk; Elastic Beanstalk application B. CloudFormation; Elastic Beanstalk application C. Elastic Beanstalk; CloudFormation application D. CloudFormation; CloudFormation application

B AWS CloudFormation and AWS Elastic Beanstalk services are designed to complement each other. AWS CloudFormation supports Elastic Beanstalk application environments as one of the AWS resource types. Reference: http://aws.amazon.com/cloudformation/faqs/

You're working on a new mobile application and contemplating using AWS to store user preferences. This would give a more consistent cross-device experience to users who access the application through numerous mobile devices. Each user's preference data is projected to be 50KB in size. Additionally, the application is expected to be used daily by 5 million subscribers. How would you build a system that is cost-effective, highly available, scalable, and secure? A. Set up an RDS MySQL instance in 2 availability zones to store the user preference data. Deploy a public-facing application on a server in front of the database to manage security and access credentials. B. Set up a DynamoDB table with an item for each user holding the necessary attributes for the user's preferences. The mobile application will query the user preferences directly from the DynamoDB table. Utilize STS, Web Identity Federation, and DynamoDB Fine-Grained Access Control to authenticate and authorize access. C. Set up an RDS MySQL instance with multiple read replicas in 2 availability zones to store the user preference data. The mobile application will query the user preferences from the read replicas. Leverage the MySQL user management and access privilege system to manage security and access credentials. D. Store the user preference data in S3. Set up a DynamoDB table with an item for each user and an item attribute pointing to the user's S3 object. The mobile application will retrieve the S3 URL from DynamoDB and then access the S3 object directly. Utilize STS, Web Identity Federation, and S3 ACLs to authenticate and authorize access.

B Here are some of the things that you can build using fine-grained access control: A mobile app that displays information for nearby airports, based on the user's location. The app can access and display attributes such as airline names, arrival times, and flight numbers. However, it cannot access or display pilot names or passenger counts. A mobile game which stores high scores for all users in a single table. Each user can update their own scores, but has no access to the other ones. Reference: https://aws.amazon.com/blogs/aws/fine-grained-access-control-for-amazon-dynamodb/
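A sketch of the fine-grained access control pattern the explanation describes, built as a Python dict for illustration; the table ARN is a placeholder, and dynamodb:LeadingKeys restricts each federated user to items whose partition key equals their own Cognito identity ID:

import json

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/UserPreferences",
        "Condition": {
            "ForAllValues:StringEquals": {
                # Each user may only touch rows keyed by their identity.
                "dynamodb:LeadingKeys": ["${cognito-identity.amazonaws.com:sub}"]
            }
        },
    }],
}
print(json.dumps(policy, indent=2))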

The user has provisioned a PIOPS volume on an EBS-optimized instance. In general, what I/O chunk size does AWS use to calculate the bandwidth the user experiences? A. 128 KB B. 256 KB C. 64 KB D. 32 KB

B IOPS are input/output operations per second. Amazon EBS measures each I/O operation per second (that is 256 KB or smaller) as one IOPS. Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-io-characteristics.html
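A small worked example of that counting rule, under the simplifying assumption that each I/O larger than 256 KB is split into 256 KB chunks:

import math

def iops_consumed(io_size_kb):
    # Each I/O of 256 KB or smaller counts as one IOPS;
    # larger I/Os consume one IOPS per 256 KB chunk.
    return max(1, math.ceil(io_size_kb / 256))

print(iops_consumed(64))    # 1 - a small I/O still costs one IOPS
print(iops_consumed(1024))  # 4 - a 1 MB I/O counts as four IOPS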

Which of the following is not a suitable tag key/value combination when adding a tag to an instance? A. Key: "aws" Value: "aws" B. Key: "aws:name" Value: "instance Aws" C. Key: "Name:aws" Value: "instance Aws" D. Key: "name Aws" Value: "aws:instance"

B In Amazon Web Services, to help manage EC2 instances and their usage, the user can tag the instances. Tags are metadata assigned by the user, each consisting of a key and a value. A tag key cannot begin with the prefix "aws:", although the bare string "aws" is allowed as a key. Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html
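A minimal boto3 sketch of this behavior (the instance ID and tag values are placeholders): a user-defined key succeeds, while a key beginning with the reserved "aws:" prefix is rejected by EC2:

    import boto3
    from botocore.exceptions import ClientError

    ec2 = boto3.client("ec2")

    # A user-defined key is accepted.
    ec2.create_tags(
        Resources=["i-0123456789abcdef0"],  # placeholder instance ID
        Tags=[{"Key": "Name", "Value": "web-server"}],
    )

    # A key starting with the reserved "aws:" prefix is rejected.
    try:
        ec2.create_tags(
            Resources=["i-0123456789abcdef0"],
            Tags=[{"Key": "aws:name", "Value": "web-server"}],
        )
    except ClientError as err:
        print("Rejected as expected:", err.response["Error"]["Code"])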

Which of the following parameters does Amazon not charge you for in relation to DynamoDB? A. Storage cost B. I/O usage within the same Region C. Cost per provisioned read units D. Cost per provisioned write units

B In DynamoDB, you are charged for the storage and the provisioned throughput you use, not for I/O usage within the same Region. Reference: http://aws.amazon.com/dynamodb/pricing/

A user is building a Volume with Provisioned IOPS. What is the maximum ratio of Provisioned IOPS to volume size that the user should configure? A. 30 to 1 B. 50 to 1 C. 10 to 1 D. 20 to 1

B Provisioned IOPS SSD (io1) volumes are designed to meet the needs of I/O-intensive workloads, particularly database workloads, that are sensitive to storage performance and consistency. An io1 volume can range in size from 4 GiB to 16 TiB, and you can provision from 100 up to 20,000 IOPS per volume. The maximum ratio of provisioned IOPS to requested volume size (in GiB) is 50:1. For example, a 100 GiB volume can be provisioned with up to 5,000 IOPS. Any volume 400 GiB in size or greater allows provisioning up to the 20,000 IOPS maximum. Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html

A business must operate software that is licensed to be executed on a single physical host for the length of its usage. The software package will be utilized for a period of 90 days. Every 30 days, the organization demands that all instances be patched and restarted. How may AWS be used to meet these requirements? A. Run a dedicated instance with auto-placement disabled. B. Run the instance on a dedicated host with Host Affinity set to Host. C. Run an On-Demand Instance with a Reserved Instance to ensure consistent placement. D. Run the instance on a licensed host with termination set for 90 days.

B Reference: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/how-dedicated-hosts-work.html

On AWS, a business is operating a production workload that is very I/O heavy. The workload runs in a single tier comprised of ten c4.8xlarge instances, each with a 2 TB gp2 volume. Recently, both the number of processing jobs and latency have grown. The team knows it is limited by IOPS and needs to boost the IOPS by 3,000 for each instance in order for the application to run effectively. Which of the following designs will most effectively achieve the performance objective? A. Change the type of Amazon EBS volume from gp2 to io1 and set provisioned IOPS to 9,000. B. Increase the size of the gp2 volumes in each instance to 3 TB. C. Create a new Amazon EFS file system and move all the data to this new file system. Mount this file system to all 10 instances. D. Create a new Amazon S3 bucket and move all the data to this new bucket. Allow each instance to access this S3 bucket and use it for storage.

B gp2 volumes deliver a baseline of 3 IOPS per GiB, so growing each volume from 2 TB (6,000 IOPS) to 3 TB (9,000 IOPS) adds the required 3,000 IOPS per instance without changing the volume type. Reference: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html
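A sketch of resizing a gp2 volume in place with the Elastic Volumes API (the volume ID is a placeholder):

    import boto3

    ec2 = boto3.client("ec2")

    # Growing a gp2 volume from 2 TiB to 3 TiB raises its baseline
    # from 6,000 to 9,000 IOPS (3 IOPS per GiB), with no downtime.
    ec2.modify_volume(
        VolumeId="vol-0123456789abcdef0",  # placeholder volume ID
        Size=3072,  # new size in GiB
    )

    # Progress of the modification can be polled if needed.
    resp = ec2.describe_volumes_modifications(
        VolumeIds=["vol-0123456789abcdef0"]
    )
    print(resp["VolumesModifications"][0]["ModificationState"])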

A business already has an on-premises three-tier web application. Because the material is updated multiple times a day from numerous sources, the Linux web servers serve it through a centralized file share on a NAS server. The current infrastructure is inefficient, and the organization wants to migrate to AWS in order to have the capacity to dynamically scale resources in response to demand. AWS Direct Connect is used to connect on-premises and AWS resources. How can the organization transition its web infrastructure to AWS without causing a delay in the content refresh process? A. Create a cluster of web server Amazon EC2 instances behind a Classic Load Balancer on AWS. Share an Amazon EBS volume among all instances for the content. Schedule a periodic synchronization of this volume and the NAS server. B. Create an on-premises file gateway using AWS Storage Gateway to replace the NAS server and replicate content to AWS. On the AWS side, mount the same Storage Gateway bucket to each web server Amazon EC2 instance to serve the content. C. Expose an Amazon EFS share to on-premises users to serve as the NAS server. Mount the same EFS share to the web server Amazon EC2 instances to serve the content. D. Create web server Amazon EC2 instances on AWS in an Auto Scaling group. Configure a nightly process where the web server instances are updated from the NAS server.

B Reference: https://docs.aws.amazon.com/storagegateway/latest/userguide/GettingStartedAccessFileShare.html

A business wishes to expand its data center by connecting it through the VPN gateway to the AWS VPC. The organization is establishing a dynamically routed VPN connection. Which of the following is not necessary for this configuration to be set up? A. The type of customer gateway, such as Cisco ASA, Juniper J-Series, Juniper SSG, Yamaha. B. Elastic IP ranges that the organization wants to advertise over the VPN connection to the VPC. C. Internet-routable IP address (static) of the customer gateway's external interface. D. Border Gateway Protocol (BGP) Autonomous System Number (ASN) of the customer gateway.

B The Amazon Virtual Private Cloud (Amazon VPC) allows the user to define a virtual networking environment in a private, isolated section of the Amazon Web Services (AWS) cloud. The user has complete control over the virtual networking environment. The organization wants to extend its network into the cloud and also directly access the internet from its AWS VPC. Thus, the organization should set up a Virtual Private Cloud (VPC) with a public subnet and a private subnet, and a virtual private gateway to enable communication with its data center network over an IPsec VPN tunnel. To set up this configuration, the organization needs to use the Amazon VPC with a VPN connection. The organization's network administrator must designate a physical appliance as a customer gateway and configure it. The organization would need the following information for this configuration: the type of customer gateway, such as Cisco ASA, Juniper J-Series, Juniper SSG, or Yamaha; the internet-routable (static) IP address of the customer gateway's external interface; the Border Gateway Protocol (BGP) Autonomous System Number (ASN) of the customer gateway, if the organization is creating a dynamically routed VPN connection; and the internal network IP ranges that the user wants to advertise over the VPN connection to the VPC. Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_VPN.html

The Condition element is _________ in the context of rules and permissions in AWS IAM. A. crucial while writing the IAM policies B. an optional element C. always set to null D. a mandatory element

B The Condition element (or Condition block) lets you specify conditions for when a policy is in effect. The Condition element is optional. Reference: http://docs.aws.amazon.com/IAM/latest/UserGuide/AccessPolicyLanguage_ElementDescriptions.html
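For illustration (the CIDR range and policy name are assumptions, not from the question), a policy whose statement carries an optional Condition block restricting when it takes effect:

    import json
    import boto3

    iam = boto3.client("iam")

    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "*",
            # Optional Condition block: the statement only applies when
            # the request originates from this CIDR range.
            "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}},
        }],
    }

    iam.create_policy(
        PolicyName="ListBucketsFromOffice",  # hypothetical policy name
        PolicyDocument=json.dumps(policy),
    )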

Which of the following settings should the user adjust to manually scale out AWS resources using AutoScaling? A. Current capacity B. Desired capacity C. Preferred capacity D. Maximum capacity

B Manual scaling in Auto Scaling allows the user to change the capacity of an Auto Scaling group, adding or removing EC2 instances on the fly. To scale manually, the user should modify the desired capacity; Auto Scaling will then adjust the instances as required. Reference: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/as-manual-scaling.html
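A minimal sketch of manual scaling with boto3, assuming an Auto Scaling group named "my-asg" (a hypothetical name):

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Manual scaling: change the desired capacity, and Auto Scaling
    # launches or terminates instances to match it.
    autoscaling.set_desired_capacity(
        AutoScalingGroupName="my-asg",  # hypothetical group name
        DesiredCapacity=4,
        HonorCooldown=True,  # respect the cooldown period before scaling
    )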

Which of the following assertions about Amazon ElastiCache is correct? A. When you launch an ElastiCache cluster into an Amazon VPC private subnet, every cache node is assigned a public IP address within that subnet. B. You cannot use ElastiCache in a VPC that is configured for dedicated instance tenancy. C. If your AWS account supports only the EC2-VPC platform, ElastiCache will never launch your cluster in a VPC. D. ElastiCache is not fully integrated with Amazon Virtual Private Cloud (VPC).

B The VPC must allow non-dedicated EC2 instances. You cannot use ElastiCache in a VPC that is configured for dedicated instance tenancy. Reference: http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/AmazonVPC.EC.html

Determine a true assertion regarding user passwords in the context of AWS IAM (login profiles). A. They must contain Unicode characters. B. They can contain any Basic Latin (ASCII) characters. C. They must begin and end with a forward slash (/). D. They cannot contain Basic Latin (ASCII) characters.

B The user passwords (login profiles) of IAM users can contain any Basic Latin (ASCII) characters. Reference: http://docs.aws.amazon.com/IAM/latest/UserGuide/LimitationsOnEntities.html

What should you use for incoming traffic on an elastic network interface (ENI) to guarantee failover capabilities? A. A Route53 A record B. A secondary private IP C. A secondary public IP D. A secondary ENI

B To ensure failover capabilities on an elastic network interface (ENI), consider using a secondary private IP for incoming traffic; if a failure occurs, you can move the interface and/or the secondary private IP address to a standby instance. Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html

You may provision up to 3TB of storage and 30,000 IOPS per database instance with Amazon RDS for PostgreSQL. PostgreSQL can achieve over 25,000 IOPS with a workload composed of 50% writes and 50% reads running on a cr1.8xlarge instance. However, by provisioning beyond this level, you may be able to achieve: A. higher latency and lower throughput. B. lower latency and higher throughput. C. higher throughput only. D. higher latency only.

B You can provision up to 3TB storage and 30,000 IOPS per database instance. For a workload with 50% writes and 50% reads running on a cr1.8xlarge instance, you can realize over 25,000 IOPS for PostgreSQL. However, by provisioning more than this limit, you may be able to achieve lower latency and higher throughput. Your actual realized IOPS may vary from the amount you provisioned based on your database workload, instance type, and database engine choice. Reference: https://aws.amazon.com/rds/postgresql/

All Amazon EC2 images must be inspected for vulnerabilities and pass a CVE assessment as part of a company's security compliance obligations. A solutions architect is working on a technique for creating developer-friendly AMIs that are security-approved. Before developers may utilize any new AMIs, the AMIs should undergo an automated evaluation procedure and be designated as acceptable. To guarantee compliance, approved AMIs must be scanned every 30 days. Which measures should the solutions architect take in combination to meet these criteria while adhering to best practices? (Select two.) A. Use the AWS Systems Manager EC2 agent to run the CVE assessment on the EC2 instances launched from the AMIs that need to be scanned. B. Use AWS Lambda to write automatic approval rules. Store the approved AMI list in AWS Systems Manager Parameter Store. Use Amazon EventBridge to trigger an AWS Systems Manager Automation document on all EC2 instances every 30 days. C. Use Amazon Inspector to run the CVE assessment on the EC2 instances launched from the AMIs that need to be scanned. D. Use AWS Lambda to write automatic approval rules. Store the approved AMI list in AWS Systems Manager Parameter Store. Use a managed AWS Config rule for continuous scanning on all EC2 instances, and use AWS Systems Manager Automation documents for remediation. E. Use AWS CloudTrail to run the CVE assessment on the EC2 instances launched from the AMIs that need to be scanned.

BC

From a web application hosted on AWS in the eu-west-1 Region, a weather service offers high-resolution weather maps. Weather maps are often updated and are kept in Amazon S3 with static HTML content. Amazon CloudFront serves as the front end for the web application. The firm has expanded to serve consumers in the us-east-1 Region, and these new users have reported experiencing intermittent slowness while viewing their individual weather maps. Which combination of procedures will address the performance concerns in us-east-1? (Select two.) A. Configure the AWS Global Accelerator endpoint for the S3 bucket in eu-west-1. Configure endpoint groups for TCP ports 80 and 443 in us-east-1. B. Create a new S3 bucket in us-east-1. Configure S3 cross-Region replication to synchronize from the S3 bucket in eu-west-1. C. Use Lambda@Edge to modify requests from North America to use the S3 Transfer Acceleration endpoint in us-east-1. D. Use Lambda@Edge to modify requests from North America to use the S3 bucket in us-east-1. E. Configure the AWS Global Accelerator endpoint for us-east-1 as an origin on the CloudFront distribution. Use Lambda@Edge to modify requests from North America to use the new origin.

BC

An Amazon Virtual Private Cloud (VPC) hosts a corporate online application that is linked to the corporate data center through an IPSec VPN. Authentication against the on-premises LDAP server is required. Following authentication, each logged-in user has access to just their own Amazon Simple Storage Service (S3) keyspace. Which two techniques are most likely to accomplish these goals? (Select two.) A. Develop an identity broker that authenticates against IAM security Token service to assume a IAM role in order to get temporary AWS security credentials The application calls the identity broker to get AWS temporary security credentials with access to the appropriate S3 bucket. B. The application authenticates against LDAP and retrieves the name of an IAM role associated with the user. The application then calls the IAM Security Token Service to assume that IAM role. The application can use the temporary credentials to access the appropriate S3 bucket. C. Develop an identity broker that authenticates against LDAP and then calls IAM Security Token Service to get IAM federated user credentials. The application calls the identity broker to get IAM federated user credentials with access to the appropriate S3 bucket. D. The application authenticates against LDAP the application then calls the AWS identity and Access Management (IAM) Security service to log in to IAM using the LDAP credentials the application can use the IAM temporary credentials to access the appropriate S3 bucket. E. The application authenticates against IAM Security Token Service using the LDAP credentials the application uses those temporary AWS security credentials to access the appropriate S3 bucket.

BC Imagine that in your organization, you want to provide a way for users to copy data from their computers to a backup folder. You build an application that users can run on their computers. On the back end, the application reads and writes objects in an S3 bucket. Users don't have direct access to AWS. Instead, the application communicates with an identity provider (IdP) to authenticate the user. The IdP gets the user information from your organization's identity store (such as an LDAP directory) and then generates a SAML assertion that includes authentication and authorization information about that user. The application then uses that assertion to make a call to the AssumeRoleWithSAML API to get temporary security credentials. The app can then use those credentials to access a folder in the S3 bucket that's specific to the user. Reference: http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_saml.html
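A sketch of the broker's final step, assuming the IdP has already authenticated the user against LDAP and returned a base64-encoded SAML assertion (the role and provider ARNs are placeholders):

    import boto3

    sts = boto3.client("sts")

    saml_assertion = "<base64-encoded SAML assertion from the IdP>"  # placeholder

    # Exchange the SAML assertion for temporary credentials; the role's
    # permissions policy scopes the user to their own S3 keyspace.
    resp = sts.assume_role_with_saml(
        RoleArn="arn:aws:iam::123456789012:role/S3BackupUser",        # placeholder
        PrincipalArn="arn:aws:iam::123456789012:saml-provider/LDAP",  # placeholder
        SAMLAssertion=saml_assertion,
        DurationSeconds=3600,
    )

    creds = resp["Credentials"]
    s3 = boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )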

A fitness monitoring firm serves customers worldwide, with the majority of its revenue coming from North America and Asia. The corporation must create an infrastructure that meets the following criteria for its read-intensive user authorization application: ✑ Be robust to application-related issues in any Region. ✑ Write to a database in a single Region. ✑ Read from multiple Regions. ✑ Provide resilience across application layers in each Region. ✑ Preserve the application's relational database semantics. Which actions should a solutions architect take in combination? (Select two.) A. Use an Amazon Route 53 geoproximity routing policy combined with a multivalue answer routing policy. B. Deploy web, application, and MySQL database servers to Amazon EC2 instance in each Region. Set up the application so that reads and writes are local to the Region. Create snapshots of the web, application, and database servers and store the snapshots in an Amazon S3 bucket in both Regions. Set up cross- Region replication for the database layer. C. Use an Amazon Route 53 geolocation routing policy combined with a failover routing policy. D. Set up web, application, and Amazon RDS for MySQL instances in each Region. Set up the application so that reads are local and writes are partitioned based on the user. Set up a Multi-AZ failover for the web, application, and database servers. Set up cross-Region replication for the database layer. E. Set up active-active web and application servers in each Region. Deploy an Amazon Aurora global database with clusters in each Region. Set up the application to use the in-Region Aurora database endpoints. Create snapshots of the web application servers and store them in an Amazon S3 bucket in both Regions.

BD

A significant financial institution is utilizing AWS CloudFormation to deliver apps to the AWS Cloud using Amazon EC2 and Amazon RDS instances. CloudFormation's stack policy is as follows: { "Statement" : [ { "Effect" : "Allow", "Action" : ["Update:*"], "Principal" : "*", "Resource" : "*" } ] } The organization aims to guarantee that developers do not lose data while changing the CloudFormation stack by mistakenly deleting or replacing RDS instances. Additionally, developers must be allowed to alter or delete EC2 instances as required. What changes should be made to the company's stack policy to comply with these requirements? A. Modify the statement to specify "Effect": "Deny", "Action": ["Update:*"] for all logical RDS resources. B. Modify the statement to specify "Effect": "Deny", "Action": ["Update:Delete"] for all logical RDS resources. C. Add a second statement that specifies "Effect": "Deny", "Action": ["Update:Delete", "Update:Replace"] for all logical RDS resources. D. Add a second statement that specifies "Effect": "Deny", "Action": ["Update:*"] for all logical RDS resources.

C An explicit Deny statement for Update:Delete and Update:Replace on the RDS logical resources overrides the blanket Allow, preventing accidental deletion or replacement of the database instances during stack updates while still letting developers modify or delete EC2 instances.
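A sketch of the resulting stack policy applied with boto3 (the stack name and the RDS logical ID ProductionDatabase are hypothetical):

    import json
    import boto3

    cfn = boto3.client("cloudformation")

    stack_policy = {
        "Statement": [
            # The existing blanket Allow stays in place for all resources.
            {"Effect": "Allow", "Action": "Update:*",
             "Principal": "*", "Resource": "*"},
            # An explicit Deny overrides the Allow for destructive updates
            # to the RDS logical resource only.
            {"Effect": "Deny",
             "Action": ["Update:Delete", "Update:Replace"],
             "Principal": "*",
             "Resource": "LogicalResourceId/ProductionDatabase"},  # hypothetical
        ]
    }

    cfn.set_stack_policy(
        StackName="financial-app-stack",  # hypothetical stack name
        StackPolicyBody=json.dumps(stack_policy),
    )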

By shifting to AWS, a corporation wishes to control the expenses associated with a set of twenty lightly used but mission-critical apps. The apps are written in Java and Node.js and are distributed over many instance clusters. The organization wishes to reduce expenses while increasing standardization via the use of a single deployment approach. The majority of the programs are used as part of month-end processing procedures with a limited number of concurrent users, although they are also used at other times. The average program consumes less than 1 GB of memory, while some apps use up to 2.5 GB at peak activity. The group's most critical application is a Java-based billing report that often accesses several data sources and runs for many hours. Which approach is the MOST cost-effective? A. Deploy a separate AWS Lambda function for each application. Use AWS CloudTrail logs and Amazon CloudWatch alarms to verify completion of critical jobs. B. Deploy Amazon ECS containers on Amazon EC2 with Auto Scaling configured for memory utilization of 75%. Deploy an ECS task for each application being migrated with ECS task scaling. Monitor services and hosts by using Amazon CloudWatch. C. Deploy AWS Elastic Beanstalk for each application with Auto Scaling to ensure that all requests have sufficient resources. Monitor each AWS Elastic Beanstalk deployment by using CloudWatch alarms. D. Deploy a new Amazon EC2 instance cluster that co-hosts all applications by using EC2 Auto Scaling and Application Load Balancers. Scale cluster size based on a custom metric set on instance memory utilization. Purchase 3-year Reserved Instance reservations equal to the GroupMaxSize parameter of the Auto Scaling group.

C

On Amazon EC2 instances, a business is operating an Apache Hadoop cluster. The Hadoop cluster holds around 100 TB of data for weekly operating reports and allows data scientists to access the cluster on an as-needed basis. The company must reduce the expense and operational complexity of storing and serving this data. Which solution satisfies these needs at the LOWEST cost? A. Move the Hadoop cluster from EC2 instances to Amazon EMR. Allow data access patterns to remain the same. B. Write a script that resizes the EC2 instances to a smaller instance type during downtime and resizes the instances to a larger instance type before the reports are created. C. Move the data to Amazon S3 and use Amazon Athena to query the data for reports. Allow the data scientists to access the data directly in Amazon S3. D. Migrate the data to Amazon DynamoDB and modify the reports to fetch data from DynamoDB. Allow the data scientists to access the data directly in DynamoDB.

C

A fleet of Amazon ECS instances polls an Amazon SQS queue and updates items in an Amazon DynamoDB table. The table is not being updated, and the SQS queue is growing rapidly. When the table update is attempted, Amazon CloudWatch Logs consistently show 400 errors. The provisioned write capacity units are set adequately, and there is no throttling. Which of the following is the MOST LIKELY cause of the failure? A. The ECS service was deleted. B. The ECS configuration does not contain an Auto Scaling group. C. The ECS instance task execution IAM role was modified. D. The ECS task role was modified.

C

A user has used the VPC wizard to construct a VPC with public and private subnets. Which of the following claims about this circumstance is true? A. The user has to manually create a NAT instance B. The Amazon VPC will automatically create a NAT instance with the micro size only C. VPC updates the main route table used with the private subnet, and creates a custom route table with a public subnet D. VPC updates the main route table used with a public subnet, and creates a custom route table with a private subnet

C A Virtual Private Cloud (VPC) is a virtual network dedicated to the user's AWS account. A user can create a subnet within a VPC and launch instances inside that subnet. If the user has created a public subnet, the instances in the public subnet can receive inbound traffic directly from the internet, whereas the instances in the private subnet cannot. If these subnets are created with the wizard, AWS also launches a NAT instance, whose size the user can select (it is not restricted to micro). The VPC has an implied router, and the VPC wizard updates the main route table used with the private subnet, then creates a custom route table and associates it with the public subnet. Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario2.html

An enterprise has configured RDS using a virtual private cloud (VPC). The company desires internet access to RDS. Which of the setups listed below is not necessary in this scenario? A. The organization must enable the parameter in the console which makes the RDS instance publicly accessible. B. The organization must allow access from the internet in the RDS VPC security group, C. The organization must setup RDS with the subnet group which has an external IP. D. The organization must enable the VPC attributes DNS hostnames and DNS resolution.

C A Virtual Private Cloud (VPC) is a virtual network dedicated to the user's AWS account. It enables the user to launch AWS resources, such as RDS, into a virtual network that the user has defined. Subnets are segments of a VPC's IP address range that the user can designate to a group of VPC resources based on security and operational needs. A DB subnet group is a collection of subnets (generally private) that the user can create in a VPC and assign to the RDS DB instances. A DB subnet group allows the user to specify a particular VPC when creating DB instances. If the RDS instance is required to be accessible from the internet: the organization must ensure the VPC attributes DNS hostnames and DNS resolution are enabled; the organization must enable the parameter in the console which makes the RDS instance publicly accessible; and the organization must allow access from the internet in the RDS VPC security group. A subnet group with an external IP is not required. Reference: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.html

A security audit is being conducted on a company. The auditor is requesting access to the AWS VPC settings, since the business has hosted all apps on AWS. The auditor is located in a faraway location and requires access to AWS in order to see all VPC information. How can the firm satisfy the auditor's requirements without jeopardizing the security of its AWS infrastructure? A. The organization should not accept the request as sharing the credentials means compromising on security. B. Create an IAM role which will have read only access to all EC2 services including VPC and assign that role to the auditor. C. Create an IAM user who will have read only access to the AWS VPC and share those credentials with the auditor. D. The organization should create an IAM user with VPC full access but set a condition that will not allow to modify anything if the request is from any IP other than the organization's data center.

C A Virtual Private Cloud (VPC) is a virtual network dedicated to the user's AWS account. The user can create subnets as required within a VPC. The VPC also works with IAM, and the organization can create IAM users who have access to various VPC services. If an auditor wants access to the AWS VPC to verify the rules, the organization should be careful before sharing any credentials which would allow updates to the AWS infrastructure. In this scenario, it is recommended that the organization create an IAM user who will have read-only access to the VPC and share those credentials with the auditor, as read-only access cannot harm the organization. A sample policy is given below: { "Effect":"Allow", "Action": [ "ec2:DescribeVpcs", "ec2:DescribeSubnets", "ec2:DescribeInternetGateways", "ec2:DescribeCustomerGateways", "ec2:DescribeVpnGateways", "ec2:DescribeVpnConnections", "ec2:DescribeRouteTables", "ec2:DescribeAddresses", "ec2:DescribeSecurityGroups", "ec2:DescribeNetworkAcls", "ec2:DescribeDhcpOptions", "ec2:DescribeTags", "ec2:DescribeInstances" ], "Resource":"*" } Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_IAM.html

AWS has introduced T2 instances with CPU credits. A business requires that an instance be operational 24 hours a day; however, usage peaks between 11 a.m. and 12 p.m. The firm intends to accomplish this goal by using a small T2 instance. If the company has been operating numerous instances since January 2012, which of the following choices should it use when creating a T2 instance? A. The organization must migrate to the EC2-VPC platform first before launching a T2 instance. B. While launching a T2 instance the organization must create a new AWS account as this account does not have the EC2-VPC platform. C. Create a VPC and launch a T2 instance as part of one of the subnets of that VPC. D. While launching a T2 instance the organization must select EC2-VPC as the platform.

C A Virtual Private Cloud (VPC) is a virtual network dedicated to the user's AWS account. The user can create subnets as required within a VPC. An AWS account provides two platforms, EC2-Classic and EC2-VPC, depending on when the account was created and which Regions are used. If the account was created after 2013-12-04, it supports only EC2-VPC. In this scenario, since the account predates that cutoff, the supported platform is EC2-Classic; the organization must therefore create a VPC, because T2 instances can be launched only inside a VPC. Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/vpc-migrate.html

A user has configured two security groups that permit the following traffic: SecGrp1: inbound on port 80 for 0.0.0.0/0 and inbound on port 22 for 0.0.0.0/0. SecGrp2: inbound on port 22 for 10.10.10.1/32. Which of the following assertions is true if both security groups are attached to the same instance? A. It is not possible to have more than one security group assigned to a single instance B. It is not possible to create the security group with conflicting rules. AWS will reject the request C. It allows inbound traffic for everyone on both ports 22 and 80 D. It allows inbound traffic on port 22 for IP 10.10.10.1 and for everyone else on port 80

C A user can attach more than one security group to a single EC2 instance. In this case, the rules from each security group are effectively aggregated to create one set of rules. AWS uses this set of rules to determine whether to allow access or not. Thus, here the rule for port 22 with IP 10.10.10.1/32 will merge with IP 0.0.0.0/0 and open ports 22 and 80 for all. Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html

Which statement concerning accessing a distant AWS region in the United States using your US-based AWS Direct Connect is NOT true? A. AWS Direct Connect locations in the United States can access public resources in any US region. B. You can use a single AWS Direct Connect connection to build multi-region services. C. Any data transfer out of a remote region is billed at the location of your AWS Direct Connect data transfer rate. D. To connect to a VPC in a remote region, you can use a virtual private network (VPN) connection over your public virtual interface.

C AWS Direct Connect locations in the United States can access public resources in any US region. You can use a single AWS Direct Connect connection to build multi-region services. To connect to a VPC in a remote region, you can use a virtual private network (VPN) connection over your public virtual interface. To access public resources in a remote region, you must set up a public virtual interface and establish a border gateway protocol (BGP) session. Then your router learns the routes of the other AWS regions in the US. You can then also establish a VPN connection to your VPC in the remote region. Any data transfer out of a remote region is billed at the remote region data transfer rate. Reference: http://docs.aws.amazon.com/directconnect/latest/UserGuide/remote_regions.html

Is it possible to alter a set of DHCP options created in a VPC after they have been created? A. Yes, you can modify a set of DHCP options within 48 hours after creation and there are no VPCs associated with them. B. Yes, you can modify a set of DHCP options any time after you create them. C. No, you can't modify a set of DHCP options after you create them. D. Yes, you can modify a set of DHCP options within 24 hours after creation.

C After you create a set of DHCP options, you can't modify them. If you want your VPC to use a different set of DHCP options, you must create a new set and associate them with your VPC. You can also set up your VPC to use no DHCP options at all. Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_DHCP_Options.html
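Because an option set is immutable, replacing it means creating a new set and re-associating it, e.g. (the domain name, DNS server, and VPC ID are placeholders):

    import boto3

    ec2 = boto3.client("ec2")

    # DHCP option sets cannot be modified, so create a new one...
    resp = ec2.create_dhcp_options(
        DhcpConfigurations=[
            {"Key": "domain-name", "Values": ["example.internal"]},  # placeholder
            {"Key": "domain-name-servers", "Values": ["10.0.0.2"]},  # placeholder
        ]
    )

    # ...and associate it with the VPC in place of the old set.
    ec2.associate_dhcp_options(
        DhcpOptionsId=resp["DhcpOptions"]["DhcpOptionsId"],
        VpcId="vpc-0123456789abcdef0",  # placeholder VPC ID
    )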

Which of the following procedures is not available using the DynamoDB console? A. Updating an item B. Copying an item C. Blocking an item D. Deleting an item

C By using the console to manage DynamoDB, you can perform the following: adding an item, deleting an item, updating an item, and copying an item. Reference: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/AddUpdateDeleteItems.html

Which of the following features in DynamoDB enables you to trigger alerts when a statistic reaches a preset threshold? A. Alarm Signal B. DynamoDB Analyzer C. CloudWatch D. DynamoDBALARM

C CloudWatch allows you to set alarms when you reach a specified threshold for a metric. Reference: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/MonitoringDynamoDB.html
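A sketch of such an alarm on a DynamoDB metric (the table name, threshold, and SNS topic are assumptions):

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # Alarm when consumed read capacity crosses a preset threshold.
    cloudwatch.put_metric_alarm(
        AlarmName="OrdersTableHighReads",  # hypothetical alarm name
        Namespace="AWS/DynamoDB",
        MetricName="ConsumedReadCapacityUnits",
        Dimensions=[{"Name": "TableName", "Value": "Orders"}],  # placeholder table
        Statistic="Sum",
        Period=300,
        EvaluationPeriods=1,
        Threshold=240.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # placeholder
    )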

ec2ifscan is one of the components of ec2-net-utils that is used with ENIs. Which of the following statements regarding ec2-net-utils is incorrect? A. ec2-net-utils generates an interface configuration file suitable for use with DHCP. B. ec2-net-utils extends the functionality of the standard ifup. C. ec2-net-utils detaches a primary network interface from an instance. D. ec2-net-utils identifies network interfaces when they are attached, detached, or reattached to a running instance.

C Each instance in a VPC has a default elastic network interface (the primary network interface) that is assigned a private IP address from the IP address range of your VPC. You cannot detach a primary network interface from an instance. You can create and attach additional elastic network interfaces. Amazon Linux AMIs may contain additional scripts installed by AWS, known as ec2-net-utils. One of the components of ec2-net-utils used with ENIs is ec2ifscan. Its function is to check for network interfaces that have not been configured and configure them. Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html

Which of the following actions should you take to get started with AWS Direct Connect? A. Complete the Cross Connect B. Configure Redundant Connections with AWS Direct Connect C. Create a Virtual Interface D. Download Router Configuration

C In AWS Direct Connect, your network must support Border Gateway Protocol (BGP) and BGP MD5 authentication, and you need to provide a private Autonomous System Number (ASN) to connect to Amazon Virtual Private Cloud (VPC). To connect to public AWS products such as Amazon EC2 and Amazon S3, you will also need to provide a public ASN that you own (preferred) or a private ASN. You have to configure BGP in the Create a Virtual Interface step. Reference: http://docs.aws.amazon.com/directconnect/latest/UserGuide/getstarted.html#createvirtualinterface

What is the maximum length of a role name in Amazon IAM? A. 128 characters B. 512 characters C. 64 characters D. 256 characters

C In Amazon IAM, the maximum length for a role name is 64 characters. Reference: http://docs.aws.amazon.com/IAM/latest/UserGuide/LimitationsOnEntities.html

A user has created an AWS AMI. The user wants the AMI to be accessible only to a friend. How can the user manage this? A. Share the AMI with the community and set up an approval workflow before anyone launches it. B. It is not possible to share the AMI with a selected user. C. Share the AMI with the friend's AWS account ID. D. Share the AMI with the friend's AWS login ID.

C In Amazon Web Services, if a user has created an AMI and wants to share it with friends or colleagues, he can share the AMI with their AWS account IDs. Once the AMI is shared, the other user can access it under the private AMIs option of community AMIs. Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/sharingamis-explicit.html
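A sketch of sharing an AMI with one specific account (the AMI ID and account ID are placeholders):

    import boto3

    ec2 = boto3.client("ec2")

    # Grant launch permission on the AMI to a single AWS account.
    ec2.modify_image_attribute(
        ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
        LaunchPermission={"Add": [{"UserId": "123456789012"}]},  # friend's account ID
    )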

The ______ operation in DynamoDB may be used to get a full listing of secondary indexes on a table. A. BatchGetItem B. TableName C. DescribeTable D. GetItem

C In DynamoDB, DescribeTable returns information about the table, including the current status of the table, when it was created, the primary key schema, and any indexes on the table. Reference: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/SecondaryIndexes.html
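A sketch of listing a table's secondary indexes with DescribeTable (the table name "Music" is a placeholder):

    import boto3

    dynamodb = boto3.client("dynamodb")

    table = dynamodb.describe_table(TableName="Music")["Table"]  # placeholder name

    # DescribeTable returns any secondary indexes defined on the table.
    for index in table.get("GlobalSecondaryIndexes", []):
        print("GSI:", index["IndexName"])
    for index in table.get("LocalSecondaryIndexes", []):
        print("LSI:", index["IndexName"])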

A firm maintains a publicly available application that performs RESTful API calls to a Java-based web service. It is hosted on Apache Tomcat on a single server in a data center that consistently maintains a CPU utilization of 30%. The usage of the API is expected to increase tenfold with the release of a new product. The organization needs a smooth transfer of the program to AWS and the ability for the application to scale in response to demand. The company has already determined that traffic will be rerouted through Amazon Route 53 and CNAME records. How can we achieve these requirements with the MINIMUM feasible effort? A. Create a new IAM policy that allows access to those EC2 instances only for the Security team. Apply this policy to the AWS Organizations master account. B. Create a new tag-based IAM policy that allows access to these EC2 instances only for the Security team. Tag the instances appropriately, and apply this policy in each account. C. Create an organizational unit under AWS Organizations. Move all the accounts into this organizational unit and use SCP to apply a whitelist policy to allow access to these EC2 instances for the Security team only. D. Set up SAML federation for all accounts in AWS. Configure SAML so that it checks for the service API call before authenticating the user. Block SAML from authenticating API calls if anyone other than the Security team accesses these instances.

C Reference: https://aws.amazon.com/blogs/security/how-to-use-service-control-policies-to-set-permission-guardrails-across-accounts-in-your-aws-organization/ https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_example-scps.html

A business employed Amazon EC2 instances to deploy a web fleet for the purpose of hosting a blog. The EC2 instances are configured in an Auto Scaling group and are behind an Application Load Balancer (ALB). All blog content is stored on an Amazon EFS volume by the web application. The firm recently implemented a tool that allows bloggers to include video in their postings, which resulted in a tenfold increase in user traffic. Users report experiencing buffering and timeout difficulties when trying to access the site or view videos at the busiest periods of the day. Which deployment option is the most cost-effective and scalable in terms of resolving customer issues? A. Reconfigure Amazon EFS to enable maximum I/O. B. Update the blog site to use instance store volumes for storage. Copy the site contents to the volumes at launch and to Amazon S3 at shutdown. C. Configure an Amazon CloudFront distribution. Point the distribution to an S3 bucket, and migrate the videos from EFS to Amazon S3. D. Set up an Amazon CloudFront distribution for all site contents, and point the distribution at the ALB.

C Reference: https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-https-connection-fails/

The Database Diagnostic Pack and the Database Tuning Pack are only available with ________ in the Amazon RDS Oracle DB engine. A. Oracle Standard Edition B. Oracle Express Edition C. Oracle Enterprise Edition D. None of these

C Reference: https://blog.pythian.com/a-most-simple-cloud-is-amazon-rds-for-oracle-right-for-you/

On Amazon EC2, a business is operating a commercial Apache Hadoop cluster. This cluster is used on a regular basis to perform queries on huge files stored on Amazon S3. Amazon S3 data has been vetted and does not need any extra changes. The organization is running queries against the Hadoop cluster and visualizing the data using a commercial business intelligence (BI) application on Amazon EC2. The organization wishes to minimize or eliminate overhead expenses associated with administering the Hadoop cluster and the business intelligence application. The organization wishes to make a seamless transition to a more cost-effective option. The visualization is straightforward and takes just a few simple aggregation processes. Which solution best meets the needs of the business? A. Launch a transient Amazon EMR cluster daily and develop an Apache Hive script to analyze the files on Amazon S3. Shut down the Amazon EMR cluster when the job is complete. Then use Amazon QuickSight to connect to Amazon EMR and perform the visualization. B. Develop a stored procedure invoked from a MySQL database running on Amazon EC2 to analyze the files in Amazon S3. Then use a fast in-memory BI tool running on Amazon EC2 to visualize the data. C. Develop a script that uses Amazon Athena to query and analyze the files on Amazon S3. Then use Amazon QuickSight to connect to Athena and perform the visualization. D. Use a commercial extract, transform, load (ETL) tool that runs on Amazon EC2 to prepare the data for processing. Then switch to a faster and cheaper BI tool that runs on Amazon EC2 to visualize the data from Amazon S3.

C Reference: https://docs.aws.amazon.com/quicksight/latest/user/create-a-data-set-athena.html https://aws.amazon.com/athena/

A business maintains many AWS accounts for the purpose of hosting IT applications. An Amazon CloudWatch Logs agent is deployed on all Amazon EC2 instances. The organization wants to centralize all security events in a dedicated AWS account for log storage. Security administrators must collect and correlate events across numerous AWS accounts in near-real time. Which solution meets these criteria? A. Create a Log Audit IAM role in each application AWS account with permissions to view CloudWatch Logs, configure an AWS Lambda function to assume the Log Audit role, and perform an hourly export of CloudWatch Logs data to an Amazon S3 bucket in the logging AWS account. B. Configure CloudWatch Logs streams in each application AWS account to forward events to CloudWatch Logs in the logging AWS account. In the logging AWS account, subscribe an Amazon Kinesis Data Firehose stream to Amazon CloudWatch Events, and use the stream to persist log data in Amazon S3. C. Create Amazon Kinesis Data Streams in the logging account, subscribe the stream to CloudWatch Logs streams in each application AWS account, configure an Amazon Kinesis Data Firehose delivery stream with the Data Streams as its source, and persist the log data in an Amazon S3 bucket inside the logging AWS account. D. Configure CloudWatch Logs agents to publish data to an Amazon Kinesis Data Firehose stream in the logging AWS account, use an AWS Lambda function to read messages from the stream and push messages to Data Firehose, and persist the data in Amazon S3.

C Reference: https://noise.getoto.net/2018/03/03/central-logging-in-multi-account-environments/

An AWS IAM policy's Statement element comprises an array of individual statements. Each individual statement is encased in braces { } as a(n) _______ block. A. XML B. JavaScript C. JSON D. AJAX

C The Statement element of an IAM policy contains an array of individual statements. Each individual statement is a JSON block enclosed in braces { }. Reference: http://docs.aws.amazon.com/IAM/latest/UserGuide/AccessPolicyLanguage_ElementDescriptions.html
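To make the structure concrete (the actions and bucket name are illustrative), a Statement element holding two individual JSON blocks:

    import json

    policy = {
        "Version": "2012-10-17",
        # The Statement element is an array; each individual statement
        # is a JSON block enclosed in braces { }.
        "Statement": [
            {"Effect": "Allow", "Action": "s3:GetObject",
             "Resource": "arn:aws:s3:::example-bucket/*"},  # illustrative bucket
            {"Effect": "Deny", "Action": "s3:DeleteObject",
             "Resource": "arn:aws:s3:::example-bucket/*"},
        ],
    }
    print(json.dumps(policy, indent=2))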

How many slices does a dw2.8xlarge node have in Amazon Redshift? A. 16 B. 8 C. 32 D. 2

C The disk storage for a compute node in Amazon Redshift is divided into a number of slices, equal to the number of processor cores on the node. For example, each DW1.XL compute node has two slices, and each DW2.8XL compute node has 32 slices. Reference: http://docs.aws.amazon.com/redshift/latest/dg/t_Distributing_data.html

True or False: Redis is supported by Amazon ElastiCache. A. True, ElastiCache supports the Redis key-value store, but with limited functionalities. B. False, ElastiCache does not support the Redis key-value store. C. True, ElastiCache supports the Redis key-value store. D. False, ElastiCache supports the Redis key-value store only if you are in a VPC environment.

C This is true. ElastiCache supports two open-source in-memory caching engines: 1. Memcached - a widely adopted memory object caching system. ElastiCache is protocol compliant with Memcached, so popular tools that you use today with existing Memcached environments will work seamlessly with the service. 2. Redis - a popular open-source in-memory key-value store that supports data structures such as sorted sets and lists. ElastiCache supports master/slave replication and Multi-AZ, which can be used to achieve cross-AZ redundancy. Reference: https://aws.amazon.com/elasticache/

Which of the following configurations should be utilized when I/O speed is more essential than fault tolerance? A. SPAN 10 B. RAID 1 C. RAID 0 D. NFS 1

C When I/O performance is more important than fault tolerance, the RAID 0 configuration must be used; for example, as in a heavily used database (where data replication is already set up separately). Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/raid-config.html

When you build a table with a hash-and-range key in DynamoDB. A. You must define one or more Local secondary indexes on that table B. You must define one or more Global secondary indexes on that table C. You can optionally define one or more secondary indexes on that table D. You must define one or more secondary indexes on that table

C When you create a table with a hash-and-range key, you can optionally define one or more secondary indexes on that table. A secondary index lets you query the data in the table using an alternate key, in addition to queries against the primary key. Reference: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DataModel.html

When troubleshooting AWS CloudFormation, you must check the cloud-init and cfn logs for Amazon EC2 issues. Identify the directory in which these logs are stored. A. /var/opt/log/ec2 B. /var/log/lastlog C. /var/log/ D. /var/log/ec2

C When you use AWS CloudFormation, you might encounter issues when you create, update, or delete AWS CloudFormation stacks. For Amazon EC2 issues, view the cloud-init and cfn logs. These logs are published on the Amazon EC2 instance in the /var/log/ directory. These logs capture processes and command outputs while AWS CloudFormation is setting up your instance. For Windows, view the EC2Config service and cfn logs in %ProgramFiles%\Amazon\EC2ConfigService and C:\cfn\log. You can also configure your AWS CloudFormation template so that the logs are published to Amazon CloudWatch, which displays logs in the AWS Management Console so you don't have to connect to your Amazon EC2 instance. Reference: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/troubleshooting.html

When applying policy keys in AWS Direct Connect, if you use ________ and the request originates from an Amazon EC2 instance, the instance's public IP address is checked to determine whether access is permitted. A. aws:SecureTransport B. aws:EpochIP C. aws:SourceIp D. aws:CurrentTime

C When implementing the policy keys in AWS Direct Connect, if you use aws:SourceIp and the request comes from an Amazon EC2 instance, the instance's public IP address is evaluated to determine if access is allowed. Reference: http://docs.aws.amazon.com/directconnect/latest/UserGuide/using_iam.html

You're creating an IAM policy and want to establish permissions for a role. Which of the configuration formats listed below should you use? A. An XML document written in the IAM Policy Language B. An XML document written in a language of your choice C. A JSON document written in the IAM Policy Language D. JSON document written in a language of your choice

C You define the permissions for a role in an IAM policy. An IAM policy is a JSON document written in the IAM Policy Language. Reference: http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_terms-and-concepts.html

In two Regions, us-east-1 and eu-west-1, a corporation implemented a three-tier web application. The application must be operational in both Regions concurrently. The application's database layer utilizes a single Amazon Aurora database globally, with the master located in us-east-1 and a read replica located in eu-west-1. A VPN connects the two Regions. The firm wants to guarantee that the application stays online even if all of the application's components fail at the Region level. The application may run in read-only mode for up to one hour. The corporation intends to create two separate Amazon Route 53 record sets, one for each Region. How should the business complete the setup to satisfy its needs while ensuring the application's end users experience the least amount of latency possible? (Select two.) A. Use failover routing and configure the us-east-1 record set as primary and the eu-west-1 record set as secondary. Configure an HTTP health check for the web application in us-east-1, and associate it to the us-east-1 record set. B. Use weighted routing and configure each record set with a weight of 50. Configure an HTTP health check for each region, and attach it to the record set for that region. C. Use latency-based routing for both record sets. Configure a health check for each region and attach it to the record set for that region. D. Configure an Amazon CloudWatch alarm for the health checks in us-east-1, and have it invoke an AWS Lambda function that promotes the read replica in eu-west-1. E. Configure Amazon RDS event notifications to react to the failure of the database in us-east-1 by invoking an AWS Lambda function that promotes the read replica in eu-west-1.

CE

A firm uses Amazon API Gateway, Amazon DynamoDB, and AWS Lambda to run a blog post application on AWS. At the moment, the application does not utilize API keys to authorize requests. The API model is as follows: GET /posts/[postid] to get detailed information on a post. GET /users/[userid] to get user information. GET /comments/[commentid] to get detailed information about a comment. The organization has observed that customers are actively debating topics in the comments section and wants to improve user engagement by highlighting comments as they arrive in real time. Which design should be adopted to enhance user experience and decrease comment latency? A. Use edge-optimized API with Amazon CloudFront to cache API responses. B. Modify the blog application code to request GET /comments/[commentid] every 10 seconds. C. Use AWS AppSync and leverage WebSockets to deliver comments. D. Change the concurrency limit of the Lambda functions to lower the API response time.

D

Every day, a photo-sharing and publishing firm receives between 10,000 and 150,000 photographs. The firm obtains the photographs from a variety of providers and registered users. The organization is migrating to AWS and wants to augment the existing metadata via the use of Amazon Rekognition. An example of the additional metadata: a list of celebrities [name of the personality], wearing [color], looking [happy, sad], near [location, for example the Eiffel Tower in Paris]. The organization moved existing picture data to Amazon S3 as part of the cloud migration initiative and instructed consumers to upload photographs directly to Amazon S3. What actions should the Solutions Architect take to ensure that these criteria are met? A. Trigger AWS Lambda based on an S3 event notification to create additional metadata using Amazon Rekognition. Use Amazon DynamoDB to store the metadata and Amazon ES to create an index. Use a web front-end to provide search capabilities backed by Amazon ES. B. Use Amazon Kinesis to stream data based on an S3 event. Use an application running in Amazon EC2 to extract metadata from the images. Then store the data on Amazon DynamoDB and Amazon CloudSearch and create an index. Use a web front-end with search capabilities backed by CloudSearch. C. Start an Amazon SQS queue based on S3 event notifications. Then have Amazon SQS send the metadata information to Amazon DynamoDB. An application running on Amazon EC2 extracts data from Amazon Rekognition using the API and adds data to DynamoDB and Amazon ES. Use a web front-end to provide search capabilities backed by Amazon ES. D. Trigger AWS Lambda based on an S3 event notification to create additional metadata using Amazon Rekognition. Use Amazon RDS MySQL Multi-AZ to store the metadata information and use Lambda to create an index. Use a web front-end with search capabilities backed by Lambda.

D
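Common to the Lambda-based options (A and D), a sketch of the S3-triggered enrichment step (the DynamoDB table name is hypothetical; error handling is omitted):

    import urllib.parse
    import boto3

    rekognition = boto3.client("rekognition")
    table = boto3.resource("dynamodb").Table("PhotoMetadata")  # hypothetical table

    def handler(event, context):
        # Invoked by an S3 event notification for each uploaded photo.
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
            image = {"S3Object": {"Bucket": bucket, "Name": key}}

            labels = rekognition.detect_labels(Image=image, MaxLabels=10)
            celebs = rekognition.recognize_celebrities(Image=image)

            table.put_item(Item={
                "photo_id": key,
                "labels": [l["Name"] for l in labels["Labels"]],
                "celebrities": [c["Name"] for c in celebs["CelebrityFaces"]],
            })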

Personally identifiable information is logged by a financial services organization in its application logs, which are saved in Amazon S3. The log files must be encrypted at rest to comply with regulatory compliance standards. The security team has specified that the CMK key material be generated using the company's on-premises hardware security modules (HSMs). What steps should the solutions architect take to ensure compliance with these requirements? A. Create an AWS CloudHSM cluster. Create a new CMK in AWS KMS using AWS_CloudHSM as the source for the key material and an origin of AWS_CLOUDHSM. Enable automatic key rotation on the CMK with a duration of 1 year. Configure a bucket policy on the logging bucket that disallows uploads of unencrypted data and requires that the encryption source be AWS KMS. B. Provision an AWS Direct Connect connection, ensuring there is no overlap of the RFC 1918 address space between on-premises hardware and the VPCs. Configure an AWS bucket policy on the logging bucket that requires all objects to be encrypted. Configure the logging application to query the on-premises HSMs from the AWS environment for the encryption key material, and create a unique CMK for each logging event. C. Create a CMK in AWS KMS with no key material and an origin of EXTERNAL. Import the key material generated from the on-premises HSMs into the CMK using the public key and import token provided by AWS. Configure a bucket policy on the logging bucket that disallows uploads of non-encrypted data and requires that the encryption source be AWS KMS. D. Create a new CMK in AWS KMS with AWS-provided key material and an origin of AWS_KMS. Disable this CMK, and overwrite the key material with the key material from the on-premises HSM using the public key and import token provided by AWS. Re-enable the CMK. Enable automatic key rotation on the CMK with a duration of 1 year. Configure a bucket policy on the logging bucket that disallows uploads of non-encrypted data and requires that the encryption source be AWS KMS.

C Key material can only be imported into a CMK created with an origin of EXTERNAL; KMS does not allow overwriting AWS-provided key material, and automatic rotation is not supported for imported key material.
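A sketch of the import flow (the HSM wrapping step happens outside AWS and is represented here by a hypothetical helper):

    import boto3

    kms = boto3.client("kms")

    # 1. Create a CMK with no key material and an origin of EXTERNAL.
    key_id = kms.create_key(
        Origin="EXTERNAL",
        Description="CMK backed by on-premises HSM key material",
    )["KeyMetadata"]["KeyId"]

    # 2. Fetch the public wrapping key and import token from AWS KMS.
    params = kms.get_parameters_for_import(
        KeyId=key_id,
        WrappingAlgorithm="RSAES_OAEP_SHA_256",
        WrappingKeySpec="RSA_2048",
    )

    # 3. Wrap the key material on the on-premises HSM using
    #    params["PublicKey"] -- performed outside AWS (hypothetical helper).
    encrypted_key_material = wrap_on_hsm(params["PublicKey"])

    # 4. Import the wrapped key material into the CMK.
    kms.import_key_material(
        KeyId=key_id,
        ImportToken=params["ImportToken"],
        EncryptedKeyMaterial=encrypted_key_material,
        ExpirationModel="KEY_MATERIAL_DOES_NOT_EXPIRE",
    )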

Which of the following assertions is accurate in the context of AWS CloudFormation? A. Actual resource names are a combination of the resource ID, stack, and logical resource name. B. Actual resource name is the stack resource name. C. Actual resource name is the logical resource name. D. Actual resource names are a combination of the stack and logical resource name.

D In AWS CloudFormation, actual resource names are a combination of the stack and logical resource name. This allows multiple stacks to be created from a template without fear of name collisions between AWS resources. Reference: https://aws.amazon.com/cloudformation/faqs/

A projection in DynamoDB is__________. A. systematic transformation of the latitudes and longitudes of the locations inside your table B. importing data from your file to a table C. exporting data from a table to your file D. the set of attributes that is copied from a table into a secondary index

D In DynamoDB, a projection is the set of attributes that is copied from a table into a secondary index. Reference: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GSI.html

What bandwidths are presently supported by AWS Direct Connect? A. 10Mbps and 100Mbps B. 10Gbps and 100Gbps C. 100Mbps and 1Gbps D. 1Gbps and 10 Gbps

D AWS Direct Connect currently supports 1 Gbps and 10 Gbps. Reference: http://docs.aws.amazon.com/directconnect/latest/UserGuide/Welcome.html

When does an AWS Data Pipeline end the computing resources maintained by the AWS Data Pipeline? A. AWS Data Pipeline terminates AWS Data Pipeline-managed compute resources every 2 hours. B. When the final activity that uses the resources is running C. AWS Data Pipeline terminates AWS Data Pipeline-managed compute resources every 12 hours. D. When the final activity that uses the resources has completed successfully or failed

D Compute resources will be provisioned by AWS Data Pipeline when the first activity for a scheduled time that uses those resources is ready to run, and those instances will be terminated when the final activity that uses the resources has completed successfully or failed. Reference: https://aws.amazon.com/datapipeline/faqs/

A user is attempting to create a vault in Amazon Glacier and wants to enable notifications. Which of the following settings can the user select when enabling notifications from the AWS console? A. Glacier does not support the AWS console B. Archival Upload Complete C. Vault Upload Job Complete D. Vault Inventory Retrieval Job Complete

D From the AWS console, the user can configure notifications to be sent to Amazon Simple Notification Service (SNS). The user can select specific jobs that trigger a notification on completion, such as Vault Inventory Retrieval Job Complete and Archive Retrieval Job Complete. Reference: http://docs.aws.amazon.com/amazonglacier/latest/dev/configuring-notifications-console.html
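A minimal boto3 sketch of the same configuration (the vault name and SNS topic ARN are placeholders); the Events values are the API-level names of the two job-completion notifications:

```python
import boto3

glacier = boto3.client("glacier")

# Notify an SNS topic whenever an archive-retrieval or
# inventory-retrieval job for this vault completes.
glacier.set_vault_notifications(
    vaultName="my-vault",
    vaultNotificationConfig={
        "SNSTopic": "arn:aws:sns:us-east-1:123456789012:glacier-jobs",
        "Events": ["ArchiveRetrievalCompleted", "InventoryRetrievalCompleted"],
    },
)
```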

The default cache port in Amazon ElastiCache is: A. for Memcached 11210 and for Redis 6380. B. for Memcached 11211 and for Redis 6380. C. for Memcached 11210 and for Redis 6379. D. for Memcached 11211 and for Redis 6379.

D In Amazon ElastiCache, you can specify a new port number for your cache cluster, which by default is 11211 for Memcached and 6379 for Redis. Reference: http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/GettingStarted.AuthorizeAccess.html
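A short boto3 sketch, with a hypothetical cluster ID and node type, showing where the port is set at creation time; omitting Port falls back to the defaults quoted above:

```python
import boto3

elasticache = boto3.client("elasticache")

# Single-node Redis cluster; Port defaults to 6379 for Redis
# (11211 for Memcached) when not specified.
elasticache.create_cache_cluster(
    CacheClusterId="demo-redis",
    Engine="redis",
    CacheNodeType="cache.t3.micro",
    NumCacheNodes=1,
    Port=6379,
)
```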

Will you be able to use the standard Amazon S3 APIs to retrieve EC2 snapshots? A. Yes, you will be able to access using S3 APIs if you have chosen the snapshot to be stored in S3. B. No, snapshots are only available through the Amazon EBS APIs. C. Yes, you will be able to access them using S3 APIs as all snapshots are stored in S3. D. No, snapshots are only available through the Amazon EC2 APIs.

D No, snapshots are only available through the Amazon EC2 APIs. Reference: https://aws.amazon.com/ec2/faqs/
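For example, listing snapshots goes through the EC2 API in boto3 (a sketch; OwnerIds=["self"] limits the output to snapshots owned by the calling account):

```python
import boto3

ec2 = boto3.client("ec2")

# Snapshots are enumerated through the EC2 API, not S3; the backing
# S3 storage is not directly addressable.
for snap in ec2.describe_snapshots(OwnerIds=["self"])["Snapshots"]:
    print(snap["SnapshotId"], snap["VolumeId"], snap["StartTime"])
```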

A user is attempting to create a PIOPS volume. What is the maximum PIOPS-to-volume-size ratio that the user should configure? A. 5 B. 10 C. 20 D. 30

D Provisioned IOPS volumes are designed to meet the needs of I/O-intensive workloads, particularly database workloads that are sensitive to storage performance and consistency in random access I/O throughput. A provisioned IOPS volume can range in size from 10 GB to 1 TB and the user can provision up to 4000 IOPS per volume. The ratio of IOPS provisioned to the volume size requested can be a maximum of 30; for example, a volume with 3000 IOPS must be at least 100 GB. Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html
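A quick boto3 sketch of the arithmetic (the Availability Zone is a placeholder): at a 30:1 ceiling, 3000 provisioned IOPS requires a volume of at least 100 GB:

```python
import boto3

ec2 = boto3.client("ec2")

iops = 3000
# Maximum IOPS-to-size ratio of 30:1 -> 3000 / 30 = 100 GB minimum.
min_size_gb = iops // 30

ec2.create_volume(
    AvailabilityZone="us-east-1a",
    VolumeType="io1",
    Iops=iops,
    Size=min_size_gb,
)
```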

A business maintains an on-premises legacy application. The organization wishes to acquire meaningful insights from application logs to improve the program's dependability. The following needs have been communicated to a Solutions Architect: ✑ Utilize AWS to aggregate logs. ✑ Analyze logs for problems automatically. ✑ Notify the Operations team when a preset threshold of errors is exceeded. Which solution satisfies the criteria? A. Install Amazon Kinesis Agent on servers, send logs to Amazon Kinesis Data Streams and use Amazon Kinesis Data Analytics to identify errors, create an Amazon CloudWatch alarm to notify the Operations team of errors B. Install an AWS X-Ray agent on servers, send logs to AWS Lambda and analyze them to identify errors, use Amazon CloudWatch Events to notify the Operations team of errors. C. Install Logstash on servers, send logs to Amazon S3 and use Amazon Athena to identify errors, use sendmail to notify the Operations team of errors. D. Install the Amazon CloudWatch agent on servers, send logs to Amazon CloudWatch Logs and use metric filters to identify errors, create a CloudWatch alarm to notify the Operations team of errors.

D Reference: https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html https://docs.aws.amazon.com/kinesis-agent-windows/latest/userguide/what-is-kinesis-agent-windows.html
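A sketch of the metric-filter half of option D in boto3 (the log group name, namespace, threshold, and SNS topic ARN are all hypothetical):

```python
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

# Count ERROR lines in the log group shipped by the CloudWatch agent.
logs.put_metric_filter(
    logGroupName="/legacy-app/application",
    filterName="error-count",
    filterPattern="ERROR",
    metricTransformations=[{
        "metricName": "ApplicationErrors",
        "metricNamespace": "LegacyApp",
        "metricValue": "1",
    }],
)

# Alarm the Operations team (via an existing SNS topic) once the
# error count crosses the preset threshold.
cloudwatch.put_metric_alarm(
    AlarmName="legacy-app-errors",
    Namespace="LegacyApp",
    MetricName="ApplicationErrors",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=10,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-team"],
)
```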

Every five minutes, a corporation uploads clickstream data files to Amazon S3. Each file is processed and loaded into an Amazon RDS database using a Python script that runs as a cron job once a day on an Amazon EC2 instance. The cron task processes 24 hours of data in 15 to 30 minutes. The data users request that the data be made accessible immediately. Which option would provide the required result? A. Increase the size of the instance to speed up processing and update the schedule to run once an hour. B. Convert the cron job to an AWS Lambda function and trigger this new function using a cron job on an EC2 instance. C. Convert the cron job to an AWS Lambda function and schedule it to run once an hour using Amazon CloudWatch Events. D. Create an AWS Lambda function that runs when a file is delivered to Amazon S3 using S3 event notifications.

D Reference: https://docs.aws.amazon.com/lambda/latest/dg/with-s3.html
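A minimal sketch of such a handler; load_into_rds is a hypothetical placeholder for the company's existing Python processing logic:

```python
import urllib.parse

def load_into_rds(bucket: str, key: str) -> None:
    """Hypothetical stand-in for the existing processing script."""
    ...

# Invoked by an S3 event notification each time a clickstream file
# lands, so data is processed within seconds instead of once a day.
def lambda_handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        load_into_rds(bucket, key)
```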

A business hosts an application on AWS. An AWS Lambda function authenticates against an Amazon RDS for MySQL database instance using credentials. According to a security risk assessment, these credentials are not cycled often enough. Additionally, the database instance's encryption at rest is disabled. Both of these concerns must be fixed, according to the security team. Which security mitigation method should a solutions architect recommend? A. Configure the Lambda function to store and retrieve the database credentials in AWS Secrets Manager and enable rotation of the credentials. Take a snapshot of the DB instance and encrypt a copy of that snapshot. Replace the DB instance with a new DB instance that is based on the encrypted snapshot. B. Enable IAM DB authentication on the DB instance. Grant the Lambda execution role access to the DB instance. Modify the DB instance and enable encryption. C. Enable IAM DB authentication on the DB instance. Grant the Lambda execution role access to the DB instance. Create an encrypted read replica of the DB instance. Promote the encrypted read replica to be the new primary node. D. Configure the Lambda function to store and retrieve the database credentials as encrypted AWS Systems Manager Parameter Store parameters. Create another Lambda function to automatically rotate the credentials. Create an encrypted read replica of the DB instance. Promote the encrypted read replica to be the new primary node.

A Storing the credentials in AWS Secrets Manager with rotation enabled addresses the credential-cycling finding, and restoring an encrypted copy of a snapshot is the supported way to add encryption at rest to an existing unencrypted DB instance. An encrypted read replica cannot be created from an unencrypted primary, which rules out options C and D. Reference: https://docs.aws.amazon.com/secretsmanager/latest/userguide/enable-rotation-rds.html
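A sketch of the Secrets Manager read path from the Lambda function (the secret name and JSON field names are hypothetical); with rotation enabled, each invocation picks up the current credentials:

```python
import json
import boto3

secrets = boto3.client("secretsmanager")

def get_db_credentials():
    # Fetch the current version of the rotated secret at invocation time.
    secret = secrets.get_secret_value(SecretId="prod/mysql/app")
    creds = json.loads(secret["SecretString"])
    return creds["username"], creds["password"]
```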

A firm is using Elastic Beanstalk to create a highly scalable application. The firm utilizes ELB and RDS in conjunction with a VPC that has both public and private subnets. Which of the configurations listed below is not required in this scenario? A. Set up RDS in a private subnet and ELB in a public subnet. B. The configuration must have public and private subnets in the same AZ. C. The configuration must have two private subnets in separate AZs. D. The EC2 instance should have a public IP assigned to it.

D Amazon Virtual Private Cloud (Amazon VPC) lets the user define a virtual networking environment in a private, isolated section of the AWS cloud, with complete control over that environment. An organization implementing a scalable, secure application with RDS, VPC, and ELB should set up RDS in a private subnet and the ELB in a public subnet. Because RDS requires a DB subnet group, the organization needs two private subnets in separate Availability Zones, and the ELB needs public and private subnets in the same AZs. It is not required that instances have a public IP assigned: the instances can sit in a private subnet with an appropriate routing configuration. Reference: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/vpc-rds.html

A user is creating a snapshot of an EBS volume. Which of the following statements about EBS snapshots is incorrect? A. It's incremental B. It is a point-in-time backup of the EBS volume C. It can be used to create an AMI D. It is stored in the same AZ as the volume

D An EBS snapshot is a point-in-time backup of the EBS volume. Snapshots are incremental and are always specific to the region, never to a single AZ. Hence the statement "It is stored in the same AZ as the volume" is incorrect. Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSSnapshots.html

A business is implementing a web application using the JEE stack. The program makes use of the JBoss application server and the MySQL database. The application includes a logging module that records all actions that occur when a JEE application's business function is invoked. Due to the enormous size of the log file, the logging activity takes some time. Which of the following solutions will assist the application in establishing a scalable infrastructure? A. Host the log files on EBS with PIOPS which will have higher I/O. B. Host logging and the app server on separate servers such that they are both in the same zone. C. Host logging and the app server on the same instance so that the network latency will be shorter. D. Create a separate module for logging and using SQS compartmentalize the module such that all calls to logging are asynchronous.

D The organization can always launch multiple EC2 instances in the same region across multiple AZs for HA and DR. AWS architecture practice recommends compartmentalizing functionality so that components run in parallel without affecting the performance of the main application. In this scenario logging takes a long time because of the large log file, so the organization should split logging into a separate module and invoke it asynchronously, for example through a queue, as in the sketch below. The application can then scale as required, and its performance will not bear the impact of logging. Reference: http://www.awsarchitectureblog.com/2014/03/aws-and-compartmentalization.html
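A minimal sketch of such an asynchronous logging call using SQS (the queue URL and message shape are hypothetical); the business function returns as soon as the message is enqueued, and a separate consumer drains the queue and writes the log file at its own pace:

```python
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/app-logs"  # hypothetical

def log_async(event_name, payload):
    # Enqueue the log entry and return immediately; the slow file
    # write happens in the decoupled consumer, not the request path.
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({"event": event_name, "data": payload}),
    )
```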

What is the maximum number of data points that a user may include in a single PutMetricData request to CloudWatch over HTTP? A. 30 B. 50 C. 10 D. 20

D The size of a CloudWatch PutMetricData request is limited to 8 KB for HTTP GET requests and 40 KB for HTTP POST requests. The user can include a maximum of 20 data points in one PutMetricData request. Reference: http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/cloudwatch_concepts.html
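A small boto3 sketch that respects the 20-point ceiling by chunking larger batches (the namespace and metric name are caller-supplied):

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

def put_points(namespace, metric_name, values):
    # PutMetricData accepts at most 20 data points per request,
    # so larger batches are split into chunks of 20.
    data = [{"MetricName": metric_name, "Value": v} for v in values]
    for i in range(0, len(data), 20):
        cloudwatch.put_metric_data(
            Namespace=namespace,
            MetricData=data[i:i + 20],
        )
```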

The access policy and the trust policy are the two policies that you can attach to an IAM role. The trust policy specifies who may assume the role; for example, it grants that permission to AWS Lambda by adding the ________ action for the AWS Lambda account principal. A. aws:AssumeAdmin B. lambda:InvokeAsync C. sts:InvokeAsync D. sts:AssumeRole

D The two policies that you attach to an IAM role are the access policy and the trust policy. Remember that adding an account to the trust policy of a role is only half of establishing the trust relationship. By default, no users in the trusted accounts can assume the role until the administrator for that account grants the users the permission to assume the role by adding the Amazon Resource Name (ARN) of the role to an Allow element for the sts:AssumeRole action. Reference: http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_manage_modify.html
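A boto3 sketch of such a trust policy for the AWS Lambda service principal (the role name is hypothetical):

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy: lets the Lambda service assume the role via sts:AssumeRole.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="demo-lambda-role",  # hypothetical name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
```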

