Practice Questions

1-50: You have a set of EC2 instances hosted in AWS. You have created a role named DemoRole and attached a policy to it, but you are unable to use that role with an instance. Why is this the case? Please select : A. You need to create an instance profile and associate it with that specific role. B. You are not able to associate an IAM role with an instance. C. You won't be able to use that role with an instance unless you also create a user and associate it with that specific role. D. You won't be able to use that role with an instance unless you also create a user group and associate it with that specific role.

Answer - A An instance profile is a container for an IAM role that you can use to pass role information to an EC2 instance when the instance starts. Option B is invalid because you can associate an IAM role with an instance. Options C and D are invalid because creating users or user groups is not a prerequisite.
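As a rough sketch of the fix (the profile name and instance ID below are placeholders, not from the question), the instance profile can be created and attached with boto3:

```python
import boto3

iam = boto3.client("iam")
ec2 = boto3.client("ec2")

# Create an instance profile and put the existing role inside it.
iam.create_instance_profile(InstanceProfileName="DemoProfile")
iam.add_role_to_instance_profile(InstanceProfileName="DemoProfile",
                                 RoleName="DemoRole")

# Associate the profile with a running instance (placeholder instance ID).
ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": "DemoProfile"},
    InstanceId="i-0123456789abcdef0",
)
```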

4-11: You want to use CodeDeploy to deploy code that is hosted in your GitHub repository. Which of the following additional services can help fulfil this requirement? Please select : A. Use the CodePipeline service B. Use the CodeCommit service C. Use the CodeBatch service D. Use the SQS service

Answer - A The AWS Documentation mentions the following: AWS CodePipeline is a continuous delivery service you can use to model, visualize, and automate the steps required to release your software. You can quickly model and configure the different stages of a software release process. AWS CodePipeline automates the steps required to release your software changes continuously.

4-55: Your application requires long-term storage for backups and other data that you need to keep readily available but with lower cost. Which S3 storage option should you use? Please select : A. Amazon S3 Standard - Infrequent Access B. S3 Standard C. Glacier D. Reduced Redundancy Storage

Answer - A The AWS Documentation mentions the following: Amazon S3 Standard - Infrequent Access (Standard - IA) is an Amazon S3 storage class for data that is accessed less frequently, but requires rapid access when needed. Standard - IA offers the high durability, throughput, and low latency of Amazon S3 Standard, with a low per GB storage price and per GB retrieval fee.

2-51: Your development team is using access keys to develop an application that has access to S3 and DynamoDB. A new security policy has outlined that the credentials should not be older than 2 months and should be rotated. How can you achieve this? Please select : A. Use the application to rotate the keys every 2 months via the SDK B. Use a script which will query the date the keys were created. If older than 2 months, delete them and recreate new keys C. Delete the user associated with the keys after every 2 months. Then recreate the user again. D. Delete the IAM Role associated with the keys after every 2 months. Then recreate the IAM Role again.

Answer - B One can use the CLI command list-access-keys to get the access keys. This command also returns the "CreateDate" of the keys. If the CreateDate is older than 2 months, the keys can be deleted and recreated. The list-access-keys CLI command returns information about the access key IDs associated with the specified IAM user. If there are none, the action returns an empty list.
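A minimal sketch of such a rotation script, assuming boto3 and a placeholder IAM user named dev-user:

```python
import boto3
from datetime import datetime, timedelta, timezone

iam = boto3.client("iam")
cutoff = datetime.now(timezone.utc) - timedelta(days=60)

for key in iam.list_access_keys(UserName="dev-user")["AccessKeyMetadata"]:
    if key["CreateDate"] < cutoff:  # key is roughly two months old
        iam.delete_access_key(UserName="dev-user",
                              AccessKeyId=key["AccessKeyId"])
        new_key = iam.create_access_key(UserName="dev-user")["AccessKey"]
        # Hand new_key["AccessKeyId"] / new_key["SecretAccessKey"] to the
        # application through a secure channel.
```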

3-76: You have a number of CloudFormation stacks in your IT organization. Which of the following commands will help you see all the CloudFormation stacks that have a completed status? Please select : A. describe-stacks B. list-stacks C. stacks-complete D. list-templates

Answer - B The following is the description of the list-stacks command: Returns the summary information for stacks whose status matches the specified StackStatusFilter. Summary information for stacks that have been deleted is kept for 90 days after the stack is deleted. If no stack-status-filter is specified, summary information for all stacks is returned (including existing stacks and stacks that have been deleted).
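For illustration, the same filter can be expressed with the CLI (aws cloudformation list-stacks --stack-status-filter CREATE_COMPLETE) or with boto3; the two statuses below are one reasonable definition of "completed":

```python
import boto3

cfn = boto3.client("cloudformation")

resp = cfn.list_stacks(
    StackStatusFilter=["CREATE_COMPLETE", "UPDATE_COMPLETE"])
for stack in resp["StackSummaries"]:
    print(stack["StackName"], stack["StackStatus"])
```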

2-6: For AWS Auto Scaling, what is the first transition state an existing instance enters after leaving Standby state? Please select : A. Detaching B. Terminating:Wait C. Pending D. EnteringStandby

Answer - C An instance that is moved out of the Standby state first enters the Pending state before it is put back in service.

4-31: You are managing an application that uses Go as the front end, MongoDB for document management, and is hosted on a relevant web server. You pre-bake AMIs with the latest version of the web server, then use the User Data section to set up the application. You now have a change to the underlying operating system version and need to deploy it accordingly. How can this be done in the easiest way possible? Please select : A. Create a new EBS Volume with the relevant OS patches and attach it to the EC2 Instance. B. Create a CloudFormation stack with the new AMI and then deploy the application accordingly. C. Create a new pre-baked AMI with the new OS and use the User Data section to deploy the application. D. Create an OpsWorks stack with the new AMI and then deploy the application accordingly.

Answer - C The best approach in this scenario is to continue the existing deployment process: create a new pre-baked AMI with the new OS version and then use the User Data section to deploy the application.

2-33: If I want CloudFormation stack status updates to show up in a continuous delivery system in as close to real time as possible, how should I achieve this? Please select : A. Use a long-poll on the Resources object in your CloudFormation stack and display those state changes in the UI for the system. B. Use a long-poll on the ListStacks API call for your CloudFormation stack and display those state changes in the UI for the system. C. Subscribe your continuous delivery system to an SNS topic that you also tell your CloudFormation stack to publish events into. D. Subscribe your continuous delivery system to an SQS queue that you also tell your CloudFormation stack to publish events into

Answer - C You can monitor the progress of a stack update by viewing the stack's events. The console's Events tab displays each major step in the creation and update of the stack, sorted by the time of each event with the latest events on top. The start of the stack update process is marked with an UPDATE_IN_PROGRESS event for the stack.
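A minimal sketch of wiring this up with boto3 (topic name, endpoint URL, stack name, and template URL are all placeholders):

```python
import boto3

sns = boto3.client("sns")
cfn = boto3.client("cloudformation")

# Create a topic and subscribe the delivery system's HTTPS endpoint.
topic_arn = sns.create_topic(Name="stack-events")["TopicArn"]
sns.subscribe(TopicArn=topic_arn, Protocol="https",
              Endpoint="https://cd.example.com/cfn-events")

# Tell CloudFormation to publish stack events to the topic.
cfn.create_stack(
    StackName="my-stack",
    TemplateURL="https://s3.amazonaws.com/my-bucket/template.json",
    NotificationARNs=[topic_arn],
)
```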

1-55: You work for an insurance company and are responsible for the day-to-day operations of your company's online quote system used to provide insurance quotes to members of the public. Your company wants to use the application logs generated by the system to better understand customer behavior. Industry regulations also require that you retain all application logs for the system indefinitely in order to investigate fraudulent claims in the future. You have been tasked with designing a log management system with the following requirements: - All log entries must be retained by the system, even during unplanned instance failure. - The customer insight team requires immediate access to the logs from the past 7 days. - The fraud investigation team requires access to all historic logs, but will wait up to 24 hours before these logs are available. How would you meet these requirements in a cost-effective manner? Choose three answers from the options below Please select : A. Configure your application to write logs to the instance's ephemeral disk, because this storage is free and has good write performance. Create a script that moves the logs from the instance to Amazon S3 once an hour. B. Write a script that is configured to be executed when the instance is stopped or terminated and that will upload any remaining logs on the instance to Amazon S3. C. Create an Amazon S3 lifecycle configuration to move log files from Amazon S3 to Amazon Glacier after seven days. D. Configure your application to write logs to the instance's default Amazon EBS boot volume, because this storage already exists. Create a script that moves the logs from the instance to Amazon S3 once an hour. E. Configure your application to write logs to a separate Amazon EBS volume with the "delete on termination" field set to false. Create a script that moves the logs from the instance to Amazon S3 once an hour. F. Create a housekeeping script that runs on a T2 micro instance managed by an Auto Scaling group for high availability. The script uses the AWS API to identify any unattached Amazon EBS volumes containing log files. Your housekeeping script will mount the Amazon EBS volume, upload all logs to Amazon S3, and then delete the volume.

Answer - C, E and F Since all logs need to be stored indefinitely, Glacier is the best option for this. One can use lifecycle rules to transition the data from S3 to Glacier. Lifecycle configuration enables you to specify the lifecycle management of objects in a bucket. The configuration is a set of one or more rules, where each rule defines an action for Amazon S3 to apply to a group of objects. These actions can be classified as follows: Transition actions - In which you define when objects transition to another storage class. For example, you may choose to transition objects to the STANDARD_IA (IA, for infrequent access) storage class 30 days after creation, or archive objects to the GLACIER storage class one year after creation. Expiration actions - In which you specify when the objects expire. Then Amazon S3 deletes the expired objects on your behalf.
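A sketch of the seven-day transition rule from option C, assuming boto3 and placeholder bucket/prefix names:

```python
import boto3

s3 = boto3.client("s3")

# Move objects under logs/ to Glacier seven days after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-log-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-logs",
            "Filter": {"Prefix": "logs/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 7, "StorageClass": "GLACIER"}],
        }]
    },
)
```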

3-75: What would you set in your CloudFormation template to fire up different instance sizes based on environment type? i.e. (If this is for prod, use m1.large instead of t1.micro) Please select : A. Outputs B. Resources C. Mappings D. Conditions

Answer - D The optional Conditions section includes statements that define when a resource is created or when a property is defined. For example, you can compare whether a value is equal to another value, and based on the result of that condition you can conditionally create resources or set resource properties such as the instance type.
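A trimmed template fragment showing the idea, built here as a Python dict (the AMI ID is a placeholder): an EnvType parameter feeds a condition, and Fn::If picks the instance type.

```python
import json

template = {
    "Parameters": {
        "EnvType": {"Type": "String", "AllowedValues": ["prod", "dev"]}
    },
    "Conditions": {
        "IsProd": {"Fn::Equals": [{"Ref": "EnvType"}, "prod"]}
    },
    "Resources": {
        "Server": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-12345678",  # placeholder
                "InstanceType": {"Fn::If": ["IsProd", "m1.large", "t1.micro"]},
            },
        }
    },
}
print(json.dumps(template, indent=2))
```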

2-13: You need to create a simple, holistic check for your system's general availability and uptime. Your system presents itself as an HTTP-speaking API. What is the simplest tool on AWS to achieve this with? Please select : A. Route53 Health Checks B. CloudWatch Health Checks C. AWS ELB Health Checks D. EC2 Health Checks

Answer - A Amazon Route 53 health checks monitor the health and performance of your web applications, web servers, and other resources. Each health check that you create can monitor one of the following: the health of a specified resource, such as a web server; the status of an Amazon CloudWatch alarm; or the status of other health checks.
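A minimal sketch of such a check with boto3 (domain, path, and thresholds are placeholder values):

```python
import uuid

import boto3

route53 = boto3.client("route53")

route53.create_health_check(
    CallerReference=str(uuid.uuid4()),  # idempotency token
    HealthCheckConfig={
        "Type": "HTTP",
        "FullyQualifiedDomainName": "api.example.com",
        "Port": 80,
        "ResourcePath": "/health",
        "RequestInterval": 30,  # seconds between checks
        "FailureThreshold": 3,  # failed checks before "unhealthy"
    },
)
```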

5-20: Which of the following resources is used in CloudFormation to create nested stacks? Please select : A. AWS::CloudFormation::Stack B. AWS::CloudFormation::Nested C. AWS::CloudFormation::NestedStack D. AWS::CloudFormation::StackNest

Answer - A The AWS Documentation mentions the following: A nested stack is a stack that you create within another stack by using the AWS::CloudFormation::Stack resource. With nested stacks, you deploy and manage all resources from a single stack. You can use outputs from one stack in the nested stack group as inputs to another stack in the group.
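For illustration, a parent template fragment (as a Python dict; the S3 URL and parameter are placeholders) that nests a child stack and surfaces one of its outputs:

```python
import json

parent = {
    "Resources": {
        "NetworkStack": {
            "Type": "AWS::CloudFormation::Stack",
            "Properties": {
                "TemplateURL":
                    "https://s3.amazonaws.com/my-bucket/network.json",
                "Parameters": {"VpcCidr": "10.0.0.0/16"},
            },
        }
    },
    "Outputs": {
        # Read an output of the nested stack from the parent.
        "VpcId": {"Value": {"Fn::GetAtt": ["NetworkStack", "Outputs.VpcId"]}}
    },
}
print(json.dumps(parent, indent=2))
```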

3-46: Which of the following are components of the AWS Data Pipeline service? Choose 2 answers from the options given below Please select : A. Pipeline definition B. Task Runner C. Task History D. Workflow Runner

Answer - A and B The AWS Documentation mentions the following on AWS Data Pipeline: The following components of AWS Data Pipeline work together to manage your data: A pipeline definition specifies the business logic of your data management. A pipeline schedules and runs tasks. You upload your pipeline definition to the pipeline, and then activate the pipeline. You can edit the pipeline definition for a running pipeline and activate the pipeline again for it to take effect. You can deactivate the pipeline, modify a data source, and then activate the pipeline again. When you are finished with your pipeline, you can delete it. Task Runner polls for tasks and then performs those tasks. For example, Task Runner could copy log files to Amazon S3 and launch Amazon EMR clusters. Task Runner is installed and runs automatically on resources created by your pipeline definitions. You can write a custom task runner application, or you can use the Task Runner application that is provided by AWS Data Pipeline.

5-6: When using EC2 instances with the CodeDeploy service, which of the following are some of the prerequisites to ensure that the EC2 instances can work with CodeDeploy? Choose 2 answers from the options given below Please select : A. Ensure an IAM role is attached to the instance so that it can work with the CodeDeploy service. B. Ensure the EC2 Instance is configured with Enhanced Networking C. Ensure the EC2 Instance is placed in the default VPC D. Ensure that the CodeDeploy agent is installed on the EC2 Instance

Answer - A and D The instance needs an IAM instance profile that grants access to the location (such as the S3 bucket) holding the application revision, and the CodeDeploy agent must be installed and running on the instance.

3-49: Which of the following are lifecycle events available in OpsWorks? Choose 3 answers from the options below Please select : A. Setup B. Decommission C. Deploy D. Shutdown

Answer - A, C and D The OpsWorks lifecycle events are Setup, Configure, Deploy, Undeploy, and Shutdown; Decommission is not one of them.

2-24: Your company needs to automate 3 layers of a large cloud deployment. You want to be able to track this deployment's evolution as it changes over time, and carefully control any alterations. What is a good way to automate a stack to meet these requirements? Please select : A. Use OpsWorks Stacks with three layers to model the layering in your stack. B. Use CloudFormation Nested Stack Templates, with three child stacks to represent the three logical layers of your cloud. C. Use AWS Config to declare a configuration set that AWS should roll out to your cloud. D. Use Elastic Beanstalk Linked Applications, passing the important DNS entries between layers using the metadata interface.

Answer - B As your infrastructure grows, common patterns can emerge in which you declare the same components in each of your templates. You can separate out these common components and create dedicated templates for them. That way, you can mix and match different templates but use nested stacks to create a single, unified stack. Nested stacks are stacks that create other stacks. To create nested stacks, use the AWS::CloudFormation::Stack resource in your template to reference other templates.

4-8: Which of the following is false when it comes to using the Elastic Load Balancer with OpsWorks stacks? Please select : A. You can attach only one load balancer to a layer. B. You can use either the Application or Classic Load Balancer with OpsWorks stacks. C. Each load balancer can handle only one layer. D. You need to create the load balancer beforehand and then attach it to the OpsWorks stack

Answer - B The AWS Documentation mentions the following: To use Elastic Load Balancing with a stack, you must first create one or more load balancers in the same region by using the Elastic Load Balancing console, CLI, or API. You should be aware of the following: · You can attach only one load balancer to a layer. · Each load balancer can handle only one layer. · AWS OpsWorks Stacks does not support Application Load Balancer. You can only use Classic Load Balancer with AWS OpsWorks Stacks.

2-71: Which of the following is the right sequence of initial steps in the deployment of application revisions using CodeDeploy? 1) Specify deployment configuration 2) Upload revision 3) Create application 4) Specify deployment group Please select : A. 3,2,1 and 4 B. 3,1,2 and 4 C. 3,4,1 and 2 D. 3,4,2 and 1

Answer - C You first create the application, then specify the deployment group, then the deployment configuration, and finally upload the revision.

2-48: You have deployed an application to AWS which makes use of Auto Scaling to launch new instances. You now want to change the instance type for the new instances. Which of the following is one of the action items to achieve this deployment? Please select : A. Use Elastic Beanstalk to deploy the new application with the new instance type B. Use CloudFormation to deploy the new application with the new instance type C. Create a new launch configuration with the new instance type D. Create new EC2 instances with the new instance type and attach them to the Auto Scaling Group

Answer - C The ideal way is to create a new launch configuration, attach it to the existing Auto Scaling group, and terminate the running instances. Option A is invalid because Elastic Beanstalk cannot launch new instances on demand in this way; since the current scenario requires Auto Scaling, this is not the ideal option. Option B is invalid because this would be a maintenance overhead; since you just have an Auto Scaling group, there is no need to create a whole CloudFormation template for this. Option D is invalid because the Auto Scaling group would still launch EC2 instances with the older launch configuration.
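A sketch of the swap with boto3 (the names and AMI ID are placeholders):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# New launch configuration with the new instance type.
autoscaling.create_launch_configuration(
    LaunchConfigurationName="app-lc-v2",
    ImageId="ami-12345678",
    InstanceType="m4.large",
)

# Point the existing group at it; instances launched from now on use the
# new type, while existing instances keep the old one until replaced.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="app-asg",
    LaunchConfigurationName="app-lc-v2",
)
```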

1-80: You are creating a new API for video game scores. Reads are 100 times more common than writes, and the top 1% of scores are read 100 times more frequently than the rest of the scores. What's the best design for this system, using DynamoDB? Please select : A. DynamoDB table with 100x higher read than write throughput, with CloudFront caching. B. DynamoDB table with roughly equal read and write throughput, with CloudFront caching. C. DynamoDB table with 100x higher read than write throughput, with ElastiCache caching. D. DynamoDB table with roughly equal read and write throughput, with ElastiCache caching.

Answer - D Because the 100x read ratio is mostly driven by a small subset, with caching, only a roughly equal number of reads to writes will miss the cache, since the supermajority will hit the top 1% scores. Knowing we need to set the values roughly equal when using caching, we select AWS ElastiCache, because CloudFront cannot directly cache DynamoDB queries, and ElastiCache is an excellent in-memory cache for database queries, rather than a distributed proxy cache for content delivery.
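A read-through cache sketch of this design, assuming the redis-py client, a reachable ElastiCache (Redis) endpoint, and a placeholder GameScores table keyed on PlayerId:

```python
import boto3
import redis  # assumes the redis-py package and an ElastiCache endpoint

table = boto3.resource("dynamodb").Table("GameScores")
cache = redis.Redis(host="my-cache.example.com", port=6379)

def get_score(player_id: str):
    # Hot items (the top 1% of scores) are served from the cache.
    cached = cache.get(player_id)
    if cached is not None:
        return cached.decode()
    # Cache miss: fall through to DynamoDB and populate the cache.
    item = table.get_item(Key={"PlayerId": player_id}).get("Item")
    if item is None:
        return None
    cache.setex(player_id, 300, str(item["Score"]))  # 5-minute TTL
    return str(item["Score"])
```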

3-77: If you're trying to configure an AWS Elastic Beanstalk worker tier for easy debugging if there are problems finishing queue jobs, what should you configure? Please select : A. Configure Rolling Deployments. B. Configure Enhanced Health Reporting. C. Configure Blue-Green Deployments. D. Configure a Dead Letter Queue.

Answer - D The AWS documentation mentions the following on dead-letter queues: Amazon SQS supports dead-letter queues. A dead-letter queue is a queue that other (source) queues can target for messages that can't be processed (consumed) successfully. You can set aside and isolate these messages in the dead-letter queue to determine why their processing doesn't succeed.
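A generic SQS sketch of attaching a dead-letter queue with boto3 (queue names and the receive count are placeholders; for an Elastic Beanstalk worker tier the same setting is exposed in the environment configuration):

```python
import json

import boto3

sqs = boto3.client("sqs")

dlq_url = sqs.create_queue(QueueName="worker-dlq")["QueueUrl"]
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq_url, AttributeNames=["QueueArn"])["Attributes"]["QueueArn"]

work_url = sqs.create_queue(QueueName="worker-queue")["QueueUrl"]
sqs.set_queue_attributes(
    QueueUrl=work_url,
    Attributes={"RedrivePolicy": json.dumps({
        "deadLetterTargetArn": dlq_arn,
        "maxReceiveCount": "5",  # move here after 5 failed receives
    })},
)
```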

1-72: For AWS Auto Scaling, what is the first transition state an instance enters after leaving steady state when scaling in due to health check failure or decreased load? Please select : A. Terminating B. Detaching C. Terminating:Wait D. EnteringStandby

Answer - A When an InService instance is scaled in, the first transition state it enters is Terminating.

3-57: Which of the following CLI commands is used to spin up new EC2 Instances? Please select : A. aws ec2 run-instances B. aws ec2 create-instances C. aws ec2 new-instances D. aws ec2 launch-instances

Answer - A The run-instances command launches the specified number of EC2 instances from an AMI; create-instances, new-instances, and launch-instances are not valid EC2 CLI commands.

2-15: You are creating an application which stores extremely sensitive financial information. All information in the system must be encrypted at rest and in transit. Which of these is a violation of this policy? Please select : A. ELB SSL termination. B. ELB Using Proxy Protocol v1. C. CloudFront Viewer Protocol Policy set to HTTPS redirection. D. Telling S3 to use AES256 on the server-side.

Answer - A If you use SSL termination, traffic between the load balancer and your servers is unencrypted, so the data is no longer encrypted in transit all the way to the backend, and your servers never know whether users used a more secure channel or not.

2-47: You need your CI to build AMIs with code pre-installed on the images on every new code push. You need to do this as cheaply as possible. How do you do this? Please select : A. Bid on spot instances just above the asking price as soon as new commits come in, perform all instance configuration and setup, then create an AMI based on the spot instance. B. Have the CI launch a new on-demand EC2 instance when new commits come in, perform all instance configuration and setup, then create an AMI based on the on-demand instance. C. Purchase a Light Utilization Reserved Instance to save money on the continuous integration machine. Use these credits whenever you create AMIs on instances. D. When the CI instance receives commits, attach a new EBS volume to the CI machine. Perform all setup on this EBS volume so you don't need a new EC2 instance to create the AMI.

Answer - A Amazon EC2 Spot instances allow you to bid on spare Amazon EC2 computing capacity. Since Spot instances are often available at a discount compared to On-Demand pricing, you can significantly reduce the cost of running your applications, grow your application's compute capacity and throughput for the same budget, and enable new types of cloud computing applications.

2-8: You need to perform ad-hoc analysis on log data, including searching quickly for specific error codes and reference numbers. Which should you evaluate first? Please select : A. AWS Elasticsearch Service B. AWS RedShift C. AWS EMR D. AWS DynamoDB

Answer - A Amazon Elasticsearch Service makes it easy to deploy, operate, and scale Elasticsearch for log analytics, full text search, application monitoring, and more. Amazon Elasticsearch Service is a fully managed service that delivers Elasticsearch's easy-to-use APIs and real-time capabilities along with the availability, scalability, and security required by production workloads. The service offers built-in integrations with Kibana, Logstash, and AWS services including Amazon Kinesis Firehose, AWS Lambda, and Amazon CloudWatch so that you can go from raw data to actionable insights quickly.

2-1: You are planning on using Amazon RDS for fault tolerance for your application. How does the Amazon RDS Multi-Availability Zone model work? Please select : A. A second, standby database is deployed and maintained in a different availability zone from the master, using synchronous replication. B. A second, standby database is deployed and maintained in a different availability zone from the master, using asynchronous replication. C. A second, standby database is deployed and maintained in a different region from the master, using asynchronous replication. D. A second, standby database is deployed and maintained in a different region from the master, using synchronous replication.

Answer - A Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby (or to a read replica in the case of Amazon Aurora), so that you can resume database operations as soon as the failover is complete. Option B is invalid because the replication is synchronous. Options C and D are invalid because Multi-AZ is built around Availability Zones, not regions.
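A minimal provisioning sketch with boto3; identifiers and credentials below are placeholders:

```python
import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="appdb",
    Engine="mysql",
    DBInstanceClass="db.m4.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="change-me-now",  # placeholder; use a secret store
    MultiAZ=True,  # synchronous standby in a second AZ
)
```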

2-22: You need the absolute highest possible network performance for a cluster computing application. You already selected homogeneous instance types supporting 10 gigabit enhanced networking, made sure that your workload was network bound, and put the instances in a placement group. What is the last optimization you can make? Please select : A. Use 9001 MTU instead of 1500 for Jumbo Frames, to raise packet body to packet overhead ratios. B. Segregate the instances into different peered VPCs while keeping them all in a placement group, so each one has its own Internet Gateway. C. Bake an AMI for the instances and relaunch, so the instances are fresh in the placement group and do not have noisy neighbors. D. Turn off SYN/ACK on your TCP stack or begin using UDP for higher throughput.

Answer - A Jumbo frames allow more than 1500 bytes of data by increasing the payload size per packet, and thus increasing the percentage of the packet that is not packet overhead. Fewer packets are needed to send the same amount of usable data. However, outside of a given AWS region (EC2-Classic), a single VPC, or a VPC peering connection, you will experience a maximum path of 1500 MTU. VPN connections and traffic sent over an Internet gateway are limited to 1500 MTU. If packets are over 1500 bytes, they are fragmented, or they are dropped if the Don't Fragment flag is set in the IP header.

2-31: You need your API backed by DynamoDB to stay online during a total regional AWS failure. You can tolerate a couple minutes of lag or slowness during a large failure event, but the system should recover with normal operation after those few minutes. What is a good approach? Please select : A. Set up DynamoDB cross-region replication in a master-standby configuration, with a single standby in another region. Create an Auto Scaling Group behind an ELB in each of the two regions for your application layer in which DynamoDB is running in. Add a Route53 Latency DNS Record with DNS Failover, using the ELBs in the two regions as the resource records. B. Set up a DynamoDB Multi-Region table. Create an Auto Scaling Group behind an ELB in each of the two regions for your application layer in which the DynamoDB is running in. Add a Route53 Latency DNS Record with DNS Failover, using the ELBs in the two regions as the resource records. C. Set up a DynamoDB Multi-Region table. Create a cross-region ELB pointing to a cross-region Auto Scaling Group, and direct a Route53 Latency DNS Record with DNS Failover to the cross-region ELB. D. Set up DynamoDB cross-region replication in a master-standby configuration, with a single standby in another region. Create a cross-region ELB pointing to a cross-region Auto Scaling Group, and direct a Route53 Latency DNS Record with DNS Failover to the cross-region ELB.

Answer - A Options B and C are invalid because there is no concept of a DynamoDB Multi-Region table. Option D is invalid because there is no concept of a cross-region ELB. The DynamoDB cross-region replication solution uses the Amazon DynamoDB Cross-Region Replication Library. This library uses DynamoDB Streams to keep DynamoDB tables in sync across multiple regions in near real time. When you write to a DynamoDB table in one region, those changes are automatically propagated by the Cross-Region Replication Library to your tables in other regions.

2-11: You run accounting software in the AWS cloud. This software needs to be online continuously during the day every day of the week, and has a very static requirement for compute resources. You also have other, unrelated batch jobs that need to run once per day at any time of your choosing. How should you minimize cost? Please select : A. Purchase a Heavy Utilization Reserved Instance to run the accounting software. Turn it off after hours. Run the batch jobs with the same instance class, so the Reserved Instance credits are also applied to the batch jobs. B. Purchase a Medium Utilization Reserved Instance to run the accounting software. Turn it off after hours. Run the batch jobs with the same instance class, so the Reserved Instance credits are also applied to the batch jobs. C. Purchase a Light Utilization Reserved Instance to run the accounting software. Turn it off after hours. Run the batch jobs with the same instance class, so the Reserved Instance credits are also applied to the batch jobs. D. Purchase a Full Utilization Reserved Instance to run the accounting software. Turn it off after hours. Run the batch jobs with the same instance class, so the Reserved Instance credits are also applied to the batch jobs.

Answer - A Reserved Instances provide you with a significant discount compared to On-Demand Instance pricing. Reserved Instances are not physical instances, but rather a billing discount applied to the use of On-Demand Instances in your account. These On-Demand Instances must match certain attributes in order to benefit from the billing discount.

5-5: Which of the following files needs to be included along with your source code binaries when deploying code using the AWS CodeDeploy service? Please select : A. appspec.yml B. appconfig.yml C. appspec.json D. appconfig.json

Answer - A The AWS Documentation mentions the following: The application specification file (AppSpec file) is a YAML-formatted file used by AWS CodeDeploy to determine: · what it should install onto your instances from your application revision in Amazon S3 or GitHub. · which lifecycle event hooks to run in response to deployment lifecycle events. An AppSpec file must be named appspec.yml and it must be placed in the root of an application's source code's directory structure. Otherwise, deployments will fail.
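A minimal appspec.yml, generated from Python here purely for illustration; the destination path and hook script are hypothetical:

```python
appspec = """\
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/myapp
hooks:
  AfterInstall:
    - location: scripts/restart_server.sh
      timeout: 300
"""

# The file must sit at the root of the revision.
with open("appspec.yml", "w") as f:
    f.write(appspec)
```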

4-42: Which of the following tools is available to send log data from EC2 Instances? Please select : A. Logs Agent B. CloudWatch Agent C. Logs console. D. Logs Stream

Answer - A The AWS Documentation mentions the following: The CloudWatch Logs agent provides an automated way to send log data to CloudWatch Logs from Amazon EC2 instances. The agent is comprised of the following components: · A plug-in to the AWS CLI that pushes log data to CloudWatch Logs. · A script (daemon) that initiates the process to push data to CloudWatch Logs. · A cron job that ensures that the daemon is always running.

5-44: You are using Elastic Beanstalk to deploy an application that consists of a web and application server. There is a requirement to run some Python scripts before the application version is deployed to the web server. Which of the following can be used to achieve this? Please select : A. Make use of container commands B. Make use of Docker containers C. Make use of custom resources D. Make use of multiple Elastic Beanstalk environments

Answer - A The AWS Documentation mentions the following: You can use the container_commands key to execute commands that affect your application source code. Container commands run after the application and web server have been set up and the application version archive has been extracted, but before the application version is deployed. Non-container commands and other customization operations are performed prior to the application source code being extracted.
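A sketch of an .ebextensions config using container_commands, written from Python for illustration; the migration script name is hypothetical:

```python
import os

config = """\
container_commands:
  01_run_migrations:
    command: "python scripts/migrate.py"
    leader_only: true
"""

# Config files live under .ebextensions/ in the application source bundle.
os.makedirs(".ebextensions", exist_ok=True)
with open(".ebextensions/01-scripts.config", "w") as f:
    f.write(config)
```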

3-42: You are planning on using AWS CodeDeploy in your AWS environment. Which of the below features of AWS CodeDeploy can be used to specify scripts to be run on each instance at various stages of the deployment process? Please select : A. AppSpec file B. CodeDeploy file C. Config file D. Deploy file

Answer - A The AWS Documentation mentions the following on AWS CodeDeploy: An application specification file (AppSpec file), which is unique to AWS CodeDeploy, is a YAML-formatted file used to: map the source files in your application revision to their destinations on the instance; specify custom permissions for deployed files; and specify scripts to be run on each instance at various stages of the deployment process.

1-3: You currently have the following setup in AWS 1) An Elastic Load Balancer 2) An Auto Scaling Group which launches EC2 Instances 3) AMIs with your code pre-installed You want to deploy the updates to your app to only a certain number of users. You want to have a cost-effective solution. You should also be able to revert back quickly. Which of the below solutions is the most feasible one? Please select : A. Create a second ELB and Auto Scaling Group. Create the AMI with the new app. Use a new launch configuration. Use Route 53 Weighted Round Robin records to adjust the proportion of traffic hitting the two ELBs. B. Create new AMIs with the new app. Then use the new EC2 instances in half proportion to the older instances. C. Redeploy with AWS Elastic Beanstalk and Elastic Beanstalk versions. Use Route 53 Weighted Round Robin records to adjust the proportion of traffic hitting the two ELBs D. Create a full second stack of instances, cut the DNS over to the new stack of instances, and change the DNS back if a rollback is needed.

Answer - A The Weighted Routing policy of Route53 can be used to direct a proportion of traffic to your application. The best option is to create a second ELB, attach the new Auto Scaling Group, and then use Route53 to divert the traffic. Option B is wrong because just having EC2 instances running with the new code will not help. Option C is wrong because Elastic Beanstalk is better suited to development environments, and there is no mention of having 2 environments whose environment URLs can be swapped. Option D is wrong because you still need Route53 to split the traffic.
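A sketch of the weighted records with boto3 (the zone ID, record name, and ELB DNS names are placeholders); here 10% of traffic goes to the new stack:

```python
import boto3

route53 = boto3.client("route53")

def weighted(name, target, identifier, weight):
    # One weighted record; records sharing a name split traffic by weight.
    return {"Action": "UPSERT", "ResourceRecordSet": {
        "Name": name, "Type": "CNAME", "TTL": 60,
        "SetIdentifier": identifier, "Weight": weight,
        "ResourceRecords": [{"Value": target}],
    }}

route53.change_resource_record_sets(
    HostedZoneId="Z123456ABCDEFG",
    ChangeBatch={"Changes": [
        weighted("app.example.com", "old-elb.amazonaws.com", "old", 90),
        weighted("app.example.com", "new-elb.amazonaws.com", "new", 10),
    ]},
)
```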

4-65: You are responsible for an application that leverages the Amazon SDK and Amazon EC2 roles for storing and retrieving data from Amazon S3, accessing multiple DynamoDB tables, and exchanging messages with Amazon SQS queues. Your VP of Compliance is concerned that you are not following security best practices for securing all of this access. He has asked you to verify that the application's AWS access keys are not older than six months and to provide control evidence that these keys will be rotated a minimum of once every six months. Which option will provide your VP with the requested information? Please select : A. Create a script to query the IAM list-access-keys API to get your application access key creation date and create a batch process to periodically create a compliance report for your VP. B. Provide your VP with a link to IAM AWS documentation to address the VP's key rotation concerns. C. Update your application to log changes to its AWS access key credential file and use a periodic Amazon EMR job to create a compliance report for your VP D. Create a new set of instructions for your configuration management tool that will periodically create and rotate the application's existing access keys and provide a compliance report to your VP

Answer - A The list-access-keys API returns information about the access key IDs associated with the specified IAM user, including the date each key was created. If there are none, the action returns an empty list.

4-57: A company is building a two-tier web application to serve dynamic transaction-based content. The data tier is leveraging an Online Transactional Processing (OLTP) database. What services should you leverage to enable an elastic and scalable web tier? Please select : A. Elastic Load Balancing, Amazon EC2, and Auto Scaling B. Elastic Load Balancing, Amazon RDS with Multi-AZ, and Amazon S3 C. Amazon RDS with Multi-AZ and Auto Scaling D. Amazon EC2, Amazon Dynamo DB, and Amazon S3

Answer - A The question asks about a scalable web tier, not the data tier. Options B, C and D can be eliminated because each of them is built around database or storage services rather than an elastic web tier.

2-35: You need to deploy a new application version to production. Because the deployment is high-risk, you need to roll the new version out to users over a number of hours, to make sure everything is working correctly. You need to be able to control the proportion of users seeing the new version of the application down to the percentage point. You use ELB and EC2 with Auto Scaling Groups and custom AMIs with your code pre-installed assigned to Launch Configurations. There are no database-level changes during your deployment. You have been told you cannot spend too much money, so you must not increase the number of EC2 instances much at all during the deployment, but you also need to be able to switch back to the original version of code quickly if something goes wrong. What is the best way to meet these requirements? Please select : A. Create a second ELB, Auto Scaling Launch Configuration, and Auto Scaling Group using the Launch Configuration. Create AMIs with all code pre-installed. Assign the new AMI to the second Auto Scaling Launch Configuration. Use Route53 Weighted Round Robin Records to adjust the proportion of traffic hitting the two ELBs. B. Use the Blue-Green deployment method to enable the fastest possible rollback if needed. Create a full second stack of instances and cut the DNS over to the new stack of instances, and change the DNS back if a rollback is needed. C. Create AMIs with all code pre-installed. Assign the new AMI to the Auto Scaling Launch Configuration, to replace the old one. Gradually terminate instances running the old code (launched with the old Launch Configuration) and allow the new AMIs to boot to adjust the traffic balance to the new code. On rollback, reverse the process by doing the same thing, but changing the AMI on the Launch Config back to the original code. D. Migrate to use AWS Elastic Beanstalk. Use the established and well-tested Rolling Deployment setting AWS provides on the new Application Environment, publishing a zip bundle of the new code and adjusting the wait period to spread the deployment over time. Re-deploy the old code bundle to rollback if needed.

Answer - A This is an example of a Blue/Green deployment. You can shift traffic all at once or you can do a weighted distribution. With Amazon Route 53, you can define a percentage of traffic to go to the green environment and gradually update the weights until the green environment carries the full production traffic. A weighted distribution provides the ability to perform canary analysis where a small percentage of production traffic is introduced to a new environment. You can test the new code and monitor for errors, limiting the blast radius if any issues are encountered. It also allows the green environment to scale out to support the full production load if you're using Elastic Load Balancing.

2-30: You have an asynchronous processing application using an Auto Scaling Group and an SQS Queue. The Auto Scaling Group scales according to the depth of the job queue. The completion velocity of the jobs has gone down, the Auto Scaling Group size has maxed out, but the inbound job velocity did not increase. What is a possible issue? Please select : A. Some of the new jobs coming in are malformed and unprocessable. B. The routing tables changed and none of the workers can process events anymore. C. Someone changed the IAM Role Policy on the instances in the worker group and broke permissions to access the queue. D. The scaling metric is not functioning correctly.

Answer - A This question is best approached by validating each option. Option B is invalid because a routing table change would affect all worker processes and no jobs would be completed. Option C is invalid because if the IAM Role were broken, no jobs would be completed at all. Option D is invalid because the scaling is happening; the jobs are just not getting completed.

4-67: The development team has developed a new feature that uses an AWS service and wants to test it from inside a staging VPC. How should you test this feature with the fastest turnaround time? Please select : A. Launch an Amazon Elastic Compute Cloud (EC2) instance in the staging VPC in response to a development request, and use configuration management to set up the application. Run any testing harnesses to verify application functionality and then use Amazon Simple Notification Service (SNS) to notify the development team of the results. B. Use an Amazon EC2 instance that frequently polls the version control system to detect the new feature, use AWS CloudFormation and Amazon EC2 user data to run any testing harnesses to verify application functionality and then use Amazon SNS to notify the development team of the results. C. Use an Elastic Beanstalk application that polls the version control system to detect the new feature, use AWS CloudFormation and Amazon EC2 user data to run any testing harnesses to verify application functionality and then use Amazon Kinesis to notify the development team of the results. D. Use AWS CloudFormation to launch an Amazon EC2 instance use Amazon EC2 user data to run any testing harnesses to verify application functionality and then use Amazon Kinesis to notify the development team of the results.

Answer - A Using Amazon Kinesis would just take more time to set up and is not the right way to notify the relevant team in the shortest time possible. Since the test needs to be conducted in the staging VPC, it is best to launch the EC2 instance in the staging VPC.

2-74: When your application is deployed onto an OpsWorks stack, which of the following events is triggered by OpsWorks? Please select : A. Deploy B. Setup C. Configure D. Shutdown

Answer - A When you deploy an application, AWS OpsWorks Stacks triggers a Deploy event, which runs each layer's Deploy recipes. AWS OpsWorks Stacks also installs stack configuration and deployment attributes that contain all of the information needed to deploy the app, such as the app's repository and database connection data.

2-27: Which of the following tools does not directly support AWS OpsWorks, for monitoring your stacks? Please select : A. AWS Config B. Amazon CloudWatch Metrics C. AWS CloudTrail D. Amazon CloudWatch Logs

Answer - A You can monitor your stacks in the following ways: AWS OpsWorks Stacks uses Amazon CloudWatch to provide thirteen custom metrics with detailed monitoring for each instance in the stack. AWS OpsWorks Stacks integrates with AWS CloudTrail to log every AWS OpsWorks Stacks API call and store the data in an Amazon S3 bucket. You can use Amazon CloudWatch Logs to monitor your stack's system, application, and custom logs.

4-37: You are planning on configuring logs for your Elastic Load Balancer. At what intervals do the logs get produced by the Elastic Load Balancing service? Choose 2 answers from the options given below Please select : A. 5 minutes B. 60 minutes C. 1 minute D. 30 seconds

Answer - A and B The AWS Documentation mentions the following: Elastic Load Balancing publishes a log file for each load balancer node at the interval you specify. You can specify a publishing interval of either 5 minutes or 60 minutes when you enable the access log for your load balancer. By default, Elastic Load Balancing publishes logs at a 60-minute interval.
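For illustration, enabling 5-minute access logs on a Classic Load Balancer with boto3 (load balancer and bucket names are placeholders):

```python
import boto3

elb = boto3.client("elb")  # Classic Load Balancer API

elb.modify_load_balancer_attributes(
    LoadBalancerName="my-load-balancer",
    LoadBalancerAttributes={"AccessLog": {
        "Enabled": True,
        "S3BucketName": "my-elb-logs",
        "S3BucketPrefix": "prod",
        "EmitInterval": 5,  # either 5 or 60 minutes
    }},
)
```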

5-66: You are a DevOps Engineer for your company. There is a requirement to log each time an instance is scaled in or scaled out from an existing Auto Scaling Group. Which of the following steps can be implemented to fulfil this requirement? Each step forms part of the solution. Please select : A. Create a Lambda function which will write the event to CloudWatch logs B. Create a CloudWatch event which will trigger the Lambda function. C. Create an SQS queue which will write the event to CloudWatch logs D. Create a CloudWatch event which will trigger the SQS queue.

Answer - A and B The AWS documentation mentions the following: You can run an AWS Lambda function that logs an event whenever an Auto Scaling group launches or terminates an Amazon EC2 instance, and whether the launch or terminate event was successful.
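A sketch of the event rule with boto3 (the group name and Lambda ARN are placeholders; the function would also need a resource-based permission allowing events.amazonaws.com to invoke it):

```python
import json

import boto3

events = boto3.client("events")

# Fire on successful scale-out/scale-in events of one Auto Scaling group.
events.put_rule(
    Name="asg-scaling-events",
    EventPattern=json.dumps({
        "source": ["aws.autoscaling"],
        "detail-type": ["EC2 Instance Launch Successful",
                        "EC2 Instance Terminate Successful"],
        "detail": {"AutoScalingGroupName": ["app-asg"]},
    }),
)
events.put_targets(
    Rule="asg-scaling-events",
    Targets=[{
        "Id": "log-to-lambda",
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:log-asg",
    }],
)
```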

4-66: One of your engineers has written a web application in the Go programming language and has asked your DevOps team to deploy it to AWS. The application code is hosted on a Git repository. What are your options? (Select Two) Please select : A. Create a new AWS Elastic Beanstalk application, and configure a Go environment to host your application. Using Git, check out the latest version of the code, configure the local repository for Elastic Beanstalk using the "eb start" command, and then use the "git aws.push" command to deploy the application B. Write a Dockerfile that installs the Go base image and uses Git to fetch your application. Create a new AWS OpsWorks stack that contains a Docker layer that uses the Dockerrun.aws.json file to deploy your container and then use the Dockerfile to automate the deployment. C. Write a Dockerfile that installs the Go base image and fetches your application using Git. Create a new AWS Elastic Beanstalk application and use this Dockerfile to automate the deployment. D. Write a Dockerfile that installs the Go base image and fetches your application using Git. Create an AWS CloudFormation template that creates and associates an AWS::EC2::Instance resource type with an AWS::EC2::Container resource type.

Answer - A and C OpsWorks works with Chef recipes and not with Docker containers, so Option B is invalid. There is no AWS::EC2::Container resource in CloudFormation, so Option D is invalid. Below is the documentation on Elastic Beanstalk and Docker: Elastic Beanstalk supports the deployment of web applications from Docker containers. With Docker containers, you can define your own runtime environment. You can choose your own platform, programming language, and any application dependencies (such as package managers or tools), that aren't supported by other platforms. Docker containers are self-contained and include all the configuration information and software your web application requires to run.

2-49: The operations team and the development team want a single place to view both operating system and application logs. How should you implement this using AWS services? Choose two from the options below Please select : A. Using AWS CloudFormation, create a CloudWatch Logs LogGroup and send the operating system and application logs of interest using the CloudWatch Logs Agent. B. Using AWS CloudFormation and configuration management, set up remote logging to send events via UDP packets to CloudTrail. C. Using configuration management, set up remote logging to send events to Amazon Kinesis and insert these into Amazon CloudSearch or Amazon Redshift, depending on available analytic tools. D. Using AWS CloudFormation, merge the application logs with the operating system logs, and use IAM Roles to allow both teams to have access to view console output from Amazon EC2.

Answer - A and C Option B is invalid because CloudTrail is not designed to receive UDP packets. Option D is invalid because CloudWatch Logs are already available for this purpose, so there is no need to merge logs and manage console access instead. You can use Amazon CloudWatch Logs to monitor, store, and access your log files from Amazon Elastic Compute Cloud (Amazon EC2) instances, AWS CloudTrail, and other sources. You can then retrieve the associated log data from CloudWatch Logs.

5-4: You are a DevOps Engineer and are designing an OpsWorks stack in AWS. The company has some custom recipes that are part of their on-premise Chef configuration. These same recipes need to be run whenever an instance is launched in OpsWorks. Which of the following steps need to be carried out to ensure this requirement gets fulfilled? Choose 2 answers from the options given below Please select : A. Ensure the custom cookbooks option is set in the OpsWorks stack. B. Ensure the custom cookbooks option is set in the OpsWorks layer. C. Ensure the recipe is placed as part of the Setup lifecycle event as part of the Layer setting. D. Ensure the recipe is placed as part of the Setup lifecycle event as part of the Stack setting.

Answer - A and C The AWS Documentation mentions the following: Each layer has a set of built-in recipes assigned to each lifecycle event, although some layers lack Undeploy recipes. When a lifecycle event occurs on an instance, AWS OpsWorks Stacks runs the appropriate set of recipes for the associated layer.

5-11: You have a development team that is planning for continuous release cycles for their application. They want to use the AWS services available to be able to deploy a web application and also ensure they can roll back to previous versions fairly quickly. Which of the following options can be used to achieve this requirement? Choose 2 answers from the options given below Please select : A. Use the Elastic Beanstalk service. Use Application versions and upload the revisions of your application. Deploy the revisions accordingly and roll back to prior versions accordingly. B. Use the Elastic Beanstalk service. Create separate environments for each application revision. Revert back to an environment in case the new environment does not work. C. Use the OpsWorks service to deploy the web instances. Deploy the app to the OpsWorks web layer. Roll back using the Deploy app in OpsWorks. D. Use the CloudFormation service. Create separate templates for each application revision and deploy them accordingly

Answer - A and C The AWS documentation mentions the following: In Elastic Beanstalk, an application version refers to a specific, labeled iteration of deployable code for a web application. An application version points to an Amazon Simple Storage Service (Amazon S3) object that contains the deployable code such as a Java WAR file. An application version is part of an application. Applications can have many versions and each application version is unique. In a running environment, you can deploy any application version you already uploaded to the application or you can upload and immediately deploy a new application version. You might upload multiple application versions to test differences between one version of your web application and another.

2-50: You have a large number of web servers in an Auto Scaling group behind a load balancer. On an hourly basis, you want to filter and process the logs to collect data on unique visitors, and then put that data in a durable data store in order to run reports. Web servers in the Auto Scaling group are constantly launching and terminating based on your scaling policies, but you do not want to lose any of the log data from these servers during a stop/termination initiated by a user or by Auto Scaling. What two approaches will meet these requirements? Choose two answers from the options given below. Please select : A. Install an Amazon Cloudwatch Logs Agent on every web server during the bootstrap process. Create a CloudWatch log group and define Metric Filters to create custom metrics that track unique visitors from the streaming web server logs. Create a scheduled task on an Amazon EC2 instance that runs every hour to generate a new report based on the Cloudwatch custom metrics. B. On the web servers, create a scheduled task that executes a script to rotate and transmit the logs to Amazon Glacier. Ensure that the operating system shutdown procedure triggers a logs transmission when the Amazon EC2 instance is stopped/terminated. Use Amazon Data Pipeline to process the data in Amazon Glacier and run reports every hour. C. On the web servers, create a scheduled task that executes a script to rotate and transmit the logs to an Amazon S3 bucket. Ensure that the operating system shutdown procedure triggers a logs transmission when the Amazon EC2 instance is stopped/terminated. Use AWS Data Pipeline to move log data from the Amazon S3 bucket to Amazon Redshift In order to process and run reports every hour. D. Install an AWS Data Pipeline Logs Agent on every web server during the bootstrap process. Create a log group object in AWS Data Pipeline, and define Metric Filters to move processed log data directly from the web servers to Amazon Redshift and run reports every hour.

Answer - A and C You can use the CloudWatch Logs agent installer on an existing EC2 instance to install and configure the CloudWatch Logs agent. For more information, please visit the below link: http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/QuickStartEC2Instance.html You can publish your own metrics to CloudWatch using the AWS CLI or an API. For more information, please visit the below link: http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/publishingMetrics.html Amazon Redshift is a fast, fully managed data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and your existing Business Intelligence (BI) tools. It allows you to run complex analytic queries against petabytes of structured data, using sophisticated query optimization, columnar storage on high-performance local disks, and massively parallel query execution. Most results come back in seconds. For more information on copying data from S3 to Redshift, please refer to the below link: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-copydata-redshift.html

2-69: Which of the following deployment types are available in the CodeDeploy service? Choose 2 answers from the options given below Please select : A. In-place deployment B. Rolling deployment C. Immutable deployment D. Blue/green deployment

Answer - A and D The following deployment types are available: 1. In-place deployment: The application on each instance in the deployment group is stopped, the latest application revision is installed, and the new version of the application is started and validated. 2. Blue/green deployment: The instances in a deployment group (the original environment) are replaced by a different set of instances (the replacement environment).

1-44: When an Auto Scaling group is running in Amazon Elastic Compute Cloud (EC2), your application rapidly scales up and down in response to load within a 10-minute window; however, after the load peaks, you begin to see problems in your configuration management system where previously terminated Amazon EC2 resources are still showing as active. What would be a reliable and efficient way to handle the cleanup of Amazon EC2 resources within your configuration management system? Choose two answers from the options given below Please select : A. Write a script that is run by a daily cron job on an Amazon EC2 instance and that executes API Describe calls of the EC2 Auto Scaling group and removes terminated instances from the configuration management system. B. Configure an Amazon Simple Queue Service (SQS) queue for Auto Scaling actions that has a script that listens for new messages and removes terminated instances from the configuration management system. C. Use your existing configuration management system to control the launching and bootstrapping of instances to reduce the number of moving parts in the automation. D. Write a small script that is run during Amazon EC2 instance shutdown to de-register the resource from the configuration management system.

Answer - A and D There is a rich set of CLI commands available for EC2 instances. The CLI reference is located at the following link: http://docs.aws.amazon.com/cli/latest/reference/ec2/ You can use the describe-instances command to describe the EC2 instances. If you specify one or more instance IDs, Amazon EC2 returns information for those instances. If you do not specify instance IDs, Amazon EC2 returns information for all relevant instances. If you specify an instance ID that is not valid, an error is returned. If you specify an instance that you do not own, it is not included in the returned results. http://docs.aws.amazon.com/cli/latest/reference/ec2/describe-instances.html You can use this output to identify the instances that need to be removed from the configuration management system.
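A sketch of the daily reconciliation from option A, assuming boto3; remove_from_cmdb is a hypothetical hook into the configuration management system:

```python
import boto3

ec2 = boto3.client("ec2")

def remove_from_cmdb(instance_id: str) -> None:
    # Hypothetical: deregister the instance from the CMDB.
    print("deregistering", instance_id)

# List instances EC2 reports as terminated, page by page.
paginator = ec2.get_paginator("describe_instances")
pages = paginator.paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["terminated"]}])
for page in pages:
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            remove_from_cmdb(instance["InstanceId"])
```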

5-14: You are a DevOps engineer for your company. You have been instructed to deploy Docker containers using the OpsWorks service. How could you achieve this? Choose 2 answers from the options given below Please select : A. Use custom cookbooks for your OpsWorks stack and provide the Git repository which has the Chef recipes for the Docker containers. B. Use Elastic Beanstalk to deploy Docker containers since this is not possible in OpsWorks. Then attach the Elastic Beanstalk environment as a layer in OpsWorks. C. Use CloudFormation to deploy Docker containers since this is not possible in OpsWorks. Then attach the CloudFormation resources as a layer in OpsWorks. D. In the App for the OpsWorks deployment, specify the Git URL for the recipes which will deploy the applications in the Docker environment.

Answer - A and D This is mentioned in the AWS documentation: AWS OpsWorks lets you deploy and manage applications of all shapes and sizes. OpsWorks layers let you create blueprints for EC2 instances to install and configure any software that you want.

5-43: You are using Jenkins as your continuous integration system for the application hosted in AWS. The builds are then placed on newly launched EC2 Instances. You want to ensure that the overall cost of the entire continuous integration and deployment pipeline is minimized. Which of the below options would meet these requirements? Choose 2 answers from the options given below Please select : A. Ensure that all build tests are conducted using Jenkins before deploying the build to newly launched EC2 Instances. B. Ensure that all build tests are conducted on the newly launched EC2 Instances. C. Ensure the Instances are launched only when the build tests are completed. D. Ensure the Instances are created beforehand for faster turnaround time for the application builds to be placed.

Answer - A and C To keep costs low, carry out the build tests on the Jenkins server itself and launch the EC2 Instances only once those tests have completed. The completed build can then be transferred onto the newly launched EC2 Instances. Option D is invalid because keeping instances running ahead of time adds cost without being needed for the builds.

3-56: Which of the following run command types are available for OpsWorks stacks? Choose 3 answers from the options given below. Please select : A. Update Custom Cookbooks B. Execute Recipes C. Configure D. UnDeploy

Answer - A,B and C The following run command types are available:
1) Update Custom Cookbooks - Updates the instances' custom cookbooks with the current version from the repository. This command does not run any recipes.
2) Execute Recipes - Executes a specified set of recipes on the instances.
3) Setup - Runs the instances' Setup recipes.
4) Configure - Runs the instances' Configure recipes.
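A minimal sketch of issuing these run commands from the CLI (the stack ID and recipe name are placeholders):

    aws opsworks create-deployment \
        --stack-id 2f18b4cb-EXAMPLE \
        --command '{"Name":"update_custom_cookbooks"}'

    aws opsworks create-deployment \
        --stack-id 2f18b4cb-EXAMPLE \
        --command '{"Name":"execute_recipes","Args":{"recipes":["mycookbook::myrecipe"]}}'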

2-56: When creating an Elastic Beanstalk environment using the Wizard , what are the 3 configuration options presented to you Please select : A. Choosing the type of Environment - Web or Worker environment B. Choosing the platform type - Node.js , IIS , etc C. Choosing the type of Notification - SNS or SQS D. Choosing whether you want a highly available environment or not

Answer - A,B and D

4-17: Which of the following are the basic stages of a CI/CD Pipeline. Choose 3 answers from the options below Please select : A. Source Control B. Build C. Run D. Production

Answer - A,B and D

4-33: Which of the following can be configured as targets for Cloudwatch Events. Choose 3 answers from the options given below Please select : A. Amazon EC2 Instances B. AWS Lambda Functions C. Amazon CodeCommit D. Amazon ECS Tasks

Answer - A,B and D The AWS Documentation mentions the below You can configure the following AWS services as targets for CloudWatch Events:
· Amazon EC2 instances
· AWS Lambda functions
· Streams in Amazon Kinesis Streams
· Delivery streams in Amazon Kinesis Firehose
· Amazon ECS tasks
· SSM Run Command
· SSM Automation
· Step Functions state machines
· Pipelines in AWS CodePipeline
· Amazon Inspector assessment templates
· Amazon SNS topics
· Amazon SQS queues
· Built-in targets
· The default event bus of another AWS account

2-58: Which of the following services can be used in conjunction with Cloudwatch Logs. Choose the 3 most viable services from the options given below Please select : A. Amazon Kinesis B. Amazon S3 C. Amazon SQS D. Amazon Lambda

Answer - A,B and D The AWS Documentation lists the following services which can be integrated with CloudWatch Logs: 1) Amazon Kinesis - Log data can be fed into Kinesis for real-time analysis 2) Amazon S3 - You can use CloudWatch Logs to store your log data in highly durable storage such as S3. 3) AWS Lambda - Lambda functions can be designed to work with CloudWatch Logs
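A hedged sketch of wiring a log group to a Kinesis stream with a subscription filter (the log group, stream, and role names are placeholders, and the role must grant CloudWatch Logs permission to write to the stream):

    aws logs put-subscription-filter \
        --log-group-name my-app-logs \
        --filter-name kinesis-subscription \
        --filter-pattern "" \
        --destination-arn arn:aws:kinesis:us-east-1:123456789012:stream/my-log-stream \
        --role-arn arn:aws:iam::123456789012:role/CWLtoKinesisRole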

2-70: Which of the following credentials types are supported by AWS CodeCommit Please select : A. Git Credentials B. SSH Keys C. Username/password D. AWS Access Key

Answer - A,B and D The AWS documentation mentions IAM supports AWS CodeCommit with three types of credentials: Git credentials, an IAM-generated user name and password pair you can use to communicate with AWS CodeCommit repositories over HTTPS. SSH keys, a locally generated public-private key pair that you can associate with your IAM user to communicate with AWS CodeCommit repositories over SSH. AWS access keys, which you can use with the credential helper included with the AWS CLI to communicate with AWS CodeCommit repositories over HTTPS.

4-76: As part of your deployment pipeline, you want to enable automated testing of your AWS CloudFormation template. What testing should be performed to enable faster feedback while minimizing costs and risk? Select three answers from the options given below Please select : A. Use the AWS CloudFormation Validate Template to validate the syntax of the template B. Use the AWS CloudFormation Validate Template to validate the properties of resources defined in the template. C. Validate the template's syntax using a general JSON parser. D. Validate the AWS CloudFormation template against the official XSD schema definition published by Amazon Web Services. E. Update the stack with the template. If the template fails, rollback will return the stack and its resources to exactly the same state.

Answer - A,C and E The AWS documentation mentions the following The aws cloudformation validate-template command is designed to check only the syntax of your template. It does not ensure that the property values that you have specified for a resource are valid for that resource. Nor does it determine the number of resources that will exist when the stack is created. To check the operational validity, you need to attempt to create the stack. There is no sandbox or test area for AWS CloudFormation stacks, so you are charged for the resources you create during testing.
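A minimal sketch of the syntax check referred to in option A (the template file name is a placeholder):

    aws cloudformation validate-template --template-body file://template.json

As the explanation above notes, this call only checks syntax; operational validity still requires actually creating or updating the stack.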

1-26: You are doing a load testing exercise on your application hosted on AWS. While testing your Amazon RDS MySQL DB instance, you notice that when you hit 100% CPU utilization on it, your application becomes non-responsive. Your application is read-heavy. What are methods to scale your data tier to meet the application's needs? Choose three answers from the options given below Please select : A. Add Amazon RDS DB read replicas, and have your application direct read queries to them. B. Add your Amazon RDS DB instance to an Auto Scaling group and configure your CloudWatch metric based on CPU utilization. C. Use an Amazon SQS queue to throttle data going to the Amazon RDS DB instance. D. Use ElastiCache in front of your Amazon RDS DB to cache common queries. E. Shard your data set among multiple Amazon RDS DB instances. F. Enable Multi-AZ for your Amazon RDS DB instance.

Answer - A,D and E Amazon RDS Read Replicas provide enhanced performance and durability for database (DB) instances. This replication feature makes it easy to elastically scale out beyond the capacity constraints of a single DB Instance for read-heavy database workloads. You can create one or more replicas of a given source DB Instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput. For more information on Read Replicas please refer to the below link: https://aws.amazon.com/rds/details/read-replicas/ Sharding is a common concept used to split data across multiple databases or tables. For more information on sharding please refer to the below link: https://forums.aws.amazon.com/thread.jspa?messageID=203052 Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory data store or cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory data stores, instead of relying entirely on slower disk-based databases. For more information on ElastiCache please refer to the below link: https://aws.amazon.com/elasticache/ Option B is not valid because an Auto Scaling group is not a way to scale a database. Option C is not ideal because SQS is not suited to storing data destined for a database, given its message size limits. Option F is invalid because the Multi-AZ feature is only a failover option.

4-48: You work as a Devops Engineer for your company. There are currently a number of environments hosted via Elastic Beanstalk. There is a requirement to use the fastest deployment method for changes to the Elastic Beanstalk environment. Which deployment method is the fastest with Elastic Beanstalk? Please select : A. Rolling with additional batch B. All at Once C. Blue/Green D. Rolling

Answer - B The All at Once method deploys the new version to all instances simultaneously, which makes it the fastest deployment option. The trade-off is that all instances are out of service for a short time while the deployment completes.

2-18: You need to perform ad-hoc business analytics queries on well-structured data. Data comes in constantly at a high velocity. Your business intelligence team can understand SQL. What AWS service(s) should you look to first? Please select : A. Kinesis Firehose + RDS B. Kinesis Firehose + RedShift C. EMR using Hive D. EMR running Apache Spark

Answer - B Amazon Kinesis Firehose is the easiest way to load streaming data into AWS. It can capture, transform, and load streaming data into Amazon Kinesis Analytics, Amazon S3, Amazon Redshift, and Amazon Elasticsearch Service, enabling near real-time analytics with existing business intelligence tools and dashboards you're already using today. It is a fully managed service that automatically scales to match the throughput of your data and requires no ongoing administration. It can also batch, compress, and encrypt the data before loading it, minimizing the amount of storage used at the destination and increasing security.

4-38: Your application has an Auto Scaling group of three EC2 instances behind an Elastic Load Balancer. Your Auto Scaling group was updated with a new launch configuration that refers to an updated AMI. During the deployment, customers complained that they were receiving several errors even though all instances passed the ELB health checks. How can you prevent this from happening again? Please select : A. Create a new ELB and attach the Autoscaling Group to the ELB B. Create a new launch configuration with the updated AMI and associate it with the Auto Scaling group. Increase the size of the group to six and when instances become healthy revert to three. C. Manually terminate the instances with the older launch configuration. D. Update the launch configuration instead of updating the Autoscaling Group

Answer - B An Auto Scaling group is associated with one launch configuration at a time, and you can't modify a launch configuration after you've created it. To change the launch configuration for an Auto Scaling group, you can use an existing launch configuration as the basis for a new launch configuration and then update the Auto Scaling group to use the new launch configuration. After you change the launch configuration for an Auto Scaling group, any new instances are launched using the new configuration options, but existing instances are not affected. Then, to ensure new instances are launched, change the size of the Auto Scaling group to 6, and once the new instances are launched and healthy, change it back to 3.
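A minimal sketch of that sequence (launch configuration name, AMI ID, and group name are placeholders):

    aws autoscaling create-launch-configuration \
        --launch-configuration-name lc-v2 \
        --image-id ami-0abcdef1234567890 \
        --instance-type t2.micro

    aws autoscaling update-auto-scaling-group \
        --auto-scaling-group-name my-asg \
        --launch-configuration-name lc-v2 \
        --max-size 6 --desired-capacity 6

    # Once the new instances pass health checks, shrink back:
    aws autoscaling update-auto-scaling-group \
        --auto-scaling-group-name my-asg \
        --desired-capacity 3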

4-68: A group of developers in your organization want to migrate their existing application into Elastic Beanstalk and want to use Elastic load Balancing and Amazon SQS. They are currently using a custom application server. How would you deploy their system to Elastic Beanstalk? Please select : A. Configure an Elastic Beanstalk platform using AWS OpsWorks deploy it to Elastic Beanstalk and run a script that creates a load balancer and an Amazon SQS queue. B. Use a Docker container that has the third party application server installed on it and that creates the load balancer and an Amazon SQS queue using the application source bundle feature. C. Create a custom Elastic Beanstalk platform that contains the third party application server and runs a script that creates a load balancer and an Amazon SQS queue. D. Configure an AWS OpsWorks stack that installs the third party application server and creates a load balancer and an Amazon SQS queue and then deploys it to Elastic Beanstalk.

Answer - B Below is the documentation on Elastic Beanstalk and Docker Elastic Beanstalk supports the deployment of web applications from Docker containers. With Docker containers, you can define your own runtime environment. You can choose your own platform, programming language, and any application dependencies (such as package managers or tools) that aren't supported by other platforms. Docker containers are self-contained and include all the configuration information and software your web application requires to run.

1-27: You are administering a continuous integration application that polls version control for changes and then launches new Amazon EC2 instances for a full suite of build tests. What should you do to ensure the lowest overall cost while being able to run as many tests in parallel as possible? Please select : A. Perform syntax checking on the continuous integration system before launching a new Amazon EC2 instance for build test, unit and integration tests. B. Perform syntax and build tests on the continuous integration system before launching the new Amazon EC2 instance unit and integration tests. C. Perform all tests on the continuous integration system, using AWS OpsWorks for unit, integration, and build tests. D. Perform syntax checking on the continuous integration system before launching a new AWS Data Pipeline for coordinating the output of unit, integration, and build tests.

Answer - B Continuous Integration (CI) is a development practice that requires developers to integrate code into a shared repository several times a day. Each check-in is then verified by an automated build, allowing teams to detect problems early. Options A and D are invalid because you can run build tests on a CI system, not only syntax tests; syntax checks are normally done at coding time, not at build time. Option C is invalid because OpsWorks is not ideally used for build and integration tests.

1-36: As part of your continuous deployment process, your application undergoes an I/O load performance test before it is deployed to production using new AMIs. The application uses one Amazon Elastic Block Store (EBS) PIOPS volume per instance and requires consistent I/O performance. Which of the following must be carried out to ensure that I/O load performance tests yield the correct results in a repeatable manner? Please select : A. Ensure that the I/O block sizes for the test are randomly selected. B. Ensure that the Amazon EBS volumes have been pre-warmed by reading all the blocks before the test. C. Ensure that snapshots of the Amazon EBS volumes are created as a backup. D. Ensure that the Amazon EBS volume is encrypted.

Answer - B During the AMI-creation process, Amazon EC2 creates snapshots of your instance's root volume and any other EBS volumes attached to your instance New EBS volumes receive their maximum performance the moment that they are available and do not require initialization (formerly known as pre-warming). However, storage blocks on volumes that were restored from snapshots must be initialized (pulled down from Amazon S3 and written to the volume) before you can access the block. This preliminary action takes time and can cause a significant increase in the latency of an I/O operation the first time each block is accessed. For most applications, amortizing this cost over the lifetime of the volume is acceptable. Option A is invalid because block sizes are predetermined and should not be randomly selected. Option C is invalid because this is part of continuous integration; the volumes can be destroyed after the test, so snapshots should not be created unnecessarily. Option D is invalid because encryption is a security feature and not normally part of load tests.
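A hedged sketch of the initialization step from option B, reading every block of a volume restored from a snapshot before the performance test begins (the device name /dev/xvdf is a placeholder and varies by instance and OS):

    sudo dd if=/dev/xvdf of=/dev/null bs=1M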

1-63: You have an application hosted in AWS. This application was created using Cloudformation Templates and Autoscaling. Now your application has got a surge of users which is decreasing the performance of the application. As per your analysis, a change in the instance type to C3 would resolve the issue. Which of the below option can introduce this change while minimizing downtime for end users? Please select : A. Copy the old launch configuration, and create a new launch configuration with the C3 instances. Update the Auto Scaling group with the new launch configuration. Auto Scaling will then update the instance type of all running instances. B. Update the launch configuration in the AWS CloudFormation template with the new C3 instance type. Add an UpdatePolicy attribute to the Auto Scaling group that specifies an AutoScalingRollingUpdate. Run a stack update with the updated template. C. Update the existing launch configuration with the new C3 instance type. Add an UpdatePolicy attribute to your Auto Scaling group that specifies an AutoScaling RollingUpdate in order to avoid downtime. D. Update the AWS CloudFormation template that contains the launch configuration with the new C3 instance type. Run a stack update with the updated template, and Auto Scaling will then update the instances one at a time with the new instance type.

Answer - B Ensure first that the cloudformation template is updated with the new instance type. The AWS::AutoScaling::AutoScalingGroup resource supports an UpdatePolicy attribute. This is used to define how an Auto Scaling group resource is updated when an update to the CloudFormation stack occurs. A common approach to updating an Auto Scaling group is to perform a rolling update, which is done by specifying the AutoScalingRollingUpdate policy. This retains the same Auto Scaling group and replaces old instances with new ones, according to the parameters specified. Option A is invalid because this will cause an interruption to the users. Option C is partially correct, but it does not have all the steps as mentioned in option B. Option D is partially correct, but we need the AutoScalingRollingUpdate attribute to ensure a rolling update is performed.
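A minimal, illustrative template fragment showing where the UpdatePolicy attribute sits (the resource name and batch settings are placeholders, and the group's Properties are omitted for brevity):

    WebServerGroup:
      Type: AWS::AutoScaling::AutoScalingGroup
      UpdatePolicy:
        AutoScalingRollingUpdate:
          MinInstancesInService: 1
          MaxBatchSize: 1
          PauseTime: PT5M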

5-73: A company is running three production web server reserved EC2 instances with EBS-backed root volumes. These instances have a consistent CPU load of 80%. Traffic is being distributed to these instances by an Elastic Load Balancer. They also have production and development Multi-AZ RDS MySQL databases. What recommendation would you make to reduce cost in this environment without affecting availability of mission-critical systems? Choose the correct answer from the options given below Please select : A. Consider using on-demand instances instead of reserved EC2 instances B. Consider not using a Multi-AZ RDS deployment for the development database C. Consider using spot instances instead of reserved EC2 instances D. Consider removing the Elastic Load Balancer

Answer - B A Multi-AZ database is intended for production environments rather than development environments, so you can reduce costs by not using Multi-AZ for the development database. Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby (or to a read replica in the case of Amazon Aurora), so that you can resume database operations as soon as the failover is complete. Since the endpoint for your DB Instance remains the same after a failover, your application can resume database operation without the need for manual administrative intervention

4-26: You have implemented a system to automate deployments of your configuration and application dynamically after an Amazon EC2 instance in an Auto Scaling group is launched. Your system uses a configuration management tool that works in a standalone configuration, where there is no master node. Due to the volatility of application load, new instances must be brought into service within three minutes of the launch of the instance operating system. The deployment stages take the following times to complete: 1) Installing configuration management agent: 2mins 2) Configuring instance using artifacts: 4mins 3) Installing application framework: 15mins 4) Deploying application code: 1min What process should you use to automate the deployment using this type of standalone agent configuration? Please select : A. Configure your Auto Scaling launch configuration with an Amazon EC2 UserData script to install the agent, pull configuration artifacts and application code from an Amazon S3 bucket, and then execute the agent to configure the infrastructure and application. B. Build a custom Amazon Machine Image that includes all components pre-installed, including an agent, configuration artifacts, application frameworks, and code. Create a startup script that executes the agent to configure the system on startup. C. Build a custom Amazon Machine Image that includes the configuration management agent and application framework pre-installed. Configure your Auto Scaling launch configuration with an Amazon EC2 UserData script to pull configuration artifacts and application code from an Amazon S3 bucket, and then execute the agent to configure the system. D. Create a web service that polls the Amazon EC2 API to check for new instances that are launched in an Auto Scaling group. When it recognizes a new instance, execute a remote script via SSH to install the agent, SCP the configuration artifacts and application code, and finally execute the agent to configure the system

Answer - B Since the new instances need to be brought into service within three minutes, the best option is to pre-bake all the components into an AMI. If you use the User Data option, installing and configuring the various components at launch would, based on the times given in the question, take far longer than the three-minute window.

2-9: You are building out a layer in a software stack on AWS that needs to be able to scale out to react to increased demand as fast as possible. You are running the code on EC2 instances in an Auto Scaling Group behind an ELB. Which application code deployment method should you use? Please select : A. SSH into new instances that come online, and deploy new code onto the system by pulling it from an S3 bucket, which is populated by code that you refresh from source control on new pushes. B. Bake an AMI when deploying new versions of code, and use that AMI for the Auto Scaling Launch Configuration. C. Create a Dockerfile when preparing to deploy a new version to production and publish it to S3. Use UserData in the Auto Scaling Launch configuration to pull down the Dockerfile from S3 and run it when new instances launch. D. Create a new Auto Scaling Launch Configuration with UserData scripts configured to pull the latest code at all times.

Answer - B Since instances must come into service as quickly as possible, it is better to create an AMI than to rely on User Data. When you use User Data, the script runs during boot, which slows down scale-out. An Amazon Machine Image (AMI) provides the information required to launch an instance, which is a virtual server in the cloud. You specify an AMI when you launch an instance, and you can launch as many instances from the AMI as you need. You can also launch instances from as many different AMIs as you need.
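A minimal sketch of baking the AMI from a configured build instance (the instance ID and image name are placeholders):

    aws ec2 create-image \
        --instance-id i-0123456789abcdef0 \
        --name "app-release-v42" \
        --no-reboot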

2-39: Your team wants to begin practicing continuous delivery using CloudFormation, to enable automated builds and deploys of whole, versioned stacks or stack layers. You have a 3-tier, mission-critical system. Which of the following is NOT a best practice for using CloudFormation in a continuous delivery environment? Please select : A. Use the AWS CloudFormation ValidateTemplate call before publishing changes to AWS. B. Model your stack in one template, so you can leverage CloudFormation's state management and dependency resolution to propagate all changes. C. Use CloudFormation to create brand new infrastructure for all stateless resources on each push, and run integration tests on that set of infrastructure. D. Parametrize the template and use Mappings to ensure your template works in multiple Regions.

Answer - B Some of the best practices for CloudFormation are: Create nested stacks. As your infrastructure grows, common patterns can emerge in which you declare the same components in each of your templates. You can separate out these common components and create dedicated templates for them. That way, you can mix and match different templates but use nested stacks to create a single, unified stack. Nested stacks are stacks that create other stacks. To create nested stacks, use the AWS::CloudFormation::Stack resource in your template to reference other templates. Reuse templates. After you have your stacks and resources set up, you can reuse your templates to replicate your infrastructure in multiple environments. For example, you can create environments for development, testing, and production so that you can test changes before implementing them into production. To make templates reusable, use the parameters, mappings, and conditions sections so that you can customize your stacks when you create them. For example, for your development environments, you can specify a lower-cost instance type compared to your production environment, but all other configurations and settings remain the same.
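As a rough sketch of the nested-stack pattern, a parent template references a child template stored in S3 (the bucket, key, and resource name below are placeholders):

    Resources:
      NetworkStack:
        Type: AWS::CloudFormation::Stack
        Properties:
          TemplateURL: https://s3.amazonaws.com/example-templates/network.yaml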

3-2: Which of the following is not a rolling update type available for Configuration Updates in the Elastic Beanstalk service Please select : A. Rolling based on Health B. Rolling based on Instances C. Immutable D. Rolling based on time

Answer - B The AWS Documentation mentions
1) With health-based rolling updates, Elastic Beanstalk waits until instances in a batch pass health checks before moving on to the next batch.
2) For time-based rolling updates, you can configure the amount of time that Elastic Beanstalk waits after completing the launch of a batch of instances before moving on to the next batch. This pause time allows your application to bootstrap and start serving requests.
3) Immutable environment updates are an alternative to rolling updates that ensure that configuration changes that require replacing instances are applied efficiently and safely. If an immutable environment update fails, the rollback process requires only terminating an Auto Scaling group. A failed rolling update, on the other hand, requires performing an additional rolling update to roll back the changes.

2-76: Which of the following Cache Engines does Opswork have built in support for? Please select : A. Redis B. Memcache C. Both Redis and Memcache D. There is no built in support as of yet for any cache engine

Answer - B The AWS Documentation mentions AWS OpsWorks Stacks provides built-in support for Memcached. However, if Redis better suits your requirements, you can customize your stack so that your application servers use ElastiCache Redis.

5-1: What is the amount of time that the OpsWorks Stacks service waits for a response from an underlying instance before deeming it a failed instance? Please select : A. 1 minute. B. 5 minutes. C. 20 minutes. D. 60 minutes

Answer - B The AWS Documentation mentions Every instance has an AWS OpsWorks Stacks agent that communicates regularly with the service. AWS OpsWorks Stacks uses that communication to monitor instance health. If an agent does not communicate with the service for more than approximately five minutes, AWS OpsWorks Stacks considers the instance to have failed.

4-14: You are creating a CloudFormation template which takes in a database password as a parameter. How can you ensure that the password is not visible when anybody tries to describe the stack Please select : A. Use the password attribute for the resource B. Use the NoEcho property for the parameter value C. Use the hidden property for the parameter value D. Set the hidden attribute for the Cloudformation resource.

Answer - B The AWS Documentation mentions For sensitive parameter values (such as passwords), set the NoEcho property to true. That way, whenever anyone describes your stack, the parameter value is shown as asterisks (*****).
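A minimal parameter declaration illustrating NoEcho (the parameter name is illustrative):

    Parameters:
      DBPassword:
        Type: String
        NoEcho: true
        Description: Database password, masked as ***** when the stack is described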

2-57: An EC2 instance has failed a health check. What will the ELB do? Please select : A. The ELB will terminate the instance B. The ELB stops sending traffic to the instance that failed its health check C. The ELB does nothing D. The ELB will replace the instance

Answer - B The AWS Documentation mentions The load balancer routes requests only to the healthy instances. When the load balancer determines that an instance is unhealthy, it stops routing requests to that instance. The load balancer resumes routing requests to the instance when it has been restored to a healthy state.

4-1: You are in charge of designing a number of Cloudformation templates for your organization. You need to ensure that no one can update the stack production based resources. How can this be achieved in the most efficient way? Please select : A. Create tags for the resources and then create IAM policies to protect the resources. B. Use a Stack based policy to protect the production based resources. C. Use S3 bucket policies to protect the resources. D. Use MFA to protect the resources

Answer - B The AWS Documentation mentions When you create a stack, all update actions are allowed on all resources. By default, anyone with stack update permissions can update all of the resources in the stack. During an update, some resources might require an interruption or be completely replaced, resulting in new physical IDs or completely new storage. You can prevent stack resources from being unintentionally updated or deleted during a stack update by using a stack policy. A stack policy is a JSON document that defines the update actions that can be performed on designated resources.
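As a hedged sketch, a stack policy that denies updates to a single production resource while allowing everything else might look as follows (the logical resource ID and stack name are placeholders):

    {
      "Statement": [
        { "Effect": "Allow", "Action": "Update:*", "Principal": "*", "Resource": "*" },
        { "Effect": "Deny", "Action": "Update:*", "Principal": "*", "Resource": "LogicalResourceId/ProductionDatabase" }
      ]
    }

It can be applied with:

    aws cloudformation set-stack-policy --stack-name prod-stack --stack-policy-body file://stack-policy.json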

4-7: You are in charge of designing a number of Cloudformation templates for your organization. You are required to make changes to the stack resources every now and then based on the requirement. How can you check the impact of the change to resources in a cloudformation stack before deploying changes to the stack? Please select : A. There is no way to control this. You need to check for the impact beforehand. B. Use Cloudformation change sets to check for the impact to the changes. C. Use Cloudformation Stack Policies to check for the impact to the changes. D. Use Cloudformation Rolling Updates to check for the impact to the changes.

Answer - B The AWS Documentation mentions When you need to update a stack, understanding how your changes will affect running resources before you implement them can help you update stacks with confidence. Change sets allow you to preview how proposed changes to a stack might impact your running resources, for example, whether your changes will delete or replace any critical resources. AWS CloudFormation makes the changes to your stack only when you decide to execute the change set, allowing you to decide whether to proceed with your proposed changes or explore other changes by creating another change set. You can create and manage change sets using the AWS CloudFormation console, AWS CLI, or AWS CloudFormation API.
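A minimal sketch of the change-set workflow (stack, change set, and template names are placeholders):

    aws cloudformation create-change-set --stack-name my-stack --change-set-name preview-update --template-body file://updated-template.yaml
    aws cloudformation describe-change-set --stack-name my-stack --change-set-name preview-update
    aws cloudformation execute-change-set --stack-name my-stack --change-set-name preview-update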

4-35: When deploying applications to Elastic Beanstalk , which of the following statements is false with regards to application deployment Please select : A. The application can be bundled in a zip file B. Can include parent directories C. Should not exceed 512 MB in size D. Can be a war file which can be deployed to the application server

Answer - B The AWS Documentation mentions When you use the AWS Elastic Beanstalk console to deploy a new application or an application version, you'll need to upload a source bundle. Your source bundle must meet the following requirements:
· Consist of a single ZIP file or WAR file (you can include multiple WAR files inside your ZIP file)
· Not exceed 512 MB
· Not include a parent folder or top-level directory (subdirectories are fine)

4-28: Your team is responsible for an AWS Elastic Beanstalk application. The business requires that you move to a continuous deployment model, releasing updates to the application multiple times per day with zero downtime. What should you do to enable this and still be able to roll back almost immediately in an emergency to the previous version? Please select : A. Enable rolling updates in the Elastic Beanstalk environment, setting an appropriate pause time for application startup. B. Create a second Elastic Beanstalk environment running the new application version, and swap the environment CNAMEs. C. Develop the application to poll for a new application version in your code repository; download and install to each running Elastic Beanstalk instance. D. Create a second Elastic Beanstalk environment with the new application version, and configure the old environment to redirect clients, using the HTTP 301 response code, to the new environment

Answer - B The AWS Documentation mentions the below Because Elastic Beanstalk performs an in-place update when you update your application versions, your application may become unavailable to users for a short period of time. It is possible to avoid this downtime by performing a blue/green deployment, where you deploy the new version to a separate environment, and then swap CNAMEs of the two environments to redirect traffic to the new version instantly
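A hedged sketch of the CNAME swap step once the new environment is healthy (the environment names are placeholders):

    aws elasticbeanstalk swap-environment-cnames \
        --source-environment-name my-app-blue \
        --destination-environment-name my-app-green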

4-62: You recently encountered a major bug in your web application during a deployment cycle. During this failed deployment, it took the team four hours to roll back to a previously working state, which left customers with a poor user experience. During the post-mortem, you team discussed the need to provide a quicker, more robust way to roll back failed deployments. You currently run your web application on Amazon EC2 and use Elastic Load Balancing for your load balancing needs. Which technique should you use to solve this problem? Please select : A. Create deployable versioned bundles of your application. Store the bundle on Amazon S3. Re-deploy your web application on Elastic Beanstalk and enable the Elastic Beanstalk auto - rollback feature tied to CloudWatch metrics that define failure. B. Use an AWS OpsWorks stack to re-deploy your web application and use AWS OpsWorks DeploymentCommand to initiate a rollback during failures. C. Create deployable versioned bundles of your application. Store the bundle on Amazon S3. Use an AWS OpsWorks stack to redeploy your web application and use AWS OpsWorks application versioning to initiate a rollback during failures. D. Using Elastic BeanStalk redeploy your web application and use the Elastic BeanStalk API to trigger a FailedDeployment API call to initiate a rollback to the previous version.

Answer - B The AWS Documentation mentions the following The AWS OpsWorks DeploymentCommand has a rollback option. The following commands are available for apps to use: deploy: Deploy App. Ruby on Rails apps have an optional args parameter named migrate. Set Args to {"migrate":["true"]} to migrate the database. The default setting is {"migrate":["false"]}. rollback: Rolls the app back to the previous version. When you update an app, AWS OpsWorks stores the previous versions, up to a maximum of five. You can use this command to roll an app back as many as four versions.
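A minimal sketch of triggering the rollback command from the CLI (the stack and app IDs are placeholders):

    aws opsworks create-deployment \
        --stack-id 2f18b4cb-EXAMPLE \
        --app-id 4c8f96db-EXAMPLE \
        --command '{"Name":"rollback"}'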

2-68: Which of the below services can be used to deploy application code content stored in Amazon S3 buckets, GitHub repositories, or Bitbucket repositories Please select : A. CodeCommit B. CodeDeploy C. S3 Lifecycles D. Route53

Answer - B The AWS documentation mentions AWS CodeDeploy is a deployment service that automates application deployments to Amazon EC2 instances or on-premises instances in your own facility.

5-17: Your company is planning on using the available services in AWS to completely automate their integration, build and deployment process. They are planning on using AWS CodeBuild to build their artefacts. When using CodeBuild, which of the following files specifies a collection of build commands that can be used by the service during the build process. Please select : A. appspec.yml B. buildspec.yml C. buildspec.xml D. appspec.json

Answer - B The AWS documentation mentions the following AWS CodeBuild currently supports building from the following source code repository providers. The source code must contain a build specification (build spec) file, or the build spec must be declared as part of a build project definition. A build spec is a collection of build commands and related settings, in YAML format, that AWS CodeBuild uses to run a build.
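A minimal, illustrative buildspec.yml (the echo commands are placeholders; a real project would run its own build and test steps):

    version: 0.2
    phases:
      install:
        commands:
          - echo Installing dependencies
      build:
        commands:
          - echo Building the artifact
    artifacts:
      files:
        - '**/*'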

2-43: You meet once per month with your operations team to review the past month's data. During the meeting, you realize that 3 weeks ago, your monitoring system which pings over HTTP from outside AWS recorded a large spike in latency on your 3-tier web service API. You use DynamoDB for the database layer, ELB, EBS, and EC2 for the business logic tier, and SQS, ELB, and EC2 for the presentation layer. Which of the following techniques will NOT help you figure out what happened? Please select : A. Check your CloudTrail log history around the spike's time for any API calls that caused slowness. B. Review CloudWatch Metrics for one minute interval graphs to determine which component(s) slowed the system down. C. Review your ELB access logs in S3 to see if any ELBs in your system saw the latency. D. Analyze your logs to detect bursts in traffic at that time.

Answer - B One-minute data points are retained for only 15 days, so three weeks after the spike, graphs at a one-minute interval are no longer available in CloudWatch. The CloudWatch metric retention is as follows:
Data points with a period of less than 60 seconds are available for 3 hours. These data points are high-resolution custom metrics.
Data points with a period of 60 seconds (1 minute) are available for 15 days.
Data points with a period of 300 seconds (5 minutes) are available for 63 days.
Data points with a period of 3600 seconds (1 hour) are available for 455 days (15 months).

2-28: There is a very serious outage at AWS. EC2 is not affected, but your EC2 instance deployment scripts stopped working in the region with the outage. What might be the issue? Please select : A. The AWS Console is down, so your CLI commands do not work. B. S3 is unavailable, so you can't create EBS volumes from a snapshot you use to deploy new volumes. C. AWS turns off the DeployCode API call when there are major outages, to protect from system floods. D. None of the other answers make sense. If EC2 is not affected, it must be some other issue.

Answer - B EBS snapshots are stored in S3, so if you have scripts which deploy EC2 Instances, the EBS volumes need to be constructed from snapshots stored in S3. You can back up the data on your Amazon EBS volumes to Amazon S3 by taking point-in-time snapshots. Snapshots are incremental backups, which means that only the blocks on the device that have changed after your most recent snapshot are saved. This minimizes the time required to create the snapshot and saves on storage costs by not duplicating data. When you delete a snapshot, only the data unique to that snapshot is removed. Each snapshot contains all of the information needed to restore your data (from the moment when the snapshot was taken) to a new EBS volume.

3-68: Your firm has uploaded a large amount of aerial image data to S3. In the past, in your on-premises environment, you used a dedicated group of servers to process this data and used Rabbit MQ - An open source messaging system to get job information to the servers. Once processed the data would go to tape and be shipped offsite. Your manager told you to stay with the current design, and leverage AWS archival storage and messaging services to minimize cost. Which is correct? Please select : A. Use SQS for passing job messages. Use Cloud Watch alarms to terminate EC2 worker instances when they become idle. Once data is processed, change the storage class of the S3 objects to Reduced Redundancy Storage. B. Setup Auto-Scaled workers triggered by queue depth that use spot instances to process messages in SQS. Once data is processed, change the storage class of the S3 objects to Glacier C. Change the storage class of the S3 objects to Reduced Redundancy Storage. Setup Auto-Scaled workers triggered by queue depth that use spot instances to process messages in SQS. Once data is processed, change the storage class of the S3 objects to Glacier. D. Use SNS to pass job messages use Cloud Watch alarms to terminate spot worker instances when they become idle. Once data is processed, change the storage class of the S3 object to Glacier.

Answer - B The best option for reducing costs is Glacier, since in the on-premises environment everything was stored on tape anyway; this rules out option A. Next, SQS should be used, since RabbitMQ was used internally; this rules out option D. The first step is to leave the objects in S3 in their current storage class and not tamper with them, which makes option B more suitable than option C.

2-23: Your application's Auto Scaling Group scales up too quickly, too much, and stays scaled when traffic decreases. What should you do to fix this? Please select : A. Set a longer cooldown period on the Group, so the system stops overshooting the target capacity. The issue is that the scaling system doesn't allow enough time for new instances to begin servicing requests before measuring aggregate load again. B. Calculate the bottleneck or constraint on the compute layer, then select that as the new metric, and set the metric thresholds to the bounding values that begin to affect response latency. C. Raise the CloudWatch Alarms threshold associated with your autoscaling group, so the scaling takes more of an increase in demand before beginning. D. Use larger instances instead of lots of smaller ones, so the Group stops scaling out so much and wasting resources at the OS level, since the OS uses a higher proportion of resources on smaller instances.

Answer - B The likely cause is that the wrong metric is being used for the scaling decisions. Option A is not valid because the group stays scaled after traffic decreases, which points to the scale-down alarm threshold never being crossed rather than to an insufficient cooldown period. Option C is not valid because raising the CloudWatch alarm threshold will not ensure that the instances scale down when the traffic decreases. Option D is not valid because the question does not mention any constraint that points to the instance size.

5-7: You are a Devops Engineer for your company. The company has a number of Cloudformation templates in AWS. There is a concern from the IT Security department and they want to know who all use the Cloudformation stacks in the company's AWS account. Which of the following can be done to take care of this security concern? Please select : A. Enable Cloudwatch events for each cloudformation stack to track the resource creation events. B. Enable Cloudtrail logs so that the API calls can be recorded C. Enable Cloudwatch logs for each cloudformation stack to track the resource creation events. D. Connect SQS and Cloudformation so that a message is published for each resource created in the Cloudformation stack.

Answer - B This is given as a best practice in the AWS documentation AWS CloudTrail tracks anyone making AWS CloudFormation API calls in your AWS account. API calls are logged whenever anyone uses the AWS CloudFormation API, the AWS CloudFormation console, a back-end console, or AWS CloudFormation AWS CLI commands. Enable logging and specify an Amazon S3 bucket to store the logs. That way, if you ever need to, you can audit who made what AWS CloudFormation call in your account
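A hedged sketch of enabling a trail and then searching for CloudFormation API activity (the trail and bucket names are placeholders, and the bucket needs a policy allowing CloudTrail to write to it):

    aws cloudtrail create-trail --name cfn-audit --s3-bucket-name my-cloudtrail-logs
    aws cloudtrail start-logging --name cfn-audit
    aws cloudtrail lookup-events \
        --lookup-attributes AttributeKey=EventSource,AttributeValue=cloudformation.amazonaws.com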

2-4: You are designing a service that aggregates clickstream data in batch and delivers reports to subscribers via email only once per week. Data is extremely spikey, geographically distributed, high-scale, and unpredictable. How should you design this system? Please select : A. Use a large RedShift cluster to perform the analysis, and a fleet of Lambdas to perform record inserts into the RedShift tables. Lambda will scale rapidly enough for the traffic spikes. B. Use a CloudFront distribution with access log delivery to S3. Clicks should be recorded as querystring GETs to the distribution. Reports are built and sent by periodically running EMR jobs over the access logs in S3. C. Use API Gateway invoking Lambdas which PutRecords into Kinesis, and EMR running Spark performing GetRecords on Kinesis to scale with spikes. Spark on EMR outputs the analysis to S3, which are sent out via email. D. Use AWS Elasticsearch service and EC2 Auto Scaling groups. The Autoscaling groups scale based on click throughput and stream into the Elasticsearch domain, which is also scalable. Use Kibana to generate reports periodically.

Answer - B When you look at building reports or analyzing data from a large data set, you need to consider EMR, because this service is built on the Hadoop framework, which is used to process large data sets. The ideal approach to getting data onto EMR is to use S3. Since the data is extremely spiky and geographically distributed, using edge locations via CloudFront distributions is the best way to capture it. Option A is invalid because RedShift is more of a petabyte-scale data warehouse. Option C is invalid because having both Kinesis and EMR for the analysis job is redundant. Option D is invalid because Elasticsearch is not an option for processing records.

5-8: Your company has a number of Cloudformation stacks defined in AWS. As part of the routine housekeeping activity, a number of stacks have been targeted for deletion. But a few of the stacks are not getting deleted and are failing when you are trying to delete them. Which of the following could be valid reasons for this? Choose 2 answers from the options given below Please select : A. The stacks were created with the wrong template version. Since the standard template version is now higher, it is preventing the deletion of the stacks. You need to contact AWS support. B. The stack has an S3 bucket defined which has objects present in it. C. The stack has an EC2 security group which has EC2 Instances attached to it. D. The stack consists of an EC2 resource which was created with a custom AMI.

Answer - B and C The AWS documentation mentions the below point Some resources must be empty before they can be deleted. For example, you must delete all objects in an Amazon S3 bucket or remove all instances in an Amazon EC2 security group before you can delete the bucket or security group
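A minimal sketch of emptying a stack's bucket before retrying the deletion (the bucket and stack names are placeholders):

    aws s3 rm s3://my-stack-bucket --recursive
    aws cloudformation delete-stack --stack-name my-stack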

1-21: You are a DevOps engineer for a company. You have been requested to create a rolling deployment solution that is cost-effective with minimal downtime. How should you achieve this? Choose two answers from the options below Please select : A. Re-deploy your application using a CloudFormation template to deploy Elastic Beanstalk B. Re-deploy with a CloudFormation template, define update policies on Auto Scaling groups in your CloudFormation template C. Use update stack policies with CloudFormation to deploy new code D. After each stack is deployed, tear down the old stack

Answer - B and C The AWS::AutoScaling::AutoScalingGroup resource supports an UpdatePolicy attribute. This is used to define how an Auto Scaling group resource is updated when an update to the CloudFormation stack occurs. A common approach to updating an Auto Scaling group is to perform a rolling update, which is done by specifying the AutoScalingRollingUpdate policy. This retains the same Auto Scaling group and replaces old instances with new ones, according to the parameters specified. Option A is invalid because it is not efficient to use CloudFormation just to deploy Elastic Beanstalk. Option D is invalid because tearing down old stacks is an inefficient process when stack update policies are available.

3-25: There is a requirement for an application hosted on a VPC to access the On-premise LDAP server. The VPC and the On-premise location are connected via an IPSec VPN. Which of the below are the right options for the application to authenticate each user. Choose 2 answers from the options below Please select : A. Develop an identity broker that authenticates against IAM security Token service to assume a IAM role in order to get temporary AWS security credentials The application calls the identity broker to get AWS temporary security credentials. B. The application authenticates against LDAP and retrieves the name of an IAM role associated with the user. The application then calls the IAM Security Token Service to assume that IAM role. The application can use the temporary credentials to access any AWS resources. C. Develop an identity broker that authenticates against LDAP and then calls IAM Security Token Service to get IAM federated user credentials. The application calls the identity broker to get IAM federated user credentials with access to the appropriate AWS service. D. The application authenticates against LDAP the application then calls the AWS identity and Access Management (IAM) Security service to log in to IAM using the LDAP credentials the application can use the IAM temporary credentials to access the appropriate AWS service.

Answer - B and C When you need an on-premises environment to work with a cloud environment, you would normally have 2 artefacts for authentication purposes: An identity store - This is the on-premises store, such as Active Directory, which stores all the information for the users and the groups they belong to. An identity broker - This is used as an intermediate agent between the on-premises location and the cloud environment. In Windows, you have a system known as Active Directory Federation Services to provide this facility. Hence in the above case, you need an identity broker which can work with the identity store and the Security Token Service in AWS.

1-13: During metric analysis, your team has determined that the company's website is experiencing response times during peak hours that are higher than anticipated. You currently rely on Auto Scaling to make sure that you are scaling your environment during peak windows. How can you improve your Auto Scaling policy to reduce this high response time? Choose 2 answers. Please select : A. Push custom metrics to CloudWatch to monitor your CPU and network bandwidth from your servers, which will allow your Auto Scaling policy to have better fine-grain insight. B. Increase your Auto Scaling group's number of max servers. C. Create a script that runs and monitors your servers; when it detects an anomaly in load, it posts to an Amazon SNS topic that triggers Elastic Load Balancing to add more servers to the load balancer. D. Push custom metrics to CloudWatch for your application that include more detailed information about your web application, such as how many requests it is handling and how many are waiting to be processed.

Answer - B and D Option B makes sense because the group's maximum size may be set too low, preventing the group from scaling out far enough to handle the peak load. Option D helps by ensuring Auto Scaling can scale the group on metrics that better reflect the application's real workload.
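A minimal sketch of option D, publishing an application-level custom metric that a scaling policy's alarm could track (the namespace, metric name, and value are placeholders):

    aws cloudwatch put-metric-data \
        --namespace "WebApp" \
        --metric-name PendingRequests \
        --value 42 --unit Count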

5-23: Which of the following are ways to ensure that data is secured while in transit when using the AWS Elastic load balancer. Choose 2 answers from the options given below Please select : A. Use a TCP front end listener for your ELB B. Use an SSL front end listener for your ELB C. Use an HTTP front end listener for your ELB D. Use an HTTPS front end listener for your ELB

Answer - B and D The AWS documentation mentions the following You can create a load balancer that uses the SSL/TLS protocol for encrypted connections (also known as SSL offload). This feature enables traffic encryption between your load balancer and the clients that initiate HTTPS sessions, and for connections between your load balancer and your EC2 instances.
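A hedged sketch of creating a Classic Load Balancer with an HTTPS front-end listener (the load balancer name, certificate ARN, and zone are placeholders):

    aws elb create-load-balancer \
        --load-balancer-name my-lb \
        --listeners "Protocol=HTTPS,LoadBalancerPort=443,InstanceProtocol=HTTP,InstancePort=80,SSLCertificateId=arn:aws:iam::123456789012:server-certificate/my-cert" \
        --availability-zones us-east-1a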

1-17: You have a code repository that uses Amazon S3 as a data store. During a recent audit of your security controls, some concerns were raised about maintaining the integrity of the data in the Amazon S3 bucket. Another concern was raised around securely deploying code from Amazon S3 to applications running on Amazon EC2 in a virtual private cloud. What are some measures that you can implement to mitigate these concerns? Choose two answers from the options given below. Please select : A. Add an Amazon S3 bucket policy with a condition statement to allow access only from Amazon EC2 instances with RFC 1918 IP addresses and enable bucket versioning. B. Add an Amazon S3 bucket policy with a condition statement that requires multi-factor authentication in order to delete objects and enable bucket versioning. C. Use a configuration management service to deploy AWS Identity and Access Management user credentials to the Amazon EC2 instances. Use these credentials to securely access the Amazon S3 bucket when deploying code. D. Create an Amazon Identity and Access Management role with authorization to access the Amazon S3 bucket, and launch all of your application's Amazon EC2 instances with this role. E. Use AWS Data Pipeline to lifecycle the data in your Amazon S3 bucket to Amazon Glacier on a weekly basis. F. Use AWS Data Pipeline with multi-factor authentication to securely deploy code from the Amazon S3 bucket to your Amazon EC2 instances.

Answer - B and D You can add another layer of protection by enabling MFA Delete on a versioned bucket. Once you do so, you must provide your AWS account's access keys and a valid code from the account's MFA device in order to permanently delete an object version or suspend or reactivate versioning on the bucket. For more information on MFA please refer to the below link: https://aws.amazon.com/blogs/security/securing-access-to-aws-using-mfa-part-3/ IAM roles are designed so that your applications can securely make API requests from your instances, without requiring you to manage the security credentials that the applications use. Instead of creating and distributing your AWS credentials, you can delegate permission to make API requests using IAM roles For more information on Roles for EC2 please refer to the below link: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html Option A is invalid because this will not address either the integrity or security concern completely. Option C is invalid because user credentials should never be used in EC2 instances to access AWS resources. Option E and F are invalid because AWS Data Pipeline is an unnecessary overhead when you already have inbuilt controls to manage security for S3.
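A hedged sketch of enabling versioning with MFA Delete on the bucket (the bucket name, MFA device ARN, and code are placeholders; this call must be made by the account's root user):

    aws s3api put-bucket-versioning \
        --bucket my-code-bucket \
        --versioning-configuration Status=Enabled,MFADelete=Enabled \
        --mfa "arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456"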

4-9: Which of the following are true with regard to OpsWorks stack instances? Choose 3 answers from the options given below. Please select : A. A stack's instances can be a combination of both Linux and Windows based operating systems. B. You can use EC2 Instances that were created outside the boundary of OpsWorks. C. You can use instances running on your own hardware. D. You can start and stop instances manually

Answer - B,C and D The AWS Documentation mentions the following:
1) You can start and stop instances manually or have AWS OpsWorks Stacks automatically scale the number of instances. You can use time-based automatic scaling with any stack; Linux stacks also can use load-based scaling.
2) In addition to using AWS OpsWorks Stacks to create Amazon EC2 instances, you can also register instances with a Linux stack that were created outside of AWS OpsWorks Stacks. This includes Amazon EC2 instances and instances running on your own hardware. However, they must be running one of the supported Linux distributions. You cannot register Amazon EC2 or on-premises Windows instances.
3) A stack's instances can run either Linux or Windows. A stack can have different Linux versions or distributions on different instances, but you cannot mix Linux and Windows instances.

1-38: You use Amazon CloudWatch as your primary monitoring system for your web application. After a recent software deployment, your users are getting Intermittent 500 Internal Server Errors when using the web application. You want to create a CloudWatch alarm, and notify an on-call engineer when these occur. How can you accomplish this using AWS services? Choose three answers from the options given below Please select : A. Deploy your web application as an AWS Elastic Beanstalk application. Use the default Elastic Beanstalk Cloudwatch metrics to capture 500 Internal Server Errors. Set a CloudWatch alarm on that metric. B. Install a CloudWatch Logs Agent on your servers to stream web application logs to CloudWatch. C. Use Amazon Simple Email Service to notify an on-call engineer when a CloudWatch alarm is triggered. D. Create a CloudWatch Logs group and define metric filters that capture 500 Internal Server Errors. Set a CloudWatch alarm on that metric. E. Use Amazon Simple Notification Service to notify an on-call engineer when a CloudWatch alarm is triggered.

Answer - B,D and E You can use CloudWatch Logs to monitor applications and systems using log data CloudWatch Logs uses your log data for monitoring; so, no code changes are required. For example, you can monitor application logs for specific literal terms (such as "NullReferenceException") or count the number of occurrences of a literal term at a particular position in log data (such as "404" status codes in an Apache access log). When the term you are searching for is found, CloudWatch Logs reports the data to a CloudWatch metric that you specify. Log data is encrypted while in transit and while it is at rest For more information on Cloudwatch logs please refer to the below link: http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html Amazon CloudWatch uses Amazon SNS to send email. First, create and subscribe to an SNS topic. When you create a CloudWatch alarm, you can add this SNS topic to send an email notification when the alarm changes state. For more information on SNS and Cloudwatch logs please refer to the below link: http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/US_SetupSNS.html
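A minimal sketch of wiring the pieces together (the log group, namespace, filter pattern, and SNS topic ARN are placeholders):

    aws logs put-metric-filter \
        --log-group-name my-app-logs \
        --filter-name 500-errors \
        --filter-pattern '"500 Internal Server Error"' \
        --metric-transformations metricName=InternalServerErrors,metricNamespace=WebApp,metricValue=1

    aws cloudwatch put-metric-alarm \
        --alarm-name 500-errors-alarm \
        --namespace WebApp --metric-name InternalServerErrors \
        --statistic Sum --period 60 --threshold 1 \
        --comparison-operator GreaterThanOrEqualToThreshold \
        --evaluation-periods 1 \
        --alarm-actions arn:aws:sns:us-east-1:123456789012:oncall-topic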

4-20: Which of the following CLI commands can be used to describe the stack resources? Please select : A. aws cloudformation describe-stack B. aws cloudformation describe-stack-resources C. aws cloudformation list-stack-resources D. aws cloudformation list-stack

Answer - C
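For reference, a sketch of the command (the stack name is a placeholder); it returns a summary of every resource in the stack, including each resource's status:

    $ aws cloudformation list-stack-resources --stack-name my-lamp-stack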

4-53: You have defined a Linux based instance stack in OpsWorks. You now want to attach a database to the OpsWorks stack. Which of the below is an important step to ensure that the application on the Linux instances can communicate with the database? Please select : A. Add another stack with the database layer and attach it to the application stack. B. Configure SSL so that the instance can communicate with the database C. Add the appropriate driver packages to ensure the application can work with the database D. Configure database tags for the OpsWorks application layer

Answer - C

2-7: You have deployed a CloudFormation template which is used to spin up resources in your account. Which of the following statuses in CloudFormation represents a failure? Please select : A. UPDATE_COMPLETE_CLEANUP_IN_PROGRESS B. DELETE_COMPLETE_WITH_ARTIFACTS C. ROLLBACK_IN_PROGRESS D. ROLLBACK_FAILED

Answer - C AWS CloudFormation provisions and configures resources by making calls to the AWS services that are described in your template. After all the resources have been created, AWS CloudFormation reports that your stack has been created. You can then start using the resources in your stack. If stack creation fails, AWS CloudFormation rolls back your changes by deleting the resources that it created. A status of ROLLBACK_IN_PROGRESS therefore indicates that stack creation has already failed and CloudFormation is deleting the resources it created.

2-45: Your serverless architecture using AWS API Gateway, AWS Lambda, and AWS DynamoDB experienced a large increase in traffic to a sustained 2000 requests per second, and failure rates increased dramatically. Your requests, during normal operation, last 500 milliseconds on average. Your DynamoDB table did not exceed 50% of provisioned throughput, and table primary keys are designed correctly. What is the most likely issue? Please select : A. Your API Gateway deployment is throttling your requests. B. Your AWS API Gateway Deployment is bottlenecking on request (de)serialization. C. You did not request a limit increase on concurrent Lambda function executions. D. You used Consistent Read requests on DynamoDB and are experiencing semaphore lock.

Answer - C Every Lambda function is allocated a fixed amount of certain resources regardless of its memory allocation, and each account has fixed limits such as total code storage. By default, AWS Lambda limits the total concurrent executions across all functions within a given region to 1000. At a sustained 2000 requests per second with an average duration of 500 milliseconds, the application needs roughly 2000 x 0.5 = 1000 concurrent executions, which sits right at the default limit, so additional requests are throttled until a limit increase is requested.
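As a quick check, the regional concurrency limit can be read from the AccountLimit section of the response to:

    $ aws lambda get-account-settings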

1-35: After reviewing the last quarter's monthly bills, management has noticed an increase in the overall bill from Amazon. After researching this increase in cost, you discovered that one of your new services is doing a lot of GET Bucket API calls to Amazon S3 to build a metadata cache of all objects in the application's bucket. Your boss has asked you to come up with a new cost-effective way to help reduce the amount of these new GET Bucket API calls. What process should you use to help mitigate the cost? Please select : A. Update your Amazon S3 buckets' lifecycle policies to automatically push a list of objects to a new bucket, and use this list to view objects associated with the application's bucket. B. Create a new DynamoDB table. Use the new DynamoDB table to store all metadata about all objects uploaded to Amazon S3. Any time a new object is uploaded, update the application's internal Amazon S3 object metadata cache from DynamoDB. C. Using Amazon SNS, create a notification on any new Amazon S3 objects that automatically updates a new DynamoDB table to store all metadata about the new object. Subscribe the application to the Amazon SNS topic to update its internal Amazon S3 object metadata cache from the DynamoDB table. D. Upload all files to an ElastiCache file cache server. Update your application to now read all file metadata from the ElastiCache file cache server, and configure the ElastiCache policies to push all files to Amazon S3 for long-term storage.

Answer - C Option A is an invalid option since lifecycle policies are normally used for expiration or archival of objects, not for publishing object lists. Option B is partially correct in that the metadata is stored in DynamoDB, but the number of GET requests would still be high if the entire DynamoDB table had to be traversed and each object compared and updated in S3. Option D is invalid because uploading all files to ElastiCache is not an ideal solution. The best option is to have a notification which triggers an update to the DynamoDB table, from which the application then refreshes its internal metadata cache.
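A rough sketch of the notification wiring behind option C (the bucket name and topic ARN are placeholders):

    $ cat notification.json
    {
      "TopicConfigurations": [
        {
          "TopicArn": "arn:aws:sns:us-east-1:123456789012:new-object-topic",
          "Events": ["s3:ObjectCreated:*"]
        }
      ]
    }
    $ aws s3api put-bucket-notification-configuration \
        --bucket my-app-bucket --notification-configuration file://notification.json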

4-59: You have an ELB on AWS which has a set of web servers behind it. There is a requirement that the SSL key used to encrypt data is always kept secure. Secondly, the logs of the ELB should only be decrypted by a subset of users. Which of these architectures meets all of the requirements? Please select : A. Use Elastic Load Balancing to distribute traffic to a set of web servers. To protect the SSL private key, upload the key to the load balancer and configure the load balancer to offload the SSL traffic. Write your web server logs to an ephemeral volume that has been encrypted using a randomly generated AES key. B. Use Elastic Load Balancing to distribute traffic to a set of web servers. Use TCP load balancing on the load balancer and configure your web servers to retrieve the private key from a private Amazon S3 bucket on boot. Write your web server logs to a private Amazon S3 bucket using Amazon S3 server-side encryption. C. Use Elastic Load Balancing to distribute traffic to a set of web servers, configure the load balancer to perform TCP load balancing, use an AWS CloudHSM to perform the SSL transactions, and write your web server logs to a private Amazon S3 bucket using Amazon S3 server-side encryption. D. Use Elastic Load Balancing to distribute traffic to a set of web servers. Configure the load balancer to perform TCP load balancing, use an AWS CloudHSM to perform the SSL transactions, and write your web server logs to an ephemeral volume that has been encrypted using a randomly generated AES key.

Answer - C The AWS CloudHSM service helps you meet corporate, contractual and regulatory compliance requirements for data security by using dedicated Hardware Security Module (HSM) appliances within the AWS cloud. With CloudHSM, you control the encryption keys and cryptographic operations performed by the HSM. Option D also uses CloudHSM but is wrong because it writes the logs to an ephemeral volume, which is temporary storage that is lost when the instance stops.

4-36: You have an Autoscaling Group which is launching a set of t2.small instances. You now need to replace those instances with a larger instance type. How would you go about making this change in an ideal manner? Please select : A. Change the Instance type in the current launch configuration to the new instance type. B. Create another Autoscaling Group and attach the new instance type. C. Create a new launch configuration with the new instance type and update your Autoscaling Group. D. Change the Instance type of the Underlying EC2 instance directly.

Answer - C The AWS Documentation mentions A launch configuration is a template that an Auto Scaling group uses to launch EC2 instances. When you create a launch configuration, you specify information for the instances such as the ID of the Amazon Machine Image (AMI), the instance type, a key pair, one or more security groups, and a block device mapping. If you've launched an EC2 instance before, you specified the same information in order to launch the instance. When you create an Auto Scaling group, you must specify a launch configuration. You can specify your launch configuration with multiple Auto Scaling groups. However, you can only specify one launch configuration for an Auto Scaling group at a time, and you can't modify a launch configuration after you've created it. Therefore, if you want to change the launch configuration for your Auto Scaling group, you must create a launch configuration and then update your Auto Scaling group with the new launch configuration.
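A sketch of the two CLI calls involved (all names and IDs are placeholders):

    $ aws autoscaling create-launch-configuration \
        --launch-configuration-name web-lc-v2 \
        --image-id ami-12345678 --instance-type t2.large \
        --security-groups sg-12345678 --key-name my-key
    $ aws autoscaling update-auto-scaling-group \
        --auto-scaling-group-name web-asg \
        --launch-configuration-name web-lc-v2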

5-28: Your company has an application hosted in AWS which makes use of DynamoDB. There is a requirement from the IT security department to ensure that all source IP addresses which make calls to the DynamoDB tables are recorded. Which of the following services can be used to ensure this requirement is fulfilled? Please select : A. AWS Code Commit B. AWS Code Pipeline C. AWS CloudTrail D. AWS Cloudwatch

Answer - C The AWS Documentation mentions the following DynamoDB is integrated with CloudTrail, a service that captures low-level API requests made by or on behalf of DynamoDB in your AWS account and delivers the log files to an Amazon S3 bucket that you specify. CloudTrail captures calls made from the DynamoDB console or from the DynamoDB low-level API. Using the information collected by CloudTrail, you can determine what request was made to DynamoDB, the source IP address from which the request was made, who made the request, when it was made, and so on.
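As a quick sketch, a trail can be created and started from the CLI (names are placeholders); the target S3 bucket must already carry a policy that allows CloudTrail to write to it:

    $ aws cloudtrail create-trail --name dynamodb-audit --s3-bucket-name my-trail-logs
    $ aws cloudtrail start-logging --name dynamodb-audit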

4-54: You have an OpsWorks stack defined with Linux instances. You have executed a recipe, but the execution has failed. What is one way to diagnose why the recipe did not execute correctly? Please select : A. Use AWS Cloudtrail and check the OpsWorks logs to diagnose the error B. Use AWS Config and check the OpsWorks logs to diagnose the error C. Log into the instance and check if the recipe was properly configured. D. Deregister the instance and check the EC2 Logs

Answer - C The AWS Documentation mentions the following If a recipe fails, the instance will end up in the setup_failed state instead of online. Even though the instance is not online as far as AWS OpsWorks Stacks is concerned, the EC2 instance is running and it's often useful to log in to troubleshoot the issue. For example, you can check whether an application or custom cookbook is correctly installed. The AWS OpsWorks Stacks built-in support for SSH and RDP login is available only for instances in the online state.

1-39: You are using CloudFormation to launch an EC2 instance and then configure an application after the instance is launched. You need the stack creation of the ELB and Auto Scaling to wait until the EC2 instance is launched and configured properly. How do you do this? Please select : A. It is not possible for the stack creation to wait until one service is created and launched B. Use the WaitCondition resource to hold the creation of the other dependent resources C. Use a CreationPolicy to wait for the creation of the other dependent resources D. Use the HoldCondition resource to hold the creation of the other dependent resources

Answer - C When you provision an Amazon EC2 instance in an AWS CloudFormation stack, you might specify additional actions to configure the instance, such as install software packages or bootstrap applications. Normally, CloudFormation proceeds with stack creation after the instance has been successfully created. However, you can use a CreationPolicy so that CloudFormation proceeds with stack creation only after your configuration actions are done. That way you'll know your applications are ready to go after stack creation succeeds. A CreationPolicy instructs CloudFormation to wait on an instance until CloudFormation receives the specified number of signals. Option A is invalid because this is possible. Option B is invalid because this is used to make AWS CloudFormation pause the creation of a stack and wait for a signal before it continues to create the stack
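A minimal template fragment showing the idea (the AMI ID is a placeholder); the ELB and Auto Scaling resources can then declare DependsOn: AppServer so that they are only created after the instance has signaled success:

    Resources:
      AppServer:
        Type: AWS::EC2::Instance
        CreationPolicy:
          ResourceSignal:
            Count: 1
            Timeout: PT15M
        Properties:
          ImageId: ami-12345678
          InstanceType: t2.micro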

3-45: An organization is planning to use AWS for their production roll out. The organization wants to implement automation for deployment such that it will automatically create a LAMP stack, download the latest PHP installable from S3 and set up the ELB. Which of the below mentioned AWS services meets the requirement for making an orderly deployment of the software? Please select : A. AWS Elastic Beanstalk B. AWS CloudFront C. AWS CloudFormation D. AWS DevOps

Answer - C When you want to automate deployment, the automatic choice is CloudFormation. Below is the excerpt from AWS on CloudFormation. AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion. You can use AWS CloudFormation's sample templates or create your own templates to describe the AWS resources, and any associated dependencies or runtime parameters, required to run your application. You don't need to figure out the order for provisioning AWS services or the subtleties of making those dependencies work. CloudFormation takes care of this for you. After the AWS resources are deployed, you can modify and update them in a controlled and predictable way, in effect applying version control to your AWS infrastructure the same way you do with your software.

1-58: You have deployed an Elastic Beanstalk application in a new environment and want to save the current state of your environment in a document. You want to be able to restore your environment to the current state later or possibly create a new environment. You also want to make sure you have a restore point. How can you achieve this? Please select : A. Use CloudFormation templates B. Configuration Management Templates C. Saved Configurations D. Saved Templates

Answer - C You can save your environment's configuration as an object in Amazon S3 that can be applied to other environments during environment creation, or applied to a running environment. Saved configurations are YAML formatted templates that define an environment's platform configuration, tier, configuration option settings, and tags.
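For example, using the EB CLI (environment and configuration names are placeholders), the running environment's state can be captured as a saved configuration:

    $ eb config save my-env --cfg prod-baseline-v1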

3-69: Your company has recently extended its datacenter into a VPC on AWS. There is a requirement for on-premises users to manage AWS resources from the AWS console. You don't want to create IAM users for them again. Which of the below options will fit your needs for authentication? Please select : A. Use OAuth 2.0 to retrieve temporary AWS security credentials to enable your members to sign in to the AWS Management Console. B. Use Web Identity Federation to retrieve AWS temporary security credentials to enable your members to sign in to the AWS Management Console. C. Use your on-premises SAML 2.0-compliant identity provider (IDP) to grant the members federated access to the AWS Management Console via the AWS single sign-on (SSO) endpoint. D. Use your on-premises SAML 2.0-compliant identity provider (IDP) to retrieve temporary security credentials to enable members to sign in to the AWS Management Console.

Answer - C You can use a role to configure your SAML 2.0-compliant IdP and AWS to permit your federated users to access the AWS Management Console. The role grants the user permissions to carry out tasks in the console.

1-51: You are using Elastic Beanstalk to manage your application. You have a SQL script that needs to only be executed once per deployment no matter how many EC2 instances you have running. How can you do this? Please select : A. Use a "Container command" within an Elastic Beanstalk configuration file to execute the script, ensuring that the "leader only" flag is set to false. B. Use Elastic Beanstalk version and a configuration file to execute the script, ensuring that the "leader only" flag is set to true. C. Use a "Container command" within an Elastic Beanstalk configuration file to execute the script, ensuring that the "leader only" flag is set to true. D. Use a "leader command" within an Elastic Beanstalk configuration file to execute the script, ensuring that the "container only" flag is set to true.

Answer - C You can use the container_commands key to execute commands that affect your application source code. Container commands run after the application and web server have been set up and the application version archive has been extracted, but before the application version is deployed. Non-container commands and other customization operations are performed prior to the application source code being extracted. You can use leader_only to only run the command on a single instance, or configure a test to only run the command when a test command evaluates to true. Leader-only container commands are only executed during environment creation and deployments, while other commands and server customization operations are performed every time an instance is provisioned or updated. Leader-only container commands are not executed due to launch configuration changes, such as a change in the AMI Id or instance type.
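A minimal .ebextensions sketch (the file and script names are hypothetical):

    # .ebextensions/run-sql.config
    container_commands:
      01_run_sql_script:
        command: "bash scripts/run-once.sh"
        leader_only: true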

1-43: After a daily scrum with your development teams, you've agreed that using Blue/Green style deployments would benefit the team. Which technique should you use to deliver this new requirement? Please select : A. Re-deploy your application on AWS Elastic Beanstalk, and take advantage of Elastic Beanstalk deployment types. B. Using an AWS CloudFormation template, re-deploy your application behind a load balancer, launch a new AWS CloudFormation stack during each deployment, update your load balancer to send half your traffic to the new stack while you test, after verification update the load balancer to send 100% of traffic to the new stack, and then terminate the old stack. C. Re-deploy your application behind a load balancer that uses Auto Scaling groups, create a new identical Auto Scaling group, and associate it to the load balancer. During deployment, set the desired number of instances on the old Auto Scaling group to zero, and when all instances have terminated, delete the old Auto Scaling group. D. Using an AWS OpsWorks stack, re-deploy your application behind an Elastic Load Balancing load balancer and take advantage of OpsWorks stack versioning, during deployment create a new version of your application, tell OpsWorks to launch the new version behind your load balancer, and when the new version is launched, terminate the old OpsWorks stack.

Answer - C A blue group carries the production load while a green group is staged and deployed with the new code. When it's time to deploy, you simply attach the green group to the existing load balancer to introduce traffic to the new environment. For HTTP/HTTPS listeners, the load balancer favors the green Auto Scaling group because it uses a least outstanding requests routing algorithm. As you scale up the green Auto Scaling group, you can take blue Auto Scaling group instances out of service by either terminating them or putting them in Standby state.

1-47: You have a web application that's developed in Node.js. The code is hosted in a Git repository. You want to now deploy this application to AWS. Which of the below 2 options can fulfil this requirement? Please select : A. Create an Elastic Beanstalk application. Create a Docker file to install Node.js. Get the code from Git. Use the command "aws git.push" to deploy the application B. Create an AWS CloudFormation template which creates an instance with the AWS::EC2::Container resources type. With UserData, install Git to download the Node.js application and then set it up. C. Create a Docker file to install Node.js and get the code from Git. Use the Dockerfile to perform the deployment on a new AWS Elastic Beanstalk application. D. Create an AWS CloudFormation template which creates an instance with the AWS::EC2::Instance resource type and an AMI with Docker pre-installed. With UserData, install Git to download the Node.js application and then set it up.

Answer - C and D Option A is invalid because there is no "aws git.push" command. Option B is invalid because there is no AWS::EC2::Container resource type. Elastic Beanstalk supports the deployment of web applications from Docker containers. With Docker containers, you can define your own runtime environment. You can choose your own platform, programming language, and any application dependencies (such as package managers or tools), that aren't supported by other platforms. Docker containers are self-contained and include all the configuration information and software your web application requires to run. For more information on Docker and Elastic Beanstalk please refer to the below link: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.html When you launch an instance in Amazon EC2, you have the option of passing user data to the instance that can be used to perform common automated configuration tasks and even run scripts after the instance starts. You can pass two types of user data to Amazon EC2: shell scripts and cloud-init directives. You can also pass this data into the launch wizard as plain text, as a file (this is useful for launching instances using the command line tools), or as base64-encoded text (for API calls). For more information on EC2 user data please refer to the below link: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
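A minimal Dockerfile sketch for such a Node.js application (the entry point file and port are assumptions):

    FROM node:8-alpine
    WORKDIR /app
    COPY . .
    RUN npm install --production
    EXPOSE 8080
    CMD ["node", "app.js"]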

4-2: You are in charge of designing a CloudFormation template which deploys a LAMP stack. After deploying a stack, you see that the status of the stack is showing as CREATE_COMPLETE, but the Apache server is still not up and running and is experiencing issues while starting up. You want to ensure that the stack creation only shows the status of CREATE_COMPLETE after all resources defined in the stack are up and running. How can you achieve this? Choose 2 answers from the options given below. Please select : A. Define a stack policy which defines that all underlying resources should be up and running before showing a status of CREATE_COMPLETE. B. Use lifecycle hooks to mark the completion of the creation and configuration of the underlying resource. C. Use the CreationPolicy to ensure it is associated with the EC2 Instance resource. D. Use the CFN helper scripts to signal once the resource configuration is complete.

Answer - C and D The AWS Documentation mentions When you provision an Amazon EC2 instance in an AWS CloudFormation stack, you might specify additional actions to configure the instance, such as install software packages or bootstrap applications. Normally, CloudFormation proceeds with stack creation after the instance has been successfully created. However, you can use a CreationPolicy so that CloudFormation proceeds with stack creation only after your configuration actions are done. That way you'll know your applications are ready to go after stack creation succeeds.
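The signal itself is typically sent from the instance's user data with the cfn-signal helper script once configuration finishes, along these lines (stack, resource, and region values are placeholders):

    /opt/aws/bin/cfn-signal -e $? \
        --stack my-lamp-stack --resource AppServer --region us-east-1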

1-8: Your application stores sensitive information on an EBS volume attached to your EC2 instance. How can you protect your information? Choose two answers from the options given below. Please select : A. Unmount the EBS volume, take a snapshot and encrypt the snapshot. Re-mount the Amazon EBS volume B. It is not possible to encrypt an EBS volume, you must use a lifecycle policy to transfer data to S3 for encryption. C. When you copy an unencrypted snapshot of an unencrypted volume, you can encrypt the copy. Volumes restored from this encrypted copy will also be encrypted. D. Create and mount a new, encrypted Amazon EBS volume. Move the data to the new volume. Delete the old Amazon EBS volume

Answer - C and D These steps are given in the AWS documentation To migrate data between encrypted and unencrypted volumes 1) Create your destination volume (encrypted or unencrypted, depending on your need). 2) Attach the destination volume to the instance that hosts the data to migrate. 3) Make the destination volume available by following the procedures in Making an Amazon EBS Volume Available for Use. For Linux instances, you can create a mount point at /mnt/destination and mount the destination volume there. 4) Copy the data from your source directory to the destination volume. It may be most convenient to use a bulk-copy utility for this.
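A sketch of steps 1 and 2 with the CLI (the zone, size, and IDs are placeholders):

    $ aws ec2 create-volume --availability-zone us-east-1a --size 100 --encrypted
    $ aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
        --instance-id i-0123456789abcdef0 --device /dev/sdf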

3-54: A gaming company adopted AWS CloudFormation to automate load-testing of their games. They have created an AWS CloudFormation template for each gaming environment and one for the load-testing stack. The load-testing stack creates an Amazon Relational Database Service (RDS) Postgres database and two web servers running on Amazon Elastic Compute Cloud (EC2) that send HTTP requests, measure response times, and write the results into the database. A test run usually takes between 15 and 30 minutes. Once the tests are done, the AWS CloudFormation stacks are torn down immediately. The test results written to the Amazon RDS database must remain accessible for visualization and analysis. Select possible solutions that allow access to the test results after the AWS CloudFormation load-testing stack is deleted. Choose 2 answers. Please select : A. Define an Amazon RDS Read-Replica in the load-testing AWS CloudFormation stack and define a dependency relation between master and replica via the DependsOn attribute. B. Define an update policy to prevent deletion of the Amazon RDS database after the AWS CloudFormation stack is deleted. C. Define a deletion policy of type Retain for the Amazon RDS resource to assure that the RDS database is not deleted with the AWS CloudFormation stack. D. Define a deletion policy of type Snapshot for the Amazon RDS resource to assure that the RDS database can be restored after the AWS CloudFormation stack is deleted. E. Define automated backups with a backup retention period of 30 days for the Amazon RDS database and perform point-in-time recovery of the database after the AWS CloudFormation stack is deleted

Answer - C and D With the DeletionPolicy attribute you can preserve or (in some cases) backup a resource when its stack is deleted. You specify a DeletionPolicy attribute for each resource that you want to control. If a resource has no DeletionPolicy attribute, AWS CloudFormation deletes the resource by default. To keep a resource when its stack is deleted, specify Retain for that resource. You can use retain for any resource. For example, you can retain a nested stack, S3 bucket, or EC2 instance so that you can continue to use or modify those resources after you delete their stacks.
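A minimal template fragment showing where the attribute sits (property values are placeholders; a real template would take the credentials from NoEcho parameters):

    Resources:
      TestResultsDB:
        Type: AWS::RDS::DBInstance
        DeletionPolicy: Snapshot   # or Retain to keep the instance itself
        Properties:
          Engine: postgres
          DBInstanceClass: db.t2.micro
          AllocatedStorage: "20"
          MasterUsername: master
          MasterUserPassword: change-me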

4-63: You are designing an application that contains protected health information. Security and compliance requirements for your application mandate that all protected health information in the application be encrypted at rest and in transit. The application uses a three-tier architecture where data flows through the load balancer and is stored on Amazon EBS volumes for processing and the results are stored in Amazon S3 using the AWS SDK. Which of the following two options satisfy the security requirements? (Select two) Please select : A. Use SSL termination on the load balancer, Amazon EBS encryption on Amazon EC2 instances and Amazon S3 with server-side encryption. B. Use SSL termination with a SAN SSL certificate on the load balancer, Amazon EC2 with all Amazon EBS volumes using Amazon EBS encryption, and Amazon S3 with server-side encryption with customer-managed keys. C. Use TCP load balancing on the load balancer, SSL termination on the Amazon EC2 instances, OS-level disk encryption on the Amazon EBS volumes and Amazon S3 with server-side encryption. D. Use TCP load balancing on the load balancer, SSL termination on the Amazon EC2 instances and Amazon S3 with server-side encryption. E. Use SSL termination on the load balancer, an SSL listener on the Amazon EC2 instances, Amazon EBS encryption on EBS volumes containing PHI and Amazon S3 with server-side encryption.

Answer - C and E

1-23: Your mobile application includes a photo-sharing service that is expecting tens of thousands of users at launch. You will leverage Amazon Simple Storage Service (S3) for storage of the user images, and you must decide how to authenticate and authorize your users for access to these images. You also need to manage the storage of these images. Which two of the following approaches should you use? Choose two answers from the options below. Please select : A. Create an Amazon S3 bucket per user, and use your application to generate the S3 URI for the appropriate content. B. Use AWS Identity and Access Management (IAM) user accounts as your application-level user database, and offload the burden of authentication from your application code. C. Authenticate your users at the application level, and use AWS Security Token Service (STS) to grant token-based authorization to S3 objects. D. Authenticate your users at the application level, and send an SMS token message to the user. Create an Amazon S3 bucket with the same name as the SMS message token, and move the user's objects to that bucket. E. Use a key-based naming scheme comprised from the user IDs for all user objects in a single Amazon S3 bucket.

Answer - C and E The AWS Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users). The token can then be used to grant access to the objects in S3. You can then provide access to the objects based on key values generated from the user IDs. Option A is possible but becomes a maintenance overhead because of the number of buckets. Option B is invalid because using IAM users as an application-level user database is not a good security practice. Option D is invalid because SMS tokens are not efficient for this requirement.

1-22: You have an Auto Scaling group of instances that processes messages from an Amazon Simple Queue Service (SQS) queue. The group scales on the size of the queue. Processing involves calling a third-party web service. The web service is complaining about the number of failed and repeated calls it is receiving from you. You have noticed that when the group scales in, instances are being terminated while they are processing. What cost-effective solution can you use to reduce the number of incomplete process attempts? Please select : A. Create a new Auto Scaling group with minimum and maximum of 2 and instances running web proxy software. Configure the VPC route table to route HTTP traffic to these web proxies. B. Modify the application running on the instances to enable termination protection while it processes a task and disable it when the processing is complete. C. Increase the minimum and maximum size for the Auto Scaling group, and change the scaling policies so they scale less dynamically. D. Modify the application running on the instances to put itself into an Auto Scaling Standby state while it processes a task and return itself to InService when the processing is complete.

Answer - D
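Instances in the Standby state are not candidates for termination during scale-in, so work in flight is not interrupted. A sketch of the calls the application could wrap around a unit of work (the group name and instance ID are placeholders):

    $ aws autoscaling enter-standby --auto-scaling-group-name worker-asg \
        --instance-ids i-0123456789abcdef0 --should-decrement-desired-capacity
    $ aws autoscaling exit-standby --auto-scaling-group-name worker-asg \
        --instance-ids i-0123456789abcdef0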

4-74: Your public website uses a load balancer and an Auto Scaling group in a virtual private cloud. Your chief security officer has asked you to set up a monitoring system that quickly detects and alerts your team when a large sudden traffic increase occurs. How should you set this up? Please select : A. Set up an Amazon CloudWatch alarm for the Elastic Load Balancing NetworkIn metric and then use Amazon SNS to alert your team. B. Use an Amazon EMR job to run every thirty minutes, analyze the Elastic Load Balancing access logs in a batch manner to detect a sharp increase in traffic and then use the Amazon Simple Email Service to alert your team. C. Use an Amazon EMR job to run every thirty minutes, analyze the CloudWatch logs from your application Amazon EC2 instances in a batch manner to detect a sharp increase in traffic and then use the Amazon SNS SMS notification to alert your team D. Set up an Amazon CloudWatch alarm for the Amazon EC2 NetworkIn metric for the Auto Scaling group and then use Amazon SNS to alert your team. E. Set up a cron job to actively monitor the AWS CloudTrail logs for increased traffic and use Amazon SNS to alert your team.

Answer - D

1-71: You are hired as the new head of operations for a SaaS company. Your CTO has asked you to make debugging any part of your entire operation simpler and as fast as possible. She complains that she has no idea what is going on in the complex, service-oriented architecture, because the developers just log to disk, and it's very hard to find errors in logs on so many services. How can you best meet this requirement and satisfy your CTO? Please select : A. Copy all log files into AWS S3 using a cron job on each instance. Use an S3 Notification Configuration on the PutBucket event and publish events to AWS Lambda. Use the Lambda to analyze logs as soon as they come in and flag issues. B. Begin using CloudWatch Logs on every service. Stream all Log Groups into S3 objects. Use AWS EMR cluster jobs to perform adhoc MapReduce analysis and write new queries when needed. C. Copy all log files into AWS S3 using a cron job on each instance. Use an S3 Notification Configuration on the PutBucket event and publish events to AWS Kinesis. Use Apache Spark on AWS EMR to perform at-scale stream processing queries on the log chunks and flag issues. D. Begin using CloudWatch Logs on every service. Stream all Log Groups into an AWS Elasticsearch Service Domain running Kibana 4 and perform log analysis on a search cluster.

Answer - D Amazon Elasticsearch Service makes it easy to deploy, operate, and scale Elasticsearch for log analytics, full text search, application monitoring, and more. Amazon Elasticsearch Service is a fully managed service that delivers Elasticsearch's easy-to-use APIs and real-time capabilities along with the availability, scalability, and security required by production workloads. The service offers built-in integrations with Kibana, Logstash, and AWS services including Amazon Kinesis Firehose, AWS Lambda, and Amazon CloudWatch so that you can go from raw data to actionable insights quickly.

2-20: If you're trying to configure an AWS Elastic Beanstalk worker tier for easy debugging if there are problems finishing queue jobs, what should you configure? Please select : A. Configure Rolling Deployments. B. Configure Enhanced Health Reporting C. Configure Blue-Green Deployments. D. Configure a Dead Letter Queue

Answer - D Elastic Beanstalk worker environments support Amazon Simple Queue Service (SQS) dead letter queues. A dead letter queue is a queue where other (source) queues can send messages that for some reason could not be successfully processed. A primary benefit of using a dead letter queue is the ability to sideline and isolate the unsuccessfully processed messages. You can then analyze any messages sent to the dead letter queue to try to determine why they were not successfully processed.
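For a self-managed source queue, the dead letter queue is wired up through the queue's RedrivePolicy attribute, roughly as follows (the queue URL, ARN, and receive count are placeholders):

    $ aws sqs set-queue-attributes \
        --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/worker-queue \
        --attributes '{"RedrivePolicy":"{\"deadLetterTargetArn\":\"arn:aws:sqs:us-east-1:123456789012:worker-dlq\",\"maxReceiveCount\":\"5\"}"}'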

1-9: Which Auto Scaling process would be helpful when testing new instances before sending traffic to them, while still keeping them in your Auto Scaling group? Please select : A. Suspend the process AZRebalance B. Suspend the process HealthCheck C. Suspend the process ReplaceUnhealthy D. Suspend the process AddToLoadBalancer

Answer - D If you suspend AddToLoadBalancer, Auto Scaling launches the instances but does not add them to the load balancer or target group. If you resume the AddToLoadBalancer process, Auto Scaling resumes adding instances to the load balancer or target group when they are launched. However, Auto Scaling does not add the instances that were launched while this process was suspended. You must register those instances manually. Option A is invalid because this just balances the number of EC2 instances in the group across the Availability Zones in the region Option B is invalid because this just checks the health of the instances. Auto Scaling marks an instance as unhealthy if Amazon EC2 or Elastic Load Balancing tells Auto Scaling that the instance is unhealthy. Option C is invalid because this process just terminates instances that are marked as unhealthy and later creates new instances to replace them.
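A sketch of suspending, and later resuming, the process with the CLI (the group name is a placeholder):

    $ aws autoscaling suspend-processes --auto-scaling-group-name web-asg \
        --scaling-processes AddToLoadBalancer
    $ aws autoscaling resume-processes --auto-scaling-group-name web-asg \
        --scaling-processes AddToLoadBalancer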

2-52: You have an application hosted in AWS, which sits on EC2 instances behind an Elastic Load Balancer. You have added a new feature to your application and are now receiving complaints from users that the site has a slow response. Which of the below actions can you carry out to help you pinpoint the issue? Please select : A. Use Cloudtrail to log all the API calls, and then traverse the log files to locate the issue B. Use Cloudwatch, monitor the CPU utilization to see the times when the CPU peaked C. Review the Elastic Load Balancer logs D. Create some custom Cloudwatch metrics which are pertinent to the key features of your application

Answer - D Since the issue started occurring after the new feature was added, it is most likely related to that feature. Enabling Cloudtrail will just record all the API calls of all services and will not benefit the cause. Monitoring CPU utilization will just re-verify that there is an issue but will not help pinpoint it. The Elastic Load Balancer logs will likewise only re-verify that there is an issue. Custom Cloudwatch metrics that are pertinent to the key features of the application let you see exactly which feature is responding slowly.
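For example, the application (or a script on the instance) can publish a feature-specific timing metric along these lines; the namespace, metric name, and value are hypothetical:

    $ aws cloudwatch put-metric-data --namespace MyApp \
        --metric-name CheckoutLatency --value 2.5 --unit Seconds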

1-57: You've been tasked with improving the current deployment process by making it easier to deploy and reducing the time it takes. You have been tasked with creating a continuous integration (CI) pipeline that can build AMIs. Which of the below is the best manner to get this done? Assume that at max your development team will be deploying builds 5 times a week. Please select : A. Use a dedicated EC2 instance with an EBS volume. Download and configure the code and then create an AMI out of that. B. Use OpsWorks to launch an EBS-backed instance, then use a recipe to bootstrap the instance, and then have the CI system use the CreateImage API call to make an AMI from it. C. Upload the code and dependencies to Amazon S3, launch an instance, download the package from Amazon S3, then create the AMI with the CreateSnapshot API call D. Have the CI system launch a new instance, then bootstrap the code and dependencies on that instance, and create an AMI using the CreateImage API call.

Answer - D Since builds happen only a few times a week, an open source system such as Jenkins can be used as the CI server. Jenkins is an extensible automation server; it can be used as a simple CI server or turned into the continuous delivery hub for any project. For more information on the Jenkins CI tool please refer to the below link: https://jenkins.io/ Options A and C are partially correct, but since you just have 5 deployments per week, keeping dedicated always-on instances that consume costs is not required. Option B is partially correct, but again a separate system such as OpsWorks for such a low number of deployments is not required.
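The AMI-creation step the CI system performs boils down to a single call, sketched here with placeholder values:

    $ aws ec2 create-image --instance-id i-0123456789abcdef0 \
        --name "ci-build-2017-10-01" --no-reboot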

2-60: You have carried out a deployment using Elastic Beanstalk, but the application is unavailable. What could be the reason for this? Please select : A. You need to configure ELB along with Elastic Beanstalk B. You need to configure Route53 along with Elastic Beanstalk C. There will always be a few seconds of downtime before the application is available D. The cooldown period is not properly configured for Elastic Beanstalk

Answer - C The AWS Documentation mentions that because Elastic Beanstalk uses a drop-in upgrade process, there might be a few seconds of downtime. Use rolling deployments to minimize the effect of deployments on your production environments.

4-25: You work for a startup that has developed a new photo-sharing application for mobile devices. Over recent months your application has increased in popularity; this has resulted in a decrease in the performance of the application due to the increased load. Your application has a two-tier architecture that is composed of an Auto Scaling PHP application tier and a MySQL RDS instance initially deployed with AWS CloudFormation. Your Auto Scaling group has a min value of 4 and a max value of 8. The desired capacity is now at 8 due to the high CPU utilization of the instances. After some analysis, you are confident that the performance issues stem from a constraint in CPU capacity, while memory utilization remains low. You therefore decide to move from the general-purpose M3 instances to the compute-optimized C3 instances. How would you deploy this change while minimizing any interruption to your end users? Please select : A. Sign into the AWS Management Console, copy the old launch configuration, and create a new launch configuration that specifies the C3 instances. Update the Auto Scaling group with the new launch configuration. Auto Scaling will then update the instance type of all running instances B. Sign into the AWS Management Console and update the existing launch configuration with the new C3 instance type. Add an UpdatePolicy attribute to your Auto Scaling group that specifies an AutoScaling RollingUpdate. C. Update the launch configuration specified in the AWS CloudFormation template with the new C3 instance type. Run a stack update with the new template. Auto Scaling will then update the instances with the new instance type. D. Update the launch configuration specified in the AWS CloudFormation template with the new C3 instance type. Also add an UpdatePolicy attribute to your Auto Scaling group that specifies an AutoScalingRollingUpdate. Run a stack update with the new template

Answer - D The AWS Documentation mentions the below The AWS::AutoScaling::AutoScalingGroup resource supports an UpdatePolicy attribute. This is used to define how an Auto Scaling group resource is updated when an update to the CloudFormation stack occurs. A common approach to updating an Auto Scaling group is to perform a rolling update, which is done by specifying the AutoScalingRollingUpdate policy. This retains the same Auto Scaling group and replaces old instances with new ones, according to the parameters specified.
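A rough template fragment showing the attribute in place (group sizes, batch values, and resource names are placeholders):

    WebServerGroup:
      Type: AWS::AutoScaling::AutoScalingGroup
      UpdatePolicy:
        AutoScalingRollingUpdate:
          MinInstancesInService: 4
          MaxBatchSize: 2
          PauseTime: PT5M
      Properties:
        AvailabilityZones: !GetAZs ''
        LaunchConfigurationName: !Ref WebLaunchConfig
        MinSize: "4"
        MaxSize: "8"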

5-3: You have just been assigned to take care of the automated resources which have been set up by your company in AWS. You are looking at integrating some of the company's Chef recipes to be used for the existing OpsWorks stacks already set up in AWS. But when you go to the recipes section, you cannot see the option to add any recipes. What could be the reason for this? Please select : A. Once you create a stack, you cannot assign custom recipes; this needs to be done when the stack is created. B. Once you create layers in the stack, you cannot assign custom recipes; this needs to be done when the layers are created. C. The stack layers were created without the custom cookbooks option. Just change the layer settings accordingly. D. The stacks were created without the custom cookbooks option. Just change the stack settings accordingly.

Answer - D The AWS Documentation mentions the below To have a stack install and use custom cookbooks, you must configure the stack to enable custom cookbooks, if it is not already configured. You must then provide the repository URL and any related information such as a password.

4-52: By default in OpsWorks, how many application versions can you roll back? Please select : A. 1 B. 2 C. 3 D. 4

Answer - D The AWS Documentation mentions the following Restores the previously deployed app version. For example, if you have deployed the app three times and then run Rollback, the server will serve the app from the second deployment. If you run Rollback again, the server will serve the app from the first deployment. By default, AWS OpsWorks Stacks stores the five most recent deployments, which allows you to roll back up to four versions. If you exceed the number of stored versions, the command fails and leaves the oldest version in place.

5-22: Your company is supporting a number of applications that need to be moved to AWS. The initial thought is to move these applications to the Elastic Beanstalk service. However, when you go to the Elastic Beanstalk service, you can see that the application's underlying platform is not an option in the Elastic Beanstalk service. Which of the following options can be used to port your application onto Elastic Beanstalk? Please select : A. Use the OpsWorks service to create a stack. In the stack, create a separate custom layer. Deploy the application to this layer and then attach the layer to Elastic Beanstalk B. Use custom Chef recipes to deploy your application in Elastic Beanstalk. C. Use custom CloudFormation templates to deploy the application into Elastic Beanstalk D. Create a Docker container for the custom application and then deploy it to Elastic Beanstalk.

Answer - D The AWS documentation mentions the following Elastic Beanstalk supports the deployment of web applications from Docker containers. With Docker containers, you can define your own runtime environment. You can choose your own platform, programming language, and any application dependencies (such as package managers or tools), that aren't supported by other platforms. Docker containers are self-contained and include all the configuration information and software your web application requires to run.

1-53: You work for a startup that has developed a new photo-sharing application for mobile devices. Over recent months your application has increased in popularity; this has resulted in a decrease in the performance of the application due to the increased load. Your application has a two-tier architecture that is composed of an Auto Scaling PHP application tier and a MySQL RDS instance initially deployed with AWS CloudFormation. Your Auto Scaling group has a min value of 4 and a max value of 8. The desired capacity is now at 8 because of the high CPU utilization of the instances. After some analysis, you are confident that the performance issues stem from a constraint in CPU capacity, although memory utilization remains low. You therefore decide to move from the general-purpose M3 instances to the compute-optimized C3 instances. How would you deploy this change while minimizing any interruption to your end users? Please select : A. Sign into the AWS Management Console, copy the old launch configuration, and create a new launch configuration that specifies the C3 instances. Update the Auto Scaling group with the new launch configuration. Auto Scaling will then update the instance type of all running instances. B. Sign into the AWS Management Console, and update the existing launch configuration with the new C3 instance type. Add an UpdatePolicy attribute to your Auto Scaling group that specifies AutoScalingRollingUpdate. C. Update the launch configuration specified in the AWS CloudFormation template with the new C3 instance type. Run a stack update with the new template. Auto Scaling will then update the instances with the new instance type. D. Update the launch configuration specified in the AWS CloudFormation template with the new C3 instance type. Also add an UpdatePolicy attribute to your Auto Scaling group that specifies AutoScalingRollingUpdate. Run a stack update with the new template.

Answer - D The AWS::AutoScaling::AutoScalingGroup resource supports an UpdatePolicy attribute. This is used to define how an Auto Scaling group resource is updated when an update to the CloudFormation stack occurs. A common approach to updating an Auto Scaling group is to perform a rolling update, which is done by specifying the AutoScalingRollingUpdate policy. This retains the same Auto Scaling group and replaces old instances with new ones, according to the parameters specified.

5-46: You have a requirement to automate the creation of EBS Snapshots. Which of the following can be used to achieve this in the best way possible? Please select : A. Create a powershell script which uses the AWS CLI to get the volumes and then run the script as a cron job. B. Use the AWSConfig service to create a snapshot of the AWS Volumes C. Use the AWS CodeDeploy service to create a snapshot of the AWS Volumes D. Use Cloudwatch Events to trigger the snapshots of EBS Volumes

Answer - D The best option is to use the built-in CloudWatch Events service to automate the creation of EBS snapshots. With Option A, you would be restricted to running the PowerShell script on Windows machines and would have to maintain the script yourself, with the added overhead of a separate instance just to run it. In CloudWatch Events, you can use the built-in target "EC2 CreateSnapshot API call".
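A sketch of the scheduling half with the CLI (the rule name is a placeholder); the built-in "EC2 CreateSnapshot API call" target, with the volume ID as its input, is then attached to this rule:

    $ aws events put-rule --name nightly-ebs-snapshot \
        --schedule-expression "rate(1 day)"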

1-25: You have an Auto Scaling group with 2 AZs. One AZ has 4 EC2 instances and the other has 3 EC2 instances. None of the instances are protected from scale in. Based on the default Auto Scaling termination policy what will happen? Please select : A. Auto Scaling selects an instance to terminate randomly B. Auto Scaling will terminate unprotected instances in the Availability Zone with the oldest launch configuration. C. Auto Scaling terminates which unprotected instances are closest to the next billing hour. D. Auto Scaling will select the AZ with 4 EC2 instances and terminate an instance

Answer - D The default termination policy is designed to help ensure that your network architecture spans Availability Zones evenly. When using the default termination policy, Auto Scaling selects an instance to terminate as follows: Auto Scaling determines whether there are instances in multiple Availability Zones. If so, it selects the Availability Zone with the most instances and at least one instance that is not protected from scale in. If there is more than one Availability Zone with this number of instances, Auto Scaling selects the Availability Zone with the instances that use the oldest launch configuration.

4-77: You set up a web application development environment by using a third party configuration management tool to create a Docker container that is run on local developer machines. What should you do to ensure that the web application and supporting network storage and security infrastructure does not impact your application after you deploy into AWS for staging and production environments? Please select : A. Write a script using the AWS SDK or CLI to deploy the application code from version control to the local development environments staging and production using AWS OpsWorks. B. Define an AWS CloudFormation template to place your infrastructure into version control and use the same template to deploy the Docker container into Elastic Beanstalk for staging and production. C. Because the application is inside a Docker container, there are no infrastructure differences to be taken into account when moving from the local development environments to AWS for staging and production. D. Define an AWS CloudFormation template for each stage of the application deployment lifecycle -development, staging and production -and have tagging in each template to define the environment

Answer - D This conforms to the best practice for Cloudformation templates After you have your stacks and resources set up, you can reuse your templates to replicate your infrastructure in multiple environments. For example, you can create environments for development, testing, and production so that you can test changes before implementing them into production. To make templates reusable, use the parameters, mappings, and conditions sections so that you can customize your stacks when you create them. For example, for your development environments, you can specify a lower-cost instance type compared to your production environment, but all other configurations and settings remain the same.

1-15: You have an application consisting of a stateless web server tier running on Amazon EC2 instances behind a load balancer, and are using Amazon RDS with read replicas. Which of the following methods should you use to implement a self-healing and cost-effective architecture? Choose 2 answers from the options given below. Please select : A. Set up a third-party monitoring solution on a cluster of Amazon EC2 instances in order to emit custom CloudWatch metrics to trigger the termination of unhealthy Amazon EC2 instances. B. Set up scripts on each Amazon EC2 instance to frequently send ICMP pings to the load balancer in order to determine which instance is unhealthy and replace it. C. Set up an Auto Scaling group for the web server tier along with an Auto Scaling policy that uses the Amazon RDS DB CPU utilization CloudWatch metric to scale the instances. D. Set up an Auto Scaling group for the web server tier along with an Auto Scaling policy that uses the Amazon EC2 CPU utilization CloudWatch metric to scale the instances. E. Use a larger Amazon EC2 instance type for the web server tier and a larger DB instance type for the data storage layer to ensure that they don't become unhealthy. F. Set up an Auto Scaling group for the database tier along with an Auto Scaling policy that uses the Amazon RDS read replica lag CloudWatch metric to scale out the Amazon RDS read replicas. G. Use an Amazon RDS Multi-AZ deployment.

Answer - D and G The scaling of EC2 instances in the Auto Scaling group is normally done with the metric of the CPU utilization of the current instances in the group. Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby (or to a read replica in the case of Amazon Aurora), so that you can resume database operations as soon as the failover is complete. Option A is invalid because there is no reason to spend more on a third-party monitoring solution when you already have built-in metrics from Cloudwatch. Option B is invalid because health checks are already a feature of AWS ELB. Option C is invalid because the database CPU usage should not be used to scale the web tier. Option E is invalid because increasing the instance size does not always guarantee that the solution will not become unhealthy. Option F is invalid because adding read replicas will not help with write operations if the primary DB fails.
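As a sketch, a simple scale-out policy for the web tier looks like this (the names and adjustment size are placeholders); the returned policy ARN would then be set as the action of a CloudWatch CPUUtilization alarm:

    $ aws autoscaling put-scaling-policy --auto-scaling-group-name web-asg \
        --policy-name cpu-scale-out --adjustment-type ChangeInCapacity \
        --scaling-adjustment 2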

3-74: Which of the following is not a supported platform on Elastic Beanstalk? Please select : A. Packer Builder B. Go C. Node.js D. Java SE E. Kubernetes

Answer - E Below is the list of supported platforms:
· Packer Builder
· Single Container Docker
· Multicontainer Docker
· Preconfigured Docker
· Go
· Java SE
· Java with Tomcat
· .NET on Windows Server with IIS
· Node.js
· PHP
· Python
· Ruby

4-70: You have a web application that is currently running on three M3 instances in three AZs. You have an Auto Scaling group configured to scale from three to thirty instances. When reviewing your CloudWatch metrics, you see that sometimes your Auto Scaling group is hosting fifteen instances. The web application is reading and writing to a DynamoDB-configured backend, configured with 800 Write Capacity Units and 800 Read Capacity Units. Your DynamoDB Primary Key is the Company ID. You are hosting 25 TB of data in your web application. You have a single customer that is complaining of long load times when their staff arrives at the office at 9:00 AM and loads the website, which consists of content that is pulled from DynamoDB. You have other customers who routinely use the web application. Choose the answer that will ensure high availability and reduce the customer's access times. Please select : A. Add a caching layer in front of your web application by choosing ElastiCache Memcached instances in one of the AZs. B. Double the number of Read Capacity Units in your DynamoDB instance because the instance is probably being throttled when the customer accesses the website and your web application. C. Change your Auto Scaling group configuration to use Amazon C3 instance types, because the web application layer is probably running out of compute capacity. D. Implement an Amazon SQS queue between your DynamoDB database layer and the web application layer to minimize the large burst in traffic the customer generates when everyone arrives at the office at 9:00 AM and begins accessing the website. E. Use data pipelines to migrate your DynamoDB table to a new DynamoDB table with a primary key that is evenly distributed across your dataset. Update your web application to request data from the new table.

Answer - E The AWS documentation provides the following guidance on getting the best performance from DynamoDB tables. The optimal usage of a table's provisioned throughput depends on these factors:
· The primary key selection.
· The workload patterns on individual items.
The primary key uniquely identifies each item in a table. The primary key can be simple (partition key) or composite (partition key and sort key). When it stores data, DynamoDB divides a table's items into multiple partitions, and distributes the data primarily based upon the partition key value. Consequently, to achieve the full amount of request throughput you have provisioned for a table, keep your workload spread evenly across the partition key values. Distributing requests across partition key values distributes the requests across partitions.
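To illustrate option E, a hedged boto3 sketch of a replacement table keyed on a high-cardinality attribute; the table and attribute names (CustomerContent, RecordId, CreatedAt) are hypothetical, chosen only to contrast with the low-cardinality Company ID key:

import boto3

dynamodb = boto3.client("dynamodb")

# Keying on a per-record ID spreads reads and writes across partitions,
# unlike a Company ID key that funnels one customer's traffic to one partition.
dynamodb.create_table(
    TableName="CustomerContent",
    AttributeDefinitions=[
        {"AttributeName": "RecordId", "AttributeType": "S"},
        {"AttributeName": "CreatedAt", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "RecordId", "KeyType": "HASH"},    # partition key
        {"AttributeName": "CreatedAt", "KeyType": "RANGE"},  # sort key
    ],
    ProvisionedThroughput={"ReadCapacityUnits": 800, "WriteCapacityUnits": 800},
)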

4-73: Your application uses Amazon SQS and Auto Scaling to process background jobs. The Auto Scaling policy is based on the number of messages in the queue, with a maximum instance count of 100. Since the application was launched, the group had never scaled above 50. The Auto Scaling group has now scaled to 100, the queue size is increasing, and very few jobs are being completed. The number of messages being sent to the queue is at normal levels. What should you do to identify why the queue size is unusually high and to reduce it? Please select : A. Temporarily increase the Auto Scaling group's desired value to 200. When the queue size has been reduced, reduce it to 50. B. Analyze the application logs to identify possible reasons for message processing failure and resolve the causes of the failures. C. Create additional Auto Scaling groups, enabling the processing of the queue to be performed in parallel. D. Analyze CloudTrail logs for Amazon SQS to ensure that the instances' Amazon EC2 role has permission to receive messages from the queue.

Answer : B The best option here is to look at the application logs and resolve the failure. There could be a functional issue in the application that is causing messages to queue up and the fleet of instances in the Auto Scaling group to grow.
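Before reading the application logs, the queue's own attributes can confirm the failure pattern; a small boto3 sketch, with a placeholder queue URL:

import boto3

sqs = boto3.client("sqs")

# Placeholder URL for illustration only.
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/jobs"

# A growing visible count together with a large not-visible count usually
# means consumers receive messages but fail before deleting them, so the
# messages return to the queue after the visibility timeout expires.
attrs = sqs.get_queue_attributes(
    QueueUrl=queue_url,
    AttributeNames=[
        "ApproximateNumberOfMessages",
        "ApproximateNumberOfMessagesNotVisible",
    ],
)["Attributes"]
print(attrs)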

