DevOps Pro - Test 2

You are planning to use the Amazon RDS facility for fault tolerance for your application. How does the Amazon RDS Multi-Availability Zone model work? A. A second, standby database is deployed and maintained in a different availability zone from master, using synchronous replication. B. A second, standby database is deployed and maintained in a different availability zone from master using asynchronous replication. C. A second, standby database is deployed and maintained in a different region from master using asynchronous replication. D. A second, standby database is deployed and maintained in a different region from master using synchronous replication.

A. A second, standby database is deployed and maintained in a different availability zone from master, using synchronous replication. Answer - A Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby (or to a read replica in the case of Amazon Aurora), so that you can resume database operations as soon as the failover is complete. The AWS documentation includes a diagram showing how this is configured. Option B is invalid because Multi-AZ replication is synchronous, not asynchronous. Options C and D are invalid because Multi-AZ is built around Availability Zones, not regions. For more information on Multi-AZ RDS, please visit the below URL: https://aws.amazon.com/rds/details/multi-az/
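As an aside (not part of the original answer), a minimal boto3 sketch of provisioning a Multi-AZ MySQL instance might look like the following; the identifier, instance class, credentials, and sizes are placeholder values.

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # MultiAZ=True makes RDS create a standby in another Availability Zone of the
    # same region and replicate to it synchronously.
    rds.create_db_instance(
        DBInstanceIdentifier="orders-db",          # placeholder name
        Engine="mysql",
        DBInstanceClass="db.m4.large",
        AllocatedStorage=100,
        MasterUsername="admin",
        MasterUserPassword="change-me-please",     # placeholder credential
        MultiAZ=True,
    )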

Which of the following tools does not directly support monitoring of your AWS OpsWorks stacks? A. AWS Config B. Amazon CloudWatch Metrics C. AWS CloudTrail D. Amazon CloudWatch Logs

A. AWS Config Answer - A You can monitor your stacks in the following ways: 1) AWS OpsWorks Stacks uses Amazon CloudWatch to provide thirteen custom metrics with detailed monitoring for each instance in the stack. 2) AWS OpsWorks Stacks integrates with AWS CloudTrail to log every AWS OpsWorks Stacks API call and store the data in an Amazon S3 bucket. 3) You can use Amazon CloudWatch Logs to monitor your stack's system, application, and custom logs. For more information on OpsWorks monitoring, please visit the below URL: http://docs.aws.amazon.com/opsworks/latest/userguide/monitoring.html

You need to perform ad-hoc analysis on log data, including searching quickly for specific error codes and reference numbers. Which should you evaluate first? A. AWS Elasticsearch Service B. AWS RedShift C. AWS EMR D. AWS DynamoDB

A. AWS Elasticsearch Service Answer - A Amazon Elasticsearch Service makes it easy to deploy, operate, and scale Elasticsearch for log analytics, full text search, application monitoring, and more. Amazon Elasticsearch Service is a fully managed service that delivers Elasticsearch's easy-to-use APIs and real-time capabilities along with the availability, scalability, and security required by production workloads. The service offers built-in integrations with Kibana, Logstash, and AWS services including Amazon Kinesis Firehose, AWS Lambda, and Amazon CloudWatch so that you can go from raw data to actionable insights quickly. For more information on the Amazon Elasticsearch Service, please refer to the below link: https://aws.amazon.com/elasticsearch-service/

You have an Autoscaling Group configured to launch EC2 Instances for your application. But you notice that the Autoscaling Group is not launching instances in the right proportion. In fact instances are being launched too fast. What can you do to mitigate this issue? Choose 2 answers from the options given below A. Adjust the cooldown period set for the Autoscaling Group B. Set a custom metric which monitors a key application functionality for the scale-in and scale-out process. C. Adjust the CPU threshold set for the Autoscaling scale-in and scale-out process. D. Adjust the Memory threshold set for the Autoscaling scale-in and scale-out process.

A. Adjust the cooldown period set for the Autoscaling Group B. Set a custom metric which monitors a key application functionality for the scale-in and scale-out process. Answer - A and B The Auto Scaling cooldown period is a configurable setting for your Auto Scaling group that helps to ensure that Auto Scaling doesn't launch or terminate additional instances before the previous scaling activity takes effect. For more information on the cooldown period, please refer to the below link: http://docs.aws.amazon.com/autoscaling/latest/userguide/Cooldown.html It is also better to monitor the application based on a key feature and then trigger the scale-in and scale-out process accordingly. In the question, there is no mention of CPU or memory causing the issue.
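As an illustration only, both remediations can be sketched with boto3 as below; the group name, cooldown value, metric namespace, and metric values are assumptions.

    import boto3

    autoscaling = boto3.client("autoscaling")
    cloudwatch = boto3.client("cloudwatch")

    # 1) Lengthen the cooldown so a new scaling activity cannot start before the
    #    previous one has taken effect.
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName="web-asg",   # placeholder group name
        DefaultCooldown=600,              # seconds
    )

    # 2) Publish a custom, application-level metric (for example, pending orders)
    #    that scale-in/scale-out alarms can use instead of CPU or memory.
    cloudwatch.put_metric_data(
        Namespace="MyApp",                # placeholder namespace
        MetricData=[{"MetricName": "PendingOrders", "Value": 42.0, "Unit": "Count"}],
    )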

Which of the following services can be used in conjunction with CloudWatch Logs? Choose the 3 most viable services from the options given below A. Amazon Kinesis B. Amazon S3 C. Amazon SQS D. Amazon Lambda

A. Amazon Kinesis B. Amazon S3 D. Amazon Lambda Answer - A,B and D The AWS documentation mentions the following products, which can be integrated with CloudWatch Logs: 1) Amazon Kinesis - Here data can be fed for real time analysis 2) Amazon S3 - You can use CloudWatch Logs to store your log data in highly durable storage such as S3. 3) Amazon Lambda - Lambda functions can be designed to work with CloudWatch Logs For more information on CloudWatch Logs, please refer to the below link: http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html
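For example (a sketch, not taken from the original text), a log group can be streamed to Kinesis with a subscription filter; the log group name, stream ARN, and role ARN below are placeholders.

    import boto3

    logs = boto3.client("logs")

    # Stream every event in the log group to a Kinesis stream for real-time
    # analysis. The IAM role must allow CloudWatch Logs to write to the stream.
    logs.put_subscription_filter(
        logGroupName="/my-app/production",                                     # placeholder
        filterName="to-kinesis",
        filterPattern="",                                                      # empty pattern = all events
        destinationArn="arn:aws:kinesis:us-east-1:123456789012:stream/app-logs",
        roleArn="arn:aws:iam::123456789012:role/CWLtoKinesisRole",
    )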

You need your CI to build AMIs with code pre-installed on the images on every new code push. You need to do this as cheaply as possible. How do you do this? A. Bid on spot instances just above the asking price as soon as new commits come in, perform all instance configuration and setup, then create an AMI based on the spot instance. B. Have the CI launch a new on-demand EC2 instance when new commits come in, perform all instance configuration and setup, then create an AMI based on the on-demand instance. C. Purchase a Light Utilization Reserved Instance to save money on the continuous integration machine. Use these credits whenever your create AMIs on instances. D. When the CI instance receives commits, attach a new EBS volume to the CI machine. Perform all setup on this EBS volume so you don't need a new EC2 instance to create the AMI.

A. Bid on spot instances just above the asking price as soon as new commits come in, perform all instance configuration and setup, then create an AMI based on the spot instance. Answer - A Amazon EC2 Spot instances allow you to bid on spare Amazon EC2 computing capacity. Since Spot instances are often available at a discount compared to On-Demand pricing, you can significantly reduce the cost of running your applications, grow your application's compute capacity and throughput for the same budget, and enable new types of cloud computing applications. For more information on Spot Instances, please visit the below URL: https://aws.amazon.com/ec2/spot/
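A rough boto3 sketch of that flow is shown below; the bid price, AMI, instance type, and IDs are placeholders, and a real CI job would also wait for the spot request to be fulfilled and for configuration to finish before creating the image.

    import boto3

    ec2 = boto3.client("ec2")

    # Bid on a spot instance when a new commit arrives.
    response = ec2.request_spot_instances(
        SpotPrice="0.05",                          # placeholder bid
        InstanceCount=1,
        LaunchSpecification={
            "ImageId": "ami-0123456789abcdef0",    # placeholder base AMI
            "InstanceType": "c4.large",
        },
    )
    request_id = response["SpotInstanceRequests"][0]["SpotInstanceRequestId"]

    # ... wait for fulfilment, run configuration and code install on the instance ...

    # Bake the configured spot instance into a new AMI.
    ec2.create_image(
        InstanceId="i-0123456789abcdef0",          # the fulfilled spot instance
        Name="app-build-42",                       # placeholder image name
    )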

You are building a mobile app for consumers to post cat pictures online. You will be storing the images in AWS S3. You want to run the system very cheaply and simply. Which one of these options allows you to build a photo sharing application with the right authentication/authorization implementation? A. Build the application out using AWS Cognito and web identity federation to allow users to log in using Facebook or Google Accounts. Once they are logged in, the secret token passed to that user is used to directly access resources on AWS, like AWS S3. B. Use JWT or SAML compliant systems to build authorization policies. Users log in with a username and password, and are given a token they can use indefinitely to make calls against the photo infrastructure. C. Use AWS API Gateway with a constantly rotating API Key to allow access from the client-side. Construct a custom build of the SDK and include S3 access in it. D. Create an AWS oAuth Service Domain and grant public signup and access to the domain. During setup, add at least one major social media site as a trusted Identity Provider for users.

A. Build the application out using AWS Cognito and web identity federation to allow users to log in using Facebook or Google Accounts. Once they are logged in, the secret token passed to that user is used to directly access resources on AWS, like AWS S3. Answer - A Amazon Cognito lets you easily add user sign-up and sign-in and manage permissions for your mobile and web apps. You can create your own user directory within Amazon Cognito. You can also choose to authenticate users through social identity providers such as Facebook, Twitter, or Amazon; with SAML identity solutions; or by using your own identity system. In addition, Amazon Cognito enables you to save data locally on users' devices, allowing your applications to work even when the devices are offline. You can then synchronize data across users' devices so that their app experience remains consistent regardless of the device they use. For more information on AWS Cognito, please visit the below URL: http://docs.aws.amazon.com/cognito/latest/developerguide/what-is-amazon-cognito.html
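A simplified sketch of the identity pool flow described above; the pool ID, provider token, and bucket name are placeholders and error handling is omitted.

    import boto3

    cognito = boto3.client("cognito-identity", region_name="us-east-1")

    # Exchange a social provider login (here Facebook) for a Cognito identity...
    identity = cognito.get_id(
        IdentityPoolId="us-east-1:00000000-0000-0000-0000-000000000000",   # placeholder
        Logins={"graph.facebook.com": "FACEBOOK_ACCESS_TOKEN"},
    )

    # ...and then for temporary AWS credentials scoped by the pool's IAM role.
    creds = cognito.get_credentials_for_identity(
        IdentityId=identity["IdentityId"],
        Logins={"graph.facebook.com": "FACEBOOK_ACCESS_TOKEN"},
    )["Credentials"]

    # The app can now call S3 directly with those short-lived credentials.
    s3 = boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretKey"],
        aws_session_token=creds["SessionToken"],
    )
    s3.put_object(Bucket="cat-pictures-app", Key="cats/whiskers.jpg", Body=b"...")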

When creating an Elastic Beanstalk environment using the wizard, what are the 3 configuration options presented to you? A. Choosing the type of Environment - Web or Worker environment B. Choosing the platform type - Node.js, IIS, etc. C. Choosing the type of Notification - SNS or SQS D. Choosing whether you want a highly available environment or not

A. Choosing the type of Environment - Web or Worker environment B. Choosing the platform type - Node.js, IIS, etc. D. Choosing whether you want a highly available environment or not Answer - A,B and D These are the options presented when you create an Elastic Beanstalk environment through the wizard. The high availability preset includes a load balancer; the low cost preset does not. For more information on the configuration settings, please refer to the below link: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environments-create-wizard.html

Your CTO has asked you to make sure that you know what all users of your AWS account are doing to change resources at all times. She wants a report of who is doing what over time, reported to her once per week, for as broad a resource type group as possible. How should you do this? A. Create a global AWS CloudTrail Trail. Configure a script to aggregate the log data delivered to S3 once per week and deliver this to the CTO. B. Use CloudWatch Events Rules with an SNS topic subscribed to all AWS API calls. Subscribe the CTO to an email type delivery on this SNS Topic. C. Use AWS IAM credential reports to deliver a CSV of all uses of IAM User Tokens over time to the CTO. D. Use AWS Config with an SNS subscription on a Lambda, and insert these changes over time into a DynamoDB table. Generate reports based on the contents of this table.

A. Create a global AWS CloudTrail Trail. Configure a script to aggregate the log data delivered to S3 once per week and deliver this to the CTO. Answer - A AWS CloudTrail is an AWS service that helps you enable governance, compliance, and operational and risk auditing of your AWS account. Actions taken by a user, role, or an AWS service are recorded as events in CloudTrail. Events include actions taken in the AWS Management Console, AWS Command Line Interface, and AWS SDKs and APIs. Visibility into your AWS account activity is a key aspect of security and operational best practices. You can use CloudTrail to view, search, download, archive, analyze, and respond to account activity across your AWS infrastructure. You can identify who or what took which action, what resources were acted upon, when the event occurred, and other details to help you analyze and respond to activity in your AWS account. For more information on Cloudtrail, please visit the below URL: http://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html

You need to deploy a new application version to production. Because the deployment is high-risk, you need to roll the new version out to users over a number of hours, to make sure everything is working correctly. You need to be able to control the proportion of users seeing the new version of the application down to the percentage point. You use ELB and EC2 with Auto Scaling Groups and custom AMIs with your code pre-installed assigned to Launch Configurations. There are no database-level changes during your deployment. You have been told you cannot spend too much money, so you must not increase the number of EC2 instances much at all during the deployment, but you also need to be able to switch back to the original version of code quickly if something goes wrong. What is the best way to meet these requirements? A. Create a second ELB, Auto Scaling Launch Configuration, and Auto Scaling Group using the Launch Configuration. Create AMIs with all code pre-installed. Assign the new AMI to the second Auto Scaling Launch Configuration. Use Route53 Weighted Round Robin Records to adjust the proportion of traffic hitting the two ELBs. B. Use the Blue-Green deployment method to enable the fastest possible rollback if needed. Create a full second stack of instances and cut the DNS over to the new stack of instances, and change the DNS back if a rollback is needed. C. Create AMIs with all code pre-installed. Assign the new AMI to the Auto Scaling Launch Configuration, to replace the old one. Gradually terminate instances running the old code (launched with the old Launch Configuration) and allow the new AMIs to boot to adjust the traffic balance to the new code. On rollback, reverse the process by doing the same thing, but changing the AMI on the Launch Config back to the original code. D. Migrate to use AWS Elastic Beanstalk. Use the established and well-tested Rolling Deployment setting AWS provides on the new Application Environment, publishing a zip bundle of the new code and adjusting the wait period to spread the deployment over time. Re-deploy the old code bundle to rollback if needed.

A. Create a second ELB, Auto Scaling Launch Configuration, and Auto Scaling Group using the Launch Configuration. Create AMIs with all code pre-installed. Assign the new AMI to the second Auto Scaling Launch Configuration. Use Route53 Weighted Round Robin Records to adjust the proportion of traffic hitting the two ELBs. Answer - A This is an example of a Blue-Green Deployment. You can shift traffic all at once or you can do a weighted distribution. With Amazon Route 53, you can define a percentage of traffic to go to the green environment and gradually update the weights until the green environment carries the full production traffic. A weighted distribution provides the ability to perform canary analysis where a small percentage of production traffic is introduced to a new environment. You can test the new code and monitor for errors, limiting the blast radius if any issues are encountered. It also allows the green environment to scale out to support the full production load if you're using Elastic Load Balancing. For more information on Blue-Green Deployments, please visit the below URL: https://d0.awsstatic.com/whitepapers/AWS_Blue_Green_Deployments.pdf
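A hedged sketch of shifting a slice of traffic to the green ELB with weighted records; the hosted zone ID, record name, and ELB DNS names are placeholders.

    import boto3

    route53 = boto3.client("route53")

    def set_weight(identifier, elb_dns_name, weight):
        # UPSERT a weighted CNAME for one side (blue or green) of the deployment.
        route53.change_resource_record_sets(
            HostedZoneId="Z1234567890ABC",                 # placeholder zone ID
            ChangeBatch={"Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "api.example.com",
                    "Type": "CNAME",
                    "SetIdentifier": identifier,
                    "Weight": weight,
                    "TTL": 60,
                    "ResourceRecords": [{"Value": elb_dns_name}],
                },
            }]},
        )

    # Send roughly 90% of traffic to the old stack and 10% to the new one.
    set_weight("blue", "blue-elb-1234.us-east-1.elb.amazonaws.com", 90)
    set_weight("green", "green-elb-5678.us-east-1.elb.amazonaws.com", 10)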

You have an OpsWorks stack set up in AWS. You want to install some updates to the Linux instances in the stack. Which of the following can be used to apply those updates? Choose 2 answers from the options given below A. Create and start new instances to replace your current online instances. Then delete the current instances. B. Use Auto-scaling to launch new instances and then delete the older instances C. On Linux-based instances in Chef 11.10 or older stacks, run the Update Dependencies stack command D. Delete the stack and create a new stack with the instances and their relevant updates

A. Create and start new instances to replace your current online instances. Then delete the current instances. C. On Linux-based instances in Chef 11.10 or older stacks, run the Update Dependencies stack command Answer - A and C The AWS documentation mentions: Create and start new instances to replace your current online instances, then delete the current instances. The new instances will have the latest set of security patches installed during setup. On Linux-based instances in Chef 11.10 or older stacks, run the Update Dependencies stack command, which installs the current set of security patches and other updates on the specified instances. For more information on OpsWorks updates, please refer to the below link: http://docs.aws.amazon.com/opsworks/latest/userguide/best-practices-updates.html
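For illustration only, the Update Dependencies stack command can also be issued through the API; the stack and instance IDs below are placeholders.

    import boto3

    opsworks = boto3.client("opsworks", region_name="us-east-1")

    # Run the Update Dependencies stack command on selected Linux instances.
    opsworks.create_deployment(
        StackId="11111111-2222-3333-4444-555555555555",       # placeholder stack ID
        InstanceIds=["aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"],  # placeholder instance ID
        Command={"Name": "update_dependencies"},
    )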

You have been requested to use CloudFormation to maintain version control and achieve automation for the applications in your organization. How can you best use CloudFormation to keep everything agile and maintain multiple environments while keeping cost down? A. Create separate templates based on functionality, create nested stacks with CloudFormation. B. Use CloudFormation custom resources to handle dependencies between stacks C. Create multiple templates in one CloudFormation stack. D. Combine all resources into one template for version control and automation.

A. Create separate templates based on functionality, create nested stacks with CloudFormation. Answer - A As your infrastructure grows, common patterns can emerge in which you declare the same components in each of your templates. You can separate out these common components and create dedicated templates for them. That way, you can mix and match different templates but use nested stacks to create a single, unified stack. Nested stacks are stacks that create other stacks. To create nested stacks, use the AWS::CloudFormation::Stack resource in your template to reference other templates. For more information on CloudFormation best practices, please refer to the below link: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html

When your application is deployed onto an OpsWorks stack, which of the following events is triggered by OpsWorks? A. Deploy B. Setup C. Configure D. Shutdown

A. Deploy Answer - A When you deploy an application, AWS OpsWorks Stacks triggers a Deploy event, which runs each layer's Deploy recipes. AWS OpsWorks Stacks also installs stack configuration and deployment attributes that contain all of the information needed to deploy the app, such as the app's repository and database connection data. For more information on the Deploy event please refer to the below link: http://docs.aws.amazon.com/opsworks/latest/userguide/workingapps.html

You are creating an application which stores extremely sensitive financial information. All information in the system must be encrypted at rest and in transit. Which of these is a violation of this policy? A. ELB SSL termination. B. ELB Using Proxy Protocol v1. C. CloudFront Viewer Protocol Policy set to HTTPS redirection. D. Telling S3 to use AES256 on the server-side.

A. ELB SSL termination. Answer - A If you terminate SSL at the ELB, traffic between the load balancer and your back-end servers is sent unencrypted, so the data is no longer encrypted in transit end to end. If you are using Elastic Beanstalk to configure the ELB, you can use the below article to ensure end-to-end encryption. http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/configuring-https-endtoend.html

Which of the following credential types are supported by AWS CodeCommit? A. Git Credentials B. SSH Keys C. Username/password D. AWS Access Keys

A. Git Credentials B. SSH Keys D. AWS Access Keys Answer - A,B and D The AWS documentation mentions that IAM supports AWS CodeCommit with three types of credentials: Git credentials, an IAM-generated user name and password pair you can use to communicate with AWS CodeCommit repositories over HTTPS; SSH keys, a locally generated public-private key pair that you can associate with your IAM user to communicate with AWS CodeCommit repositories over SSH; and AWS access keys, which you can use with the credential helper included with the AWS CLI to communicate with AWS CodeCommit repositories over HTTPS.

Which of the following deployment types are available in the CodeDeploy service? Choose 2 answers from the options given below A. In-place deployment B. Rolling deployment C. Immutable deployment D. Blue/green deployment

A. In-place deployment D. Blue/green deployment Answer - A and D The following deployment types are available 1. In-place deployment: The application on each instance in the deployment group is stopped, the latest application revision is installed, and the new version of the application is started and validated. 2. Blue/green deployment: The instances in a deployment group (the original environment) are replaced by a different set of instances (the replacement environment) For more information on Code Deploy please refer to the below link: http://docs.aws.amazon.com/codedeploy/latest/userguide/primary-components.html

You have a large number of web servers in an Auto Scaling group behind a load balancer. On an hourly basis, you want to filter and process the logs to collect data on unique visitors, and then put that data in a durable data store in order to run reports. Web servers in the Auto Scaling group are constantly launching and terminating based on your scaling policies, but you do not want to lose any of the log data from these servers during a stop/termination initiated by a user or by Auto Scaling. What two approaches will meet these requirements? Choose two answers from the options given below. A. Install an Amazon Cloudwatch Logs Agent on every web server during the bootstrap process. Create a CloudWatch log group and define Metric Filters to create custom metrics that track unique visitors from the streaming web server logs. Create a scheduled task on an Amazon EC2 instance that runs every hour to generate a new report based on the Cloudwatch custom metrics. B. On the web servers, create a scheduled task that executes a script to rotate and transmit the logs to Amazon Glacier. Ensure that the operating system shutdown procedure triggers a logs transmission when the Amazon EC2 instance is stopped/terminated. Use Amazon Data Pipeline to process the data in Amazon Glacier and run reports every hour. C. On the web servers, create a scheduled task that executes a script to rotate and transmit the logs to an Amazon S3 bucket. Ensure that the operating system shutdown procedure triggers a logs transmission when the Amazon EC2 instance is stopped/terminated. Use AWS Data Pipeline to move log data from the Amazon S3 bucket to Amazon Redshift In order to process and run reports every hour. D. Install an AWS Data Pipeline Logs Agent on every web server during the bootstrap process. Create a log group object in AWS Data Pipeline, and define Metric Filters to move processed log data directly from the web servers to Amazon Redshift and run reports every hour.

A. Install an Amazon Cloudwatch Logs Agent on every web server during the bootstrap process. Create a CloudWatch log group and define Metric Filters to create custom metrics that track unique visitors from the streaming web server logs. Create a scheduled task on an Amazon EC2 instance that runs every hour to generate a new report based on the Cloudwatch custom metrics. C. On the web servers, create a scheduled task that executes a script to rotate and transmit the logs to an Amazon S3 bucket. Ensure that the operating system shutdown procedure triggers a logs transmission when the Amazon EC2 instance is stopped/terminated. Use AWS Data Pipeline to move log data from the Amazon S3 bucket to Amazon Redshift In order to process and run reports every hour. Answer - A and C You can use the CloudWatch Logs agent installer on an existing EC2 instance to install and configure the CloudWatch Logs agent. For more information , please visit the below link: http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/QuickStartEC2Instance.html You can publish your own metrics to CloudWatch using the AWS CLI or an API. For more information , please visit the below link: http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/publishingMetrics.html Amazon Redshift is a fast, fully managed data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and your existing Business Intelligence (BI) tools. It allows you to run complex analytic queries against petabytes of structured data, using sophisticated query optimization, columnar storage on high-performance local disks, and massively parallel query execution. Most results come back in seconds. For more information on copying data from S3 to redshift, please refer to the below link: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-copydata-redshift.html

You have launched a CloudFormation template, but are receiving a failure notification after the template was launched. What is the default behavior of CloudFormation in such a case? A. It will roll back all the resources that were created up to the failure point. B. It will keep all the resources that were created up to the failure point. C. It will prompt the user on whether to keep or terminate the already created resources D. It will continue with the creation of the next resource in the stack

A. It will roll back all the resources that were created up to the failure point. Answer - A The AWS Documentation mentions AWS CloudFormation ensures all stack resources are created or deleted as appropriate. Because AWS CloudFormation treats the stack resources as a single unit, they must all be created or deleted successfully for the stack to be created or deleted. If a resource cannot be created, AWS CloudFormation rolls the stack back and automatically deletes any resources that were created. For more information on CloudFormation, please refer to the below link: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacks.html
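The default can also be seen (and overridden) when creating a stack through the API; OnFailure is a real parameter of the CreateStack call, while the stack name and template URL below are placeholders.

    import boto3

    cloudformation = boto3.client("cloudformation")

    # OnFailure defaults to ROLLBACK: a failed creation deletes everything created
    # up to the failure point. DO_NOTHING keeps the resources for troubleshooting.
    cloudformation.create_stack(
        StackName="my-app-stack",                                        # placeholder
        TemplateURL="https://s3.amazonaws.com/my-bucket/app.template",   # placeholder
        OnFailure="ROLLBACK",
    )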

You run accounting software in the AWS cloud. This software needs to be online continuously during the day every day of the week, and has a very static requirement for compute resources. You also have other, unrelated batch jobs that need to run once per day at any time of your choosing. How should you minimize cost? A. Purchase a Heavy Utilization Reserved Instance to run the accounting software. Turn it off after hours. Run the batch jobs with the same instance class, so the Reserved Instance credits are also applied to the batch jobs. B. Purchase a Medium Utilization Reserved Instance to run the accounting software. Turn it off after hours. Run the batch jobs with the same instance class, so the Reserved Instance credits are also applied to the batch jobs. C. Purchase a Light Utilization Reserved Instance to run the accounting software. Turn it off after hours. Run the batch jobs with the same instance class, so the Reserved Instance credits are also applied to the batch jobs. D. Purchase a Full Utilization Reserved Instance to run the accounting software. Turn it off after hours. Run the batch jobs with the same instance class, so the Reserved Instance credits are also applied to the batch jobs.

A. Purchase a Heavy Utilization Reserved Instance to run the accounting software. Turn it off after hours. Run the batch jobs with the same instance class, so the Reserved Instance credits are also applied to the batch jobs. Answer - A Reserved Instances provide you with a significant discount compared to On-Demand Instance pricing. Reserved Instances are not physical instances, but rather a billing discount applied to the use of On-Demand Instances in your account. These On-Demand Instances must match certain attributes in order to benefit from the billing discount. For more information on Reserved Instances , please refer to the below link: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-reserved-instances.html

You need to create a simple, holistic check for your system's general availability and uptime. Your system presents itself as an HTTP-speaking API. What is the simplest tool on AWS to achieve this with? A. Route53 Health Checks B. CloudWatch Health Checks C. AWS ELB Health Checks D. EC2 Health Checks

A. Route53 Health Checks Answer - A Amazon Route 53 health checks monitor the health and performance of your web applications, web servers, and other resources. Each health check that you create can monitor one of the following: the health of a specified resource, such as a web server; the status of an Amazon CloudWatch alarm; or the status of other health checks. For more information on Route53 health checks, please refer to the below link: http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover.html
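A minimal sketch of creating such a check against an HTTP API; the domain name, path, and thresholds are placeholder values.

    import uuid
    import boto3

    route53 = boto3.client("route53")

    # Probe the API's /health endpoint every 30 seconds and mark it unhealthy
    # after three consecutive failures.
    route53.create_health_check(
        CallerReference=str(uuid.uuid4()),
        HealthCheckConfig={
            "Type": "HTTP",
            "FullyQualifiedDomainName": "api.example.com",   # placeholder domain
            "Port": 80,
            "ResourcePath": "/health",                       # placeholder path
            "RequestInterval": 30,
            "FailureThreshold": 3,
        },
    )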

You need your API backed by DynamoDB to stay online during a total regional AWS failure. You can tolerate a couple minutes of lag or slowness during a large failure event, but the system should recover with normal operation after those few minutes. What is a good approach? A. Set up DynamoDB cross-region replication in a master-standby configuration, with a single standby in another region. Create an Auto Scaling Group behind an ELB in each of the two regions for your application layer in which DynamoDB is running in. Add a Route53 Latency DNS Record with DNS Failover, using the ELBs in the two regions as the resource records. B. Set up a DynamoDB Multi-Region table. Create an Auto Scaling Group behind an ELB in each of the two regions for your application layer in which the DynamoDB is running in. Add a Route53 Latency DNS Record with DNS Failover, using the ELBs in the two regions as the resource records. C. Set up a DynamoDB Multi-Region table. Create a cross-region ELB pointing to a cross-region Auto Scaling Group, and direct a Route53 Latency DNS Record with DNS Failover to the cross-region ELB. D. Set up DynamoDB cross-region replication in a master-standby configuration, with a single standby in another region. Create a crossregion ELB pointing to a cross-region Auto Scaling Group, and direct a Route53 Latency DNS Record with DNS Failover to the cross- region ELB.

A. Set up DynamoDB cross-region replication in a master-standby configuration, with a single standby in another region. Create an Auto Scaling Group behind an ELB in each of the two regions for your application layer in which DynamoDB is running in. Add a Route53 Latency DNS Record with DNS Failover, using the ELBs in the two regions as the resource records. Answer - A Options B and C are invalid because there is no concept of a DynamoDB Multi-Region table. Option D is invalid because there is no concept of a cross-region ELB. The DynamoDB cross-region replication solution uses the Amazon DynamoDB Cross-Region Replication Library. This library uses DynamoDB Streams to keep DynamoDB tables in sync across multiple regions in near real time. When you write to a DynamoDB table in one region, those changes are automatically propagated by the Cross-Region Replication Library to your tables in other regions. For more information on DynamoDB Cross Region Replication, please visit the below URL: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.CrossRegionRepl.html

You have an asynchronous processing application using an Auto Scaling Group and an SQS Queue. The Auto Scaling Group scales according to the depth of the job queue. The completion velocity of the jobs has gone down, the Auto Scaling Group size has maxed out, but the inbound job velocity did not increase. What is a possible issue? A. Some of the new jobs coming in are malformed and unprocessable. B. The routing tables changed and none of the workers can process events anymore. C. Someone changed the IAM Role Policy on the instances in the worker group and broke permissions to access the queue. D. The scaling metric is not functioning correctly.

A. Some of the new jobs coming in are malformed and unprocessable. Answer - A This question is more on the grounds of validating each option Option B is invalid , because the Route table would have an effect on all worker processes and no jobs would have been completed. Option C is invalid because if the IAM Role was invalid then no jobs would be completed. Option D is invalid because the scaling is happening , it's just that the jobs are not getting completed. For more information on Scaling on Demand, please visit the below URL: http://docs.aws.amazon.com/autoscaling/latest/userguide/as-scale-based-on-demand.html

You need the absolute highest possible network performance for a cluster computing application. You already selected homogeneous instance types supporting 10 gigabit enhanced networking, made sure that your workload was network bound, and put the instances in a placement group. What is the last optimization you can make? A. Use 9001 MTU instead of 1500 for Jumbo Frames, to raise packet body to packet overhead ratios. B. Segregate the instances into different peered VPCs while keeping them all in a placement group, so each one has its own Internet Gateway. C. Bake an AMI for the instances and relaunch, so the instances are fresh in the placement group and do not have noisy neighbors. D. Turn off SYN/ACK on your TCP stack or begin using UDP for higher throughput.

A. Use 9001 MTU instead of 1500 for Jumbo Frames, to raise packet body to packet overhead ratios. Answer - A Jumbo frames allow more than 1500 bytes of data by increasing the payload size per packet, and thus increasing the percentage of the packet that is not packet overhead. Fewer packets are needed to send the same amount of usable data. However, outside of a given AWS region (EC2-Classic), a single VPC, or a VPC peering connection, you will experience a maximum path MTU of 1500. VPN connections and traffic sent over an Internet gateway are limited to 1500 MTU. If packets are over 1500 bytes, they are fragmented, or they are dropped if the Don't Fragment flag is set in the IP header. For more information on Jumbo Frames, please visit the below URL: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/network_mtu.html#jumbo_frame_instances

Your CTO thinks your AWS account was hacked. What is the only way to know for certain if there was unauthorized access and what they did, assuming your hackers are very sophisticated AWS engineers and doing everything they can to cover their tracks? A. Use CloudTrail Log File Integrity Validation. B. Use AWS Config SNS Subscriptions and process events in real time. C. Use CloudTrail backed up to AWS S3 and Glacier. D. Use AWS Config Timeline forensics.

A. Use CloudTrail Log File Integrity Validation. Answer - A To determine whether a log file was modified, deleted, or unchanged after CloudTrail delivered it, you can use CloudTrail log file integrity validation. This feature is built using industry standard algorithms: SHA-256 for hashing and SHA-256 with RSA for digital signing. This makes it computationally infeasible to modify, delete or forge CloudTrail log files without detection. You can use the AWS CLI to validate the files in the location where CloudTrail delivered them. Validated log files are invaluable in security and forensic investigations. For example, a validated log file enables you to assert positively that the log file itself has not changed, or that particular user credentials performed specific API activity. The CloudTrail log file integrity validation process also lets you know if a log file has been deleted or changed, or assert positively that no log files were delivered to your account during a given period of time. For more information on CloudTrail log file validation, please visit the below URL: http://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-log-file-validation-intro.html

You need to create an audit log of all changes to customer banking data. You use DynamoDB to store this customer banking data. It's important not to lose any information due to server failures. What is an elegant way to accomplish this? A. Use a DynamoDB StreamSpecification and stream all changes to AWS Lambda. Log the changes to AWS CloudWatch Logs, removing sensitive information before logging. B. Before writing to DynamoDB, do a pre-write acknowledgement to disk on the application server, removing sensitive information before logging. Periodically rotate these log files into S3. C. Use a DynamoDB StreamSpecification and periodically flush to an EC2 instance store, removing sensitive information before putting the objects. Periodically flush these batches to S3. D. Before writing to DynamoDB, do a pre-write acknowledgement to disk on the application server, removing sensitive information before logging. Periodically pipe these files into CloudWatch Logs.

A. Use a DynamoDB StreamSpecification and stream all changes to AWS Lambda. Log the changes to AWS CloudWatch Logs, removing sensitive information before logging. Answer-A You can use Lambda functions as triggers for your Amazon DynamoDB table. Triggers are custom actions you take in response to updates made to the DynamoDB table. To create a trigger, first you enable Amazon DynamoDB Streams for your table. Then, you write a Lambda function to process the updates published to the stream. For more information on DynamoDB with Lambda, please visit the below URL: http://docs.aws.amazon.com/lambda/latest/dg/with-ddb.html
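A sketch of the Lambda side of that trigger, assuming the table's stream is enabled with new images; the choice of which attributes count as sensitive is an assumption for illustration.

    import json

    SENSITIVE_KEYS = {"account_number", "ssn"}   # assumed sensitive attributes

    def handler(event, context):
        # Invoked by the DynamoDB stream; writes a scrubbed audit record.
        for record in event["Records"]:
            change = record["dynamodb"]
            audit_entry = {
                "event": record["eventName"],             # INSERT / MODIFY / REMOVE
                "keys": change.get("Keys"),
                "new_image": {
                    k: v for k, v in change.get("NewImage", {}).items()
                    if k not in SENSITIVE_KEYS            # drop sensitive fields
                },
            }
            # print() output lands in the function's CloudWatch Logs log group.
            print(json.dumps(audit_entry))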

The operations team and the development team want a single place to view both operating system and application logs. How should you implement this using AWS services? Choose two from the options below A. Using AWS CloudFormation, create a CloudWatch Logs LogGroup and send the operating system and application logs of interest using the CloudWatch Logs Agent. B. Using AWS CloudFormation and configuration management, set up remote logging to send events via UDP packets to CloudTrail. C. Using configuration management, set up remote logging to send events to Amazon Kinesis and insert these into Amazon CloudSearch or Amazon Redshift, depending on available analytic tools. D. Using AWS CloudFormation, merge the application logs with the operating system logs, and use IAM Roles to allow both teams to have access to view console output from Amazon EC2.

A. Using AWS CloudFormation, create a CloudWatch Logs LogGroup and send the operating system and application logs of interest using the CloudWatch Logs Agent. C. Using configuration management, set up remote logging to send events to Amazon Kinesis and insert these into Amazon CloudSearch or Amazon Redshift, depending on available analytic tools. Answer - A and C Option B is invalid because Cloudtrail is not designed specifically to take in UDP packets Option D is invalid because there are already Cloudwatch logs available, so there is no need to have specific logs designed for this. You can use Amazon CloudWatch Logs to monitor, store, and access your log files from Amazon Elastic Compute Cloud (Amazon EC2) instances, AWS CloudTrail, and other sources. You can then retrieve the associated log data from CloudWatch Logs. For more information on Cloudwatch logs please refer to the below link: http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html

You have a requirement to host a cluster of NoSQL databases. There is an expectation that there will be a lot of I/O on these databases. Which EBS volume type is best for high performance NoSQL cluster deployments? A. io1 B. gp1 C. standard D. gp2

A. io1 Answer - A Provisioned IOPS SSD should be used for critical business applications that require sustained IOPS performance, or more than 10,000 IOPS or 160 MiB/s of throughput per volume. This is ideal for large database workloads such as MongoDB, Cassandra, Microsoft SQL Server, MySQL, PostgreSQL, and Oracle. For more information on the various EBS volume types, please refer to the below link: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html
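For example (placeholder Availability Zone and sizing), a Provisioned IOPS volume is created by specifying the io1 type together with an IOPS figure.

    import boto3

    ec2 = boto3.client("ec2")

    # A 500 GiB io1 volume provisioned for 20,000 IOPS for the NoSQL data files.
    ec2.create_volume(
        AvailabilityZone="us-east-1a",   # placeholder AZ
        Size=500,
        VolumeType="io1",
        Iops=20000,
    )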

You are building out a layer in a software stack on AWS that needs to be able to scale out to react to increased demand as fast as possible. You are running the code on EC2 instances in an Auto Scaling Group behind an ELB. Which application code deployment method should you use? A. SSH into new instances that come online, and deploy new code onto the system by pulling it from an S3 bucket, which is populated by code that you refresh from source control on new pushes. B. Bake an AMI when deploying new versions of code, and use that AMI for the Auto Scaling Launch Configuration. C. Create a Dockerfile when preparing to deploy a new version to production and publish it to S3. Use UserData in the Auto Scaling Launch configuration to pull down the Dockerfile from S3 and run it when new instances launch. D. Create a new Auto Scaling Launch Configuration with UserData scripts configured to pull the latest code at all times.

B. Bake an AMI when deploying new versions of code, and use that AMI for the Auto Scaling Launch Configuration. Answer - B Since instances need to spin up as fast as possible, it is better to bake an AMI rather than rely on User Data. When you use User Data, the script runs during boot-up, and hence scale-out will be slower. An Amazon Machine Image (AMI) provides the information required to launch an instance, which is a virtual server in the cloud. You specify an AMI when you launch an instance, and you can launch as many instances from the AMI as you need. You can also launch instances from as many different AMIs as you need. For more information on the AMI, please refer to the below link: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html

Your application's Auto Scaling Group scales up too quickly, too much, and stays scaled when traffic decreases. What should you do to fix this? A. Set a longer cooldown period on the Group, so the system stops overshooting the target capacity. The issue is that the scaling system doesn't allow enough time for new instances to begin servicing requests before measuring aggregate load again. B. Calculate the bottleneck or constraint on the compute layer, then select that as the new metric, and set the metric thresholds to the bounding values that begin to affect response latency. C. Raise the CloudWatch Alarms threshold associated with your autoscaling group, so the scaling takes more of an increase in demand before beginning. D. Use larger instances instead of lots of smaller ones, so the Group stops scaling out so much and wasting resources as the OS level, since the OS uses a higher proportion of resources on smaller instances.

B. Calculate the bottleneck or constraint on the compute layer, then select that as the new metric, and set the metric thresholds to the bounding values that begin to affect response latency. Answer - B The ideal answer is that the right metric is not being used for scaling up and down. Option A is not valid because the group also stays scaled out when traffic decreases, which means the scale-down threshold is never being breached in CloudWatch; a longer cooldown would not fix that. Option C is not valid because raising the CloudWatch alarm threshold will not ensure that the instances scale down when the traffic decreases. Option D is not valid because the question does not mention any constraint that points to the instance size. For an example of using custom metrics for scaling in and out, please follow the below link for a use case. https://blog.powerupcloud.com/aws-autoscaling-based-on-database-query-custom-metrics-f396c16e5e6a

Which of the below services can be used to deploy application code content stored in Amazon S3 buckets, GitHub repositories, or Bitbucket repositories? A. CodeCommit B. CodeDeploy C. S3 Lifecycles D. Route53

B. CodeDeploy Answer - B The AWS documentation mentions AWS CodeDeploy is a deployment service that automates application deployments to Amazon EC2 instances or on-premises instances in your own facility. For more information on Code Deploy please refer to the below link: http://docs.aws.amazon.com/codedeploy/latest/userguide/welcome.html
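A sketch of starting such a deployment from a GitHub revision; the application name, deployment group, repository, and commit ID are placeholders.

    import boto3

    codedeploy = boto3.client("codedeploy")

    # Deploy a specific GitHub commit to an existing deployment group.
    codedeploy.create_deployment(
        applicationName="my-web-app",        # placeholder application
        deploymentGroupName="production",    # placeholder deployment group
        revision={
            "revisionType": "GitHub",
            "gitHubLocation": {
                "repository": "my-org/my-web-app",                       # placeholder repo
                "commitId": "0123456789abcdef0123456789abcdef01234567",  # placeholder commit
            },
        },
    )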

You are building a game high score table in DynamoDB. You will store each user's highest score for each game, with many games, all of which have relatively similar usage levels and numbers of players. You need to be able to look up the highest score for any game. What's the best DynamoDB key structure? A. HighestScore as the hash / only key. B. GameID as the hash key, HighestScore as the range key. C. GameID as the hash / only key. D. GameID as the range / only key.

B. GameID as the hash key, HighestScore as the range key. Answer - B It is always best to choose as the hash key an attribute that will have a wide range of values, and this guidance is also given in the AWS documentation. Next, since you need to look up the highest score per game, you use HighestScore as the range (sort) key. For more information on Table Guidelines, please visit the below URL: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GuidelinesForTables.html
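A sketch of that key schema and of reading the top score for one game; the table name and throughput figures are placeholders.

    import boto3

    dynamodb = boto3.client("dynamodb")

    dynamodb.create_table(
        TableName="GameHighScores",                                    # placeholder
        AttributeDefinitions=[
            {"AttributeName": "GameID", "AttributeType": "S"},
            {"AttributeName": "HighestScore", "AttributeType": "N"},
        ],
        KeySchema=[
            {"AttributeName": "GameID", "KeyType": "HASH"},            # partition key
            {"AttributeName": "HighestScore", "KeyType": "RANGE"},     # sort key
        ],
        ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
    )

    # Highest score for one game: query its partition, sort descending, take one item.
    top = dynamodb.query(
        TableName="GameHighScores",
        KeyConditionExpression="GameID = :g",
        ExpressionAttributeValues={":g": {"S": "space-invaders"}},
        ScanIndexForward=False,
        Limit=1,
    )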

You currently have an application deployed via Elastic Beanstalk. You are now deploying a new application and have ensured that Elastic Beanstalk has detached the current instances and deployed and reattached new instances. But the new instances are still not receiving any sort of traffic. Why is this the case? A. The instances are of the wrong AMI, hence they are not being detected by the ELB. B. It takes time for the ELB to register the instances, hence there is a small time frame before your instances can start receiving traffic C. You need to create a new Elastic Beanstalk application, because you cannot detach and then reattach instances to an ELB within an Elastic Beanstalk application D. The instances needed to be reattached before the new application version was deployed

B. It takes time for the ELB to register the instances, hence there is a small time frame before your instances can start receiving traffic Answer - B Before the EC2 Instances can start receiving traffic, they are checked via the health checks of the ELB. Once the health checks are successful, the EC2 Instance changes its state to InService and can then start receiving traffic. For more information on ELB health checks, please refer to the below link: http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-healthchecks.html

You need to perform ad-hoc business analytics queries on well-structured data. Data comes in constantly at a high velocity. Your business intelligence team can understand SQL. What AWS service(s) should you look to first? A. Kinesis Firehose + RDS B. Kinesis Firehose + RedShift C. EMR using Hive D. EMR running Apache Spark

B. Kinesis Firehose + RedShift Answer - B Amazon Kinesis Firehose is the easiest way to load streaming data into AWS. It can capture, transform, and load streaming data into Amazon Kinesis Analytics, Amazon S3, Amazon Redshift, and Amazon Elasticsearch Service, enabling near real-time analytics with existing business intelligence tools and dashboards you're already using today. It is a fully managed service that automatically scales to match the throughput of your data and requires no ongoing administration. It can also batch, compress, and encrypt the data before loading it, minimizing the amount of storage used at the destination and increasing security. For more information on Kinesis firehose, please visit the below URL: https://aws.amazon.com/kinesis/firehose/ Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud. You can start with just a few hundred gigabytes of data and scale to a petabyte or more. This enables you to use your data to acquire new insights for your business and customers. For more information on Redshift, please visit the below URL: http://docs.aws.amazon.com/redshift/latest/mgmt/welcome.html
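A sketch of the producer side, assuming a delivery stream has already been configured with Redshift as its destination; the stream name and record format are placeholders.

    import json
    import boto3

    firehose = boto3.client("firehose")

    # Each record is buffered by Firehose, staged in S3, and COPYed into Redshift.
    firehose.put_record(
        DeliveryStreamName="clickstream-to-redshift",    # placeholder stream name
        Record={"Data": (json.dumps({"user_id": 42, "page": "/home"}) + "\n").encode("utf-8")},
    )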

Which of the following cache engines does OpsWorks have built-in support for? A. Redis B. Memcache C. Both Redis and Memcache D. There is no built-in support as of yet for any cache engine

B. Memcache Answer - B The AWS Documentation mentions AWS OpsWorks Stacks provides built-in support for Memcached. However, if Redis better suits your requirements, you can customize your stack so that your application servers use ElastiCache Redis. For more information on Opswork and Cache engines please refer to the below link: http://docs.aws.amazon.com/opsworks/latest/userguide/other-services-redis.html

You need to run a very large batch data processing job one time per day. The source data exists entirely in S3, and the output of the processing job should also be written to S3 when finished. If you need to version control this processing job and all setup and teardown logic for the system, what approach should you use? A. Model an AWS EMR job in AWS Elastic Beanstalk. B. Model an AWS EMR job in AWS CloudFormation. C. Model an AWS EMR job in AWS OpsWorks. D. Model an AWS EMR job in AWS CLI Composer.

B. Model an AWS EMR job in AWS CloudFormation. Answer - B With AWS CloudFormation, you can update the properties for resources in your existing stacks. These changes can range from simple configuration changes, such as updating the alarm threshold on a CloudWatch alarm, to more complex changes, such as updating the Amazon Machine Image (AMI) running on an Amazon EC2 instance. Many of the AWS resources in a template can be updated, and we continue to add support for more. For more information on Cloudformation version control, please visit the below URL: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/updating.stacks.walkthrough.html

Your team wants to begin practicing continuous delivery using CloudFormation, to enable automated builds and deploys of whole, versioned stacks or stack layers. You have a 3-tier, mission-critical system. Which of the following is NOT a best practice for using CloudFormation in a continuous delivery environment? A. Use the AWS CloudFormation ValidateTemplate call before publishing changes to AWS. B. Model your stack in one template, so you can leverage CloudFormation's state management and dependency resolution to propagate all changes. C. Use CloudFormation to create brand new infrastructure for all stateless resources on each push, and run integration tests on that set of infrastructure. D. Parametrize the template and use Mappings to ensure your template works in multiple Regions.

B. Model your stack in one template, so you can leverage CloudFormation's state management and dependency resolution to propagate all changes. Answer - B Some of the best practices for CloudFormation are: 1) Create nested stacks - As your infrastructure grows, common patterns can emerge in which you declare the same components in each of your templates. You can separate out these common components and create dedicated templates for them. That way, you can mix and match different templates but use nested stacks to create a single, unified stack. Nested stacks are stacks that create other stacks. To create nested stacks, use the AWS::CloudFormation::Stack resource in your template to reference other templates. 2) Reuse templates - After you have your stacks and resources set up, you can reuse your templates to replicate your infrastructure in multiple environments. For example, you can create environments for development, testing, and production so that you can test changes before implementing them into production. To make templates reusable, use the parameters, mappings, and conditions sections so that you can customize your stacks when you create them. For example, for your development environments, you can specify a lower-cost instance type compared to your production environment, but all other configurations and settings remain the same. For more information on CloudFormation best practices, please visit the below URL: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html
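Option A, by contrast, is a best practice; a template can be checked before it is published to the pipeline with a call such as the following (the file path is a placeholder).

    import boto3

    cloudformation = boto3.client("cloudformation")

    # Fails fast on malformed templates before they reach the delivery pipeline.
    with open("stack.template.json") as handle:          # placeholder path
        result = cloudformation.validate_template(TemplateBody=handle.read())

    print(result["Parameters"])   # declared parameters, useful for pipeline checks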

When building a multicontainer Docker platform using Elastic Beanstalk, which of the following is required? A. A Dockerfile to create custom images during deployment B. Prebuilt images stored in a public or private online image repository. C. Kubernetes to manage the Docker containers. D. Red Hat OpenShift to manage the Docker containers.

B. Prebuilt Images stored in a public or private online image repository. Answer - B This is a special note given in the AWS Documentation for Multicontainer Docker platform for Elastic Beanstalk Building custom images during deployment with a Dockerfile is not supported by the multicontainer Docker platform on Elastic Beanstalk. Build your images and deploy them to an online repository before creating an Elastic Beanstalk environment. For more information on Multicontainer Docker platform for Elastic Beanstalk, please refer to the below link: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_ecs.html

You need to investigate one of the instances that is part of your Auto Scaling Group. How would you implement this? A. Suspend the AZRebalance process so that Autoscaling will not terminate the instance B. Put the instance in a standby state C. Put the instance in an InService state D. Suspend the AddToLoadBalancer process

B. Put the instance in a standby state Answer - B The AWS Documentation mentions Auto Scaling enables you to put an instance that is in the InService state into the Standby state, update or troubleshoot the instance, and then return the instance to service. Instances that are on standby are still part of the Auto Scaling group, but they do not actively handle application traffic. For more information on the standby state please refer to the below link: http://docs.aws.amazon.com/autoscaling/latest/userguide/as-enter-exit-standby.html
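A brief sketch of moving an instance into Standby for investigation and returning it afterwards; the group name and instance ID are placeholders.

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Take the instance out of traffic rotation without terminating it.
    autoscaling.enter_standby(
        AutoScalingGroupName="web-asg",            # placeholder group name
        InstanceIds=["i-0123456789abcdef0"],       # placeholder instance ID
        ShouldDecrementDesiredCapacity=True,       # do not launch a replacement
    )

    # ... investigate or troubleshoot the instance ...

    # Return it to service when done.
    autoscaling.exit_standby(
        AutoScalingGroupName="web-asg",
        InstanceIds=["i-0123456789abcdef0"],
    )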

You are a DevOps engineer for a company. You have been requested to create a rolling deployment solution that is cost-effective with minimal downtime. How should you achieve this? Choose two answers from the options below A. Re-deploy your application using a CloudFormation template to deploy Elastic Beanstalk B. Re-deploy with a CloudFormation template, define update policies on Auto Scaling groups in your CloudFormation template C. Use update stack policies with CloudFormation to deploy new code D. After each stack is deployed, tear down the old stack

B. Re-deploy with a CloudFormation template, define update policies on Auto Scaling groups in your CloudFormation template C. Use update stack policies with CloudFormation to deploy new code Answer - B and C The AWS::AutoScaling::AutoScalingGroup resource supports an UpdatePolicy attribute. This is used to define how an Auto Scaling group resource is updated when an update to the CloudFormation stack occurs. A common approach to updating an Auto Scaling group is to perform a rolling update, which is done by specifying the AutoScalingRollingUpdate policy. This retains the same Auto Scaling group and replaces old instances with new ones, according to the parameters specified. Option A is invalid because it is not efficient to use CloudFormation just to re-deploy Elastic Beanstalk. Option D is invalid because tearing down the old stack after every deployment is inefficient when update policies are available. For more information on Auto Scaling rolling updates, please refer to the below link: https://aws.amazon.com/premiumsupport/knowledge-center/auto-scaling-group-rolling-updates/

You meet once per month with your operations team to review the past month's data. During the meeting, you realize that 3 weeks ago, your monitoring system which pings over HTTP from outside AWS recorded a large spike in latency on your 3-tier web service API. You use DynamoDB for the database layer, ELB, EBS, and EC2 for the business logic tier, and SQS, ELB, and EC2 for the presentation layer. Which of the following techniques will NOT help you figure out what happened? A. Check your CloudTrail log history around the spike's time for any API calls that caused slowness. B. Review CloudWatch Metrics for one minute interval graphs to determine which component(s) slowed the system down. C. Review your ELB access logs in S3 to see if any ELBs in your system saw the latency. D. Analyze your logs to detect bursts in traffic at that time.

B. Review CloudWatch Metrics for one minute interval graphs to determine which component(s) slowed the system down. Answer - B CloudWatch metric retention is as follows; because the spike occurred 3 weeks ago, one-minute data points are no longer available in CloudWatch. Data points with a period of less than 60 seconds are available for 3 hours (these data points are high-resolution custom metrics). Data points with a period of 60 seconds (1 minute) are available for 15 days. Data points with a period of 300 seconds (5 minutes) are available for 63 days. Data points with a period of 3600 seconds (1 hour) are available for 455 days (15 months). For more information on CloudWatch metrics, please visit the below URL: http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_concepts.html

Which of the following is the default deployment mechanism used by Elastic Beanstalk A. All at Once B. Rolling Deployments C. Rolling with additional batch D. Immutable

B. Rolling Deployments Answer - B The AWS documentation mentions AWS Elastic Beanstalk provides several options for how deployments are processed, including deployment policies (All at once, Rolling, Rolling with additional batch, and Immutable) and options that let you configure batch size and health check behavior during deployments. By default, your environment uses rolling deployments if you created it with the console or EB CLI, or all at once deployments if you created it with a different client (API, SDK or AWS CLI). For more information on Elastic Beanstalk deployments, please refer to the below link: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.rolling-version-deploy.html
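If you wanted to set the deployment policy explicitly rather than relying on the default, a hedged boto3 sketch (the environment name and batch values are assumptions) would set the DeploymentPolicy option in the aws:elasticbeanstalk:command namespace:

import boto3

eb = boto3.client('elasticbeanstalk')

# Explicitly configure rolling deployments in batches of 30% of the instances
eb.update_environment(
    EnvironmentName='my-app-env',   # hypothetical environment name
    OptionSettings=[
        {'Namespace': 'aws:elasticbeanstalk:command',
         'OptionName': 'DeploymentPolicy', 'Value': 'Rolling'},
        {'Namespace': 'aws:elasticbeanstalk:command',
         'OptionName': 'BatchSizeType', 'Value': 'Percentage'},
        {'Namespace': 'aws:elasticbeanstalk:command',
         'OptionName': 'BatchSize', 'Value': '30'}
    ]
)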

There is a very serious outage at AWS. EC2 is not affected, but your EC2 instance deployment scripts stopped working in the region with the outage. What might be the issue? A. The AWS Console is down, so your CLI commands do not work. B. S3 is unavailable, so you can't create EBS volumes from a snapshot you use to deploy new volumes. C. AWS turns off the DeployCode API call when there are major outages, to protect from system floods. D. None of the other answers make sense. If EC2 is not affected, it must be some other issue.

B. S3 is unavailable, so you can't create EBS volumes from a snapshot you use to deploy new volumes. Answer - B EBS snapshots are stored in S3, so if you have scripts which deploy EC2 instances, the EBS volumes need to be constructed from snapshots stored in S3. You can back up the data on your Amazon EBS volumes to Amazon S3 by taking point-in-time snapshots. Snapshots are incremental backups, which means that only the blocks on the device that have changed after your most recent snapshot are saved. This minimizes the time required to create the snapshot and saves on storage costs by not duplicating data. When you delete a snapshot, only the data unique to that snapshot is removed. Each snapshot contains all of the information needed to restore your data (from the moment when the snapshot was taken) to a new EBS volume. For more information on EBS Snapshots, please visit the below URL: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSSnapshots.html
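A small boto3 sketch of the snapshot-and-restore path that such deployment scripts depend on (the volume ID and Availability Zone are placeholders); if S3 were unavailable, the restore step is the one that would fail:

import boto3

ec2 = boto3.client('ec2')

# Take a point-in-time snapshot of an existing volume (stored in S3 behind the scenes)
snapshot = ec2.create_snapshot(
    VolumeId='vol-0123456789abcdef0',     # placeholder volume ID
    Description='Golden data volume for deployments'
)
ec2.get_waiter('snapshot_completed').wait(SnapshotIds=[snapshot['SnapshotId']])

# Later, the deployment script restores a new volume from that snapshot
volume = ec2.create_volume(
    SnapshotId=snapshot['SnapshotId'],
    AvailabilityZone='us-east-1a'         # placeholder AZ
)
print(volume['VolumeId'])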

You need to store a large volume of data. The data needs to be readily accessible for a short period, but then needs to be archived indefinitely after that. What is a cost-effective solution that can help fulfil this requirement? A. Keep all your data in S3 since this is a durable storage. B. Store your data in Amazon S3, and use lifecycle policies to archive to Amazon Glacier C. Store your data in an EBS volume, and use lifecycle policies to archive to Amazon Glacier. D. Store your data in Amazon S3, and use lifecycle policies to archive to S3 Infrequent Access

B. Store your data in Amazon S3, and use lifecycle policies to archive to Amazon Glacier Answer - B Amazon Glacier is a secure, durable, and extremely low-cost cloud storage service for data archiving and long-term backup. Customers can reliably store large or small amounts of data for as little as $0.004 per gigabyte per month, a significant savings compared to on-premises solutions. For more information on Glacier, please refer to the below link: https://aws.amazon.com/glacier/ Lifecycle configuration enables you to specify the lifecycle management of objects in a bucket. The configuration is a set of one or more rules, where each rule defines an action for Amazon S3 to apply to a group of objects. For more information on S3 Lifecycle policies, please refer to the below link: http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html
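A hedged boto3 sketch of such a lifecycle rule (the bucket name, prefix, and 30-day threshold are assumptions) that keeps objects readily accessible in S3 and then archives them to Glacier:

import boto3

s3 = boto3.client('s3')

s3.put_bucket_lifecycle_configuration(
    Bucket='my-data-bucket',                   # hypothetical bucket name
    LifecycleConfiguration={
        'Rules': [{
            'ID': 'archive-to-glacier',
            'Filter': {'Prefix': 'reports/'},  # hypothetical prefix
            'Status': 'Enabled',
            'Transitions': [
                # Archive after the short period of ready access
                {'Days': 30, 'StorageClass': 'GLACIER'}
            ]
        }]
    }
)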

An EC2 instance has failed a health check. What will the ELB do? A. The ELB will terminate the instance B. The ELB stops sending traffic to the instance that failed its health check C. The ELB does nothing D. The ELB will replace the instance

B. The ELB stops sending traffic to the instance that failed its health check Answer - B The AWS Documentation mentions The load balancer routes requests only to the healthy instances. When the load balancer determines that an instance is unhealthy, it stops routing requests to that instance. The load balancer resumes routing requests to the instance when it has been restored to a healthy state. For more information on ELB health checks, please refer to the below link: http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-healthchecks.html

You run a 2000-engineer organization. You are about to begin using AWS at a large scale for the first time. You want to integrate with your existing identity management system running on Microsoft Active Directory, because your organization is a power-user of Active Directory. How should you manage your AWS identities in the simplest manner? A. Use AWS Directory Service Simple AD. B. Use AWS Directory Service AD Connector. C. Use a Sync Domain running on AWS Directory Service. D. Use an AWS Directory Sync Domain running on AWS Lambda.

B. Use AWS Directory Service AD Connector. Answer - B AD Connector is a directory gateway with which you can redirect directory requests to your on-premises Microsoft Active Directory without caching any information in the cloud. AD Connector comes in two sizes, small and large. A small AD Connector is designed for smaller organizations of up to 500 users. A large AD Connector can support larger organizations of up to 5,000 users. Once set up, AD Connector offers the following benefits: Your end users and IT administrators can use their existing corporate credentials to log on to AWS applications such as Amazon WorkSpaces, Amazon WorkDocs, or Amazon WorkMail. You can manage AWS resources like Amazon EC2 instances or Amazon S3 buckets through IAM role-based access to the AWS Management Console. You can consistently enforce existing security policies (such as password expiration, password history, and account lockouts) whether users or IT administrators are accessing resources in your on-premises infrastructure or in the AWS Cloud. You can use AD Connector to enable multi-factor authentication by integrating with your existing RADIUS-based MFA infrastructure to provide an additional layer of security when users access AWS applications. For more information on the AD Connector, please visit the below URL: http://docs.aws.amazon.com/directoryservice/latest/admin-guide/directory_ad_connector.html

Your company needs to automate 3 layers of a large cloud deployment. You want to be able to track this deployment's evolution as it changes over time, and carefully control any alterations. What is a good way to automate a stack to meet these requirements? A. Use OpsWorks Stacks with three layers to model the layering in your stack. B. Use CloudFormation Nested Stack Templates, with three child stacks to represent the three logical layers of your cloud. C. Use AWS Config to declare a configuration set that AWS should roll out to your cloud. D. Use Elastic Beanstalk Linked Applications, passing the important DNS entries between layers using the metadata interface.

B. Use CloudFormation Nested Stack Templates, with three child stacks to represent the three logical layers of your cloud. Answer - B As your infrastructure grows, common patterns can emerge in which you declare the same components in each of your templates. You can separate out these common components and create dedicated templates for them. That way, you can mix and match different templates but use nested stacks to create a single, unified stack. Nested stacks are stacks that create other stacks. To create nested stacks, use the AWS::CloudFormation::Stack resource in your template to reference other templates. For more information on nested stacks, please visit the below URL: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html#nested

You currently have an application with an Auto Scaling group with an Elastic Load Balancer configured in AWS. After deployment users are complaining of slow response time for your application. Which of the following can be used as a start to diagnose the issue A. Use Cloudwatch to monitor the HealthyHostCount metric B. Use Cloudwatch to monitor the ELB latency C. Use Cloudwatch to monitor the CPU Utilization D. Use Cloudwatch to monitor the Memory Utilization

B. Use Cloudwatch to monitor the ELB latency Answer - B High latency on the ELB side can be caused by several factors, such as network connectivity, the ELB configuration, or backend web application server issues. For more information on ELB latency, please refer to the below link: https://aws.amazon.com/premiumsupport/knowledge-center/elb-latency-troubleshooting/

You are designing a service that aggregates clickstream data in batch and delivers reports to subscribers via email only once per week. Data is extremely spikey, geographically distributed, high-scale, and unpredictable. How should you design this system? A. Use a large RedShift cluster to perform the analysis, and a fleet of Lambdas to perform record inserts into the RedShift tables. Lambda will scale rapidly enough for the traffic spikes. B. Use a CloudFront distribution with access log delivery to S3. Clicks should be recorded as querystring GETs to the distribution. Reports are built and sent by periodically running EMR jobs over the access logs in S3. C. Use API Gateway invoking Lambdas which PutRecords into Kinesis, and EMR running Spark performing GetRecords on Kinesis to scale with spikes. Spark on EMR outputs the analysis to S3, which are sent out via email. D. Use AWS Elasticsearch service and EC2 Auto Scaling groups. The Autoscaling groups scale based on click throughput and stream into the Elasticsearch domain, which is also scalable. Use Kibana to generate reports periodically.

B. Use a CloudFront distribution with access log delivery to S3. Clicks should be recorded as querystring GETs to the distribution. Reports are built and sent by periodically running EMR jobs over the access logs in S3. Answer - B When you look at building reports or analyzing data from a large data set, you need to consider EMR, because this service is built on the Hadoop framework, which is designed to process large data sets. The ideal approach to getting data onto EMR is to use S3. Since the data is extremely spiky and geographically distributed, using edge locations via a CloudFront distribution is the best way to capture the clicks. Option A is invalid because Redshift is a petabyte-scale data warehouse and is not the right fit for ingesting this kind of spiky clickstream traffic. Option C is invalid because having both Kinesis and EMR for the job analysis is redundant. Option D is invalid because Elasticsearch is not the right option for this kind of batch report processing. For more information on Amazon EMR, please visit the below URL: https://aws.amazon.com/emr/

Your development team is using access keys to develop an application that has access to S3 and DynamoDB. A new security policy has outlined that the credentials should not be older than 2 months, and should be rotated. How can you achieve this? A. Use the application to rotate the keys in every 2 months via the SDK B. Use a script which will query the date the keys are created. If older than 2 months, delete them and recreate new keys C. Delete the user associated with the keys after every 2 months. Then recreate the user again. D. Delete the IAM Role associated with the keys after every 2 months. Then recreate the IAM Role again.

B. Use a script which will query the date the keys are created. If older than 2 months, delete them and recreate new keys Answer - B One can use the CLI command list-access-keys to get the access keys. This command also returns the "CreateDate" of the keys. If the CreateDate is older than 2 months, then the keys can be deleted. The list-access-keys CLI command returns information about the access key IDs associated with the specified IAM user. If there are none, the action returns an empty list. For more information on the CLI command, please refer to the below link: http://docs.aws.amazon.com/cli/latest/reference/iam/list-access-keys.html
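A minimal sketch of such a rotation script in Python with boto3 (the user name is a placeholder, and a real script would hand the new key to the application's secret store before deleting the old one):

import boto3
from datetime import datetime, timedelta, timezone

iam = boto3.client('iam')
USER = 'dev-app-user'            # placeholder IAM user name
MAX_AGE = timedelta(days=60)     # "older than 2 months"

for key in iam.list_access_keys(UserName=USER)['AccessKeyMetadata']:
    age = datetime.now(timezone.utc) - key['CreateDate']
    if age > MAX_AGE:
        # Delete the expired key and create a replacement
        iam.delete_access_key(UserName=USER, AccessKeyId=key['AccessKeyId'])
        new_key = iam.create_access_key(UserName=USER)['AccessKey']
        # Distribute new_key['AccessKeyId'] / new_key['SecretAccessKey'] to the application here
        print('Rotated key, new id:', new_key['AccessKeyId'])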

You want to pass queue messages that are 1GB each. How should you achieve this? A. Use Kinesis as a buffer stream for message bodies. Store the checkpoint id for the placement in the Kinesis Stream in SQS. B. Use the Amazon SQS Extended Client Library for Java and Amazon S3 as a storage mechanism for message bodies. C. Use SQS's support for message partitioning and multi-part uploads on Amazon S3. D. Use AWS EFS as a shared pool storage medium. Store filesystem pointers to the files on disk in the SQS message bodies.

B. Use the Amazon SQS Extended Client Library for Java and Amazon S3 as a storage mechanism for message bodies. Answer - B You can manage Amazon SQS messages with Amazon S3. This is especially useful for storing and consuming messages with a message size of up to 2 GB. To manage Amazon SQS messages with Amazon S3, use the Amazon SQS Extended Client Library for Java. Specifically, you use this library to: Specify whether messages are always stored in Amazon S3 or only when a message's size exceeds 256 KB. Send a message that references a single message object stored in an Amazon S3 bucket. Get the corresponding message object from an Amazon S3 bucket. Delete the corresponding message object from an Amazon S3 bucket. For more information on processing large messages for SQS, please visit the below URL: http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-s3-messages.html

You currently have EC2 Instances hosting an application. These instances are part of an Autoscaling Group. You now want to change the instance type of the EC2 Instances. How can you manage the deployment with the least amount of downtime A. Terminate the existing Auto Scaling group. Create a new launch configuration with the new Instance type. Attach that to the new Autoscaling Group. B. Use the Rolling Update feature which is available for Autoscaling C. Use the Rolling Update feature which is available for EC2 Instances. D. Manually terminate the instances, launch new instances with the new instance type and attach them to the Autoscaling group

B. Use the Rolling Update feature which is available for Autoscaling Answer - B The AWS::AutoScaling::AutoScalingGroup resource supports an UpdatePolicy attribute. This is used to define how an Auto Scaling group resource is updated when an update to the CloudFormation stack occurs. A common approach to updating an Auto Scaling group is to perform a rolling update, which is done by specifying the AutoScalingRollingUpdate policy. This retains the same Auto Scaling group and replaces old instances with new ones, according to the parameters specified. For more information on AutoScaling Rolling Update, please refer to the below link: https://aws.amazon.com/premiumsupport/knowledge-center/auto-scaling-group-rolling-updates/

Which of the following is the right sequence of initial steps in the deployment of application revisions using Code Deploy 1) Specify deployment configuration 2) Upload revision 3) Create application 4) Specify deployment group A. 3,2,1 and 4 B. 3,1,2 and 4 C. 3,4,1 and 2 D. 3,4,2 and 1

C. 3,4,1 and 2 Answer - C According to the deployment steps described in the AWS documentation, you first create an application, then specify a deployment group, then specify a deployment configuration, and finally upload your application revision before deploying it. For more information on the deployment steps please refer to the below link: http://docs.aws.amazon.com/codedeploy/latest/userguide/deployment-steps.html

There is a requirement to monitor API calls against your AWS account by different users and entities. There needs to be a history of those calls, and that history is needed in bulk for later review. Which 2 services can be used in this scenario? A. AWS Config; AWS Inspector B. AWS CloudTrail; AWS Config C. AWS CloudTrail; CloudWatch Events D. AWS Config; AWS Lambda

C. AWS CloudTrail; CloudWatch Events Answer - C You can use AWS CloudTrail to get a history of AWS API calls and related events for your account. This history includes calls made with the AWS Management Console, AWS Command Line Interface, AWS SDKs, and other AWS services. For more information on Cloudtrail, please visit the below URL: http://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html Amazon CloudWatch Events delivers a near real-time stream of system events that describe changes in Amazon Web Services (AWS) resources. Using simple rules that you can quickly set up, you can match events and route them to one or more target functions or streams. CloudWatch Events becomes aware of operational changes as they occur. CloudWatch Events responds to these operational changes and takes corrective action as necessary, by sending messages to respond to the environment, activating functions, making changes, and capturing state information. For more information on Cloudwatch events, please visit the below URL: http://docs.aws.amazon.com/AmazonCloudWatch/latest/events/WhatIsCloudWatchEvents.html

Your company uses AWS to host its resources. They have the following requirements: 1) Record all API calls and transitions 2) Help in understanding what resources are there in the account 3) Facility to allow auditing credentials and logins Which services would satisfy the above requirements? A. AWS Config, CloudTrail, IAM Credential Reports B. CloudTrail, IAM Credential Reports, AWS Config C. CloudTrail, AWS Config, IAM Credential Reports D. AWS Config, IAM Credential Reports, CloudTrail

C. CloudTrail, AWS Config, IAM Credential Reports Answer - C You can use AWS CloudTrail to get a history of AWS API calls and related events for your account. This history includes calls made with the AWS Management Console, AWS Command Line Interface, AWS SDKs, and other AWS services. For more information on Cloudtrail, please visit the below URL: http://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. Config continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations. With Config, you can review changes in configurations and relationships between AWS resources, dive into detailed resource configuration histories, and determine your overall compliance against the configurations specified in your internal guidelines. This enables you to simplify compliance auditing, security analysis, change management, and operational troubleshooting. For more information on the config service, please visit the below URL: https://aws.amazon.com/config/ You can generate and download a credential report that lists all users in your account and the status of their various credentials, including passwords, access keys, and MFA devices. You can get a credential report from the AWS Management Console, the AWS SDKs and Command Line Tools, or the IAM API. For more information on Credentials Report, please visit the below URL: http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_getting-report.html

Your application consists of 10% writes and 90% reads. You currently service all requests through a Route53 Alias Record directed towards an AWS ELB, which sits in front of an EC2 Auto Scaling Group. Your system is getting very expensive when there are large traffic spikes during certain news events, during which many more people request to read similar data all at the same time. What is the simplest and cheapest way to reduce costs and scale with spikes like this? A. Create an S3 bucket and asynchronously replicate common requests responses into S3 objects. When a request comes in for a precomputed response, redirect to AWS S3. B. Create another ELB and Auto Scaling Group layer mounted on top of the other system, adding a tier to the system. Serve most read requests out of the top layer. C. Create a CloudFront Distribution and direct Route53 to the Distribution. Use the ELB as an Origin and specify Cache Behaviours to proxy cache requests which can be served late. D. Create a Memcached cluster in AWS ElastiCache. Create cache logic to serve requests which can be served late from the in-memory cache for increased performance.

C. Create a CloudFront Distribution and direct Route53 to the Distribution. Use the ELB as an Origin and specify Cache Behaviours to proxy cache requests which can be served late. Answer - C Use Cloudfront distribution for distributing the heavy reads for your application. You can create a zone apex record to point to the Cloudfront distribution. You can control how long your objects stay in a CloudFront cache before CloudFront forwards another request to your origin. Reducing the duration allows you to serve dynamic content. Increasing the duration means your users get better performance because your objects are more likely to be served directly from the edge cache. A longer duration also reduces the load on your origin. For more information on Cloudfront object expiration, please visit the below URL: http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Expiration.html

You need to grant a vendor access to your AWS account. They need to be able to read protected messages in a private S3 bucket at their leisure. They also use AWS. What is the best way to accomplish this? A. Create an IAM User with API Access Keys. Grant the User permissions to access the bucket. Give the vendor the AWS Access Key ID and AWS Secret Access Key for the User. B. Create an EC2 Instance Profile on your account. Grant the associated IAM role full access to the bucket. Start an EC2 instance with this Profile and give SSH access to the instance to the vendor. C. Create a cross-account IAM Role with permission to access the bucket, and grant permission to use the Role to the vendor AWS account. D. Generate a signed S3 GET URL and a signed S3 PUT URL, both with wildcard values and 2 year durations. Pass the URLs to the vendor.

C. Create a cross-account IAM Role with permission to access the bucket, and grant permission to use the Role to the vendor AWS account. Answer C You can use AWS Identity and Access Management (IAM) roles and AWS Security Token Service (STS) to set up cross-account access between AWS accounts. When you assume an IAM role in another AWS account to obtain cross-account access to services and resources in that account, AWS CloudTrail logs the cross-account activity For more information on Cross Account Access, please visit the below URL: https://aws.amazon.com/blogs/security/tag/cross-account-access/
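On the vendor's side, reading the bucket would look roughly like the boto3 sketch below; the role ARN, session name, and bucket name are placeholders for whatever you actually provision:

import boto3

sts = boto3.client('sts')

# The vendor assumes the cross-account role you created in your account
creds = sts.assume_role(
    RoleArn='arn:aws:iam::111122223333:role/VendorS3ReadRole',  # placeholder role ARN
    RoleSessionName='vendor-read-session'
)['Credentials']

# The temporary STS credentials are then used to read the protected objects
s3 = boto3.client(
    's3',
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken']
)
for obj in s3.list_objects_v2(Bucket='my-private-bucket').get('Contents', []):  # placeholder bucket
    print(obj['Key'])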

There is a requirement for a vendor to have access to an S3 bucket in your account. The vendor already has an AWS account. How can you provide access to the vendor on this bucket. A. Create a new IAM user and grant the relevant access to the vendor on that bucket. B. Create a new IAM group and grant the relevant access to the vendor on that bucket. C. Create a cross-account role for the vendor account and grant that role access to the S3 bucket. D. Create an S3 bucket policy that allows the vendor to read from the bucket from their AWS account.

C. Create a cross-account role for the vendor account and grant that role access to the S3 bucket. Answer - C The AWS documentation mentions You share resources in one account with users in a different account. By setting up cross-account access in this way, you don't need to create individual IAM users in each account. In addition, users don't have to sign out of one account and sign into another in order to access resources that are in different AWS accounts. After configuring the role, you see how to use the role from the AWS Management Console, the AWS CLI, and the API For more information on Cross Account Roles Access, please refer to the below link: http://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_cross-account-with-roles.html

You have deployed an application to AWS which makes use of Autoscaling to launch new instances. You now want to change the instance type for the new instances. Which of the following is one of the action items to achieve this deployment? A. Use Elastic Beanstalk to deploy the new application with the new instance type B. Use Cloudformation to deploy the new application with the new instance type C. Create a new launch configuration with the new instance type D. Create new EC2 instances with the new instance type and attach it to the Autoscaling Group

C. Create a new launch configuration with the new instance type Answer - C The ideal way is to create a new launch configuration with the new instance type, attach it to the existing Auto Scaling group, and then terminate the running instances so that replacements are launched from the new configuration. Option A is invalid because Elastic Beanstalk is not being used here; since the current scenario already relies on Auto Scaling, redeploying through Elastic Beanstalk is not the ideal option. Option B is invalid because creating a whole CloudFormation template just to change the instance type of an existing Auto Scaling group adds maintenance overhead. Option D is invalid because the Auto Scaling group would still launch EC2 instances with the older launch configuration. For more information on Auto Scaling launch configurations, please refer to the below link from the AWS documentation: http://docs.aws.amazon.com/autoscaling/latest/userguide/LaunchConfiguration.html
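A hedged boto3 sketch of this action item (the AMI ID, instance type, and names are placeholders); once the group points at the new launch configuration, newly launched instances use the new type:

import boto3

autoscaling = boto3.client('autoscaling')

# Create a launch configuration that specifies the new instance type
autoscaling.create_launch_configuration(
    LaunchConfigurationName='web-lc-v2',     # placeholder name
    ImageId='ami-0123456789abcdef0',         # placeholder AMI
    InstanceType='m4.large'                  # the new instance type
)

# Point the existing Auto Scaling group at the new launch configuration
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName='web-asg',          # placeholder group name
    LaunchConfigurationName='web-lc-v2'
)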

You need to scale an RDS deployment. You are operating at 10% writes and 90% reads, based on your logging. How best can you scale this in a simple way? A. Create a second master RDS instance and peer the RDS groups. B. Cache all the database responses on the read side with CloudFront. C. Create read replicas for RDS since the load is mostly reads. D. Create a Multi-AZ RDS installs and route read traffic to standby.

C. Create read replicas for RDS since the load is mostly reads. Answer - C Amazon RDS Read Replicas provide enhanced performance and durability for database (DB) instances. This replication feature makes it easy to elastically scale out beyond the capacity constraints of a single DB Instance for read-heavy database workloads. You can create one or more replicas of a given source DB Instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput. Read replicas can also be promoted when needed to become standalone DB instances. Option A is invalid because you would need to maintain the synchronization with the second master yourself. Option B is invalid because it introduces another layer unnecessarily when read replicas are available. Option D is invalid because the Multi-AZ standby exists only for failover and cannot serve read traffic. For more information on Read Replicas, please refer to the below link: https://aws.amazon.com/rds/details/read-replicas/
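Creating a replica is a single API call; a boto3 sketch with placeholder instance identifiers:

import boto3

rds = boto3.client('rds')

# Create a read replica of the existing source instance to absorb read traffic
rds.create_db_instance_read_replica(
    DBInstanceIdentifier='orders-db-replica-1',   # placeholder replica name
    SourceDBInstanceIdentifier='orders-db'        # placeholder source instance
)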

You work for a very large company that has multiple applications which are very different and built on different programming languages. How can you deploy applications as quickly as possible? A. Develop each app in one Docker container and deploy using ElasticBeanstalk B. Create a Lambda function deployment package consisting of code and any dependencies C. Develop each app in a separate Docker container and deploy using Elastic Beanstalk D. Develop each app in a separate Docker containers and deploy using CloudFormation

C. Develop each app in a separate Docker container and deploy using Elastic Beanstalk Answer - C Elastic Beanstalk supports the deployment of web applications from Docker containers. With Docker containers, you can define your own runtime environment. You can choose your own platform, programming language, and any application dependencies (such as package managers or tools) that aren't supported by other platforms. Docker containers are self-contained and include all the configuration information and software your web application requires to run. Option A is not an efficient way to use Docker; the entire idea of Docker is that each application gets its own isolated environment, so very different applications should not be packed into a single container. Option B is not ideal because Lambda deployment packages are meant for running code, not for packaging full applications with very different runtimes and dependencies. Option D is not ideal because deploying Docker containers directly through CloudFormation adds unnecessary complexity compared with Elastic Beanstalk. For more information on Docker and Elastic Beanstalk, please visit the below URL: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.html

Which of the following services can be used to quickly spin up development environments for your development community A. Cloudformation B. Opswork C. Elastic beanstalk D. CodeCommit

C. Elastic beanstalk Answer - C With Elastic Beanstalk, developers can quickly deploy and manage applications without provisioning or managing the underlying infrastructure, which makes it the quickest way to spin up environments for a development community. For more information on Elastic Beanstalk, please visit the below URL: https://aws.amazon.com/elasticbeanstalk/

For AWS Auto Scaling, what is the first transition state an existing instance enters after leaving Standby state? A. Detaching B. Terminating:Wait C. Pending D. EnteringStandby

C. Pending Answer - C In the Auto Scaling instance lifecycle, an instance that leaves the Standby state first enters the Pending state before it is returned to service. For more information on the Auto Scaling lifecycle, please refer to the below link: http://docs.aws.amazon.com/autoscaling/latest/userguide/AutoScalingGroupLifecycle.html

You have deployed a Cloudformation template which is used to spin up resources in your account. Which of the following statuses in Cloudformation represents a failure? A. UPDATE_COMPLETE_CLEANUP_IN_PROGRESS B. DELETE_COMPLETE_WITH_ARTIFACTS C. ROLLBACK_IN_PROGRESS D. ROLLBACK_FAILED

C. ROLLBACK_IN_PROGRESS Answer - C AWS CloudFormation provisions and configures resources by making calls to the AWS services that are described in your template. After all the resources have been created, AWS CloudFormation reports that your stack has been created. You can then start using the resources in your stack. If stack creation fails, AWS CloudFormation rolls back your changes by deleting the resources that it created; while this cleanup is underway, the stack shows the ROLLBACK_IN_PROGRESS status, which indicates that the creation has failed. For more information on how CloudFormation works, please refer to the below link: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-whatis-howdoesitwork.html

You are planning on using encrypted snapshots in the design of your AWS Infrastructure. Which of the following statements are true with regards to EBS Encryption A. Snapshotting an encrypted volume makes an encrypted snapshot; restoring an encrypted snapshot creates an encrypted volume when specified / requested. B. Snapshotting an encrypted volume makes an encrypted snapshot when specified / requested; restoring an encrypted snapshot creates an encrypted volume when specified / requested. C. Snapshotting an encrypted volume makes an encrypted snapshot; restoring an encrypted snapshot always creates an encrypted volume. D. Snapshotting an encrypted volume makes an encrypted snapshot when specified / requested; restoring an encrypted snapshot always creates an encrypted volume.

C. Snapshotting an encrypted volume makes an encrypted snapshot; restoring an encrypted snapshot always creates an encrypted volume. Answer - C Amazon EBS encryption offers you a simple encryption solution for your EBS volumes without the need for you to build, maintain, and secure your own key management infrastructure. When you create an encrypted EBS volume and attach it to a supported instance type, the following types of data are encrypted: Data at rest inside the volume All data moving between the volume and the instance All snapshots created from the volume Snapshots that are taken from encrypted volumes are automatically encrypted. Volumes that are created from encrypted snapshots are also automatically encrypted. For more information on EBS Encryption, please visit the below URL: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html

If I want CloudFormation stack status updates to show up in a continuous delivery system in as close to real time as possible, how should I achieve this? A. Use a long-poll on the Resources object in your CloudFormation stack and display those state changes in the UI for the system. B. Use a long-poll on the ListStacks API call for your CloudFormation stack and display those state changes in the UI for the system. C. Subscribe your continuous delivery system to an SNS topic that you also tell your CloudFormation stack to publish events into. D. Subscribe your continuous delivery system to an SQS queue that you also tell your CloudFormation stack to publish events into.

C. Subscribe your continuous delivery system to an SNS topic that you also tell your CloudFormation stack to publish events into. Answer - C When you create or update a stack, you can supply the ARNs of one or more SNS topics (the NotificationARNs parameter), and CloudFormation publishes every stack event to those topics as it happens, so a subscriber receives status updates in near real time. Polling is less immediate: the console's Events tab displays each major step in the creation and update of the stack sorted by the time of each event, with the latest events on top, and the start of a stack update is marked with an UPDATE_IN_PROGRESS event for the stack. For more information on monitoring your stack, please visit the below URL: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-monitor-stack.html
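A hedged boto3 sketch of wiring this up at stack creation time (the stack name, template URL, and topic ARN are placeholders); the continuous delivery system would subscribe its own endpoint to the same topic:

import boto3

cloudformation = boto3.client('cloudformation')

cloudformation.create_stack(
    StackName='my-app-stack',                                        # placeholder name
    TemplateURL='https://s3.amazonaws.com/my-bucket/template.json',  # placeholder template
    # Every stack event (CREATE_IN_PROGRESS, UPDATE_COMPLETE, ...) is published here
    NotificationARNs=['arn:aws:sns:us-east-1:111122223333:stack-events']  # placeholder topic
)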

Which of the following features of the Elastic Beanstalk service will allow you to perform a Blue Green Deployment A. Rebuild Environment B. Swap Environment C. Swap URL's D. Environment Configuration

C. Swap URL's Answer - C With the Swap Environment URLs feature, you can keep a second environment running the new version of your application ready, and when you are ready to cut over, you simply swap the URLs (CNAMEs) of the two environments to switch traffic to the new environment. For more information on the Swap Environment URLs feature, please refer to the below link: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.CNAMESwap.html
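The cut-over itself is a single call; a boto3 sketch with hypothetical names for the currently live ("blue") and staged ("green") environments:

import boto3

eb = boto3.client('elasticbeanstalk')

# Exchange the CNAMEs of the two environments so traffic shifts to the new version
eb.swap_environment_cnames(
    SourceEnvironmentName='my-app-blue',        # hypothetical: currently serving traffic
    DestinationEnvironmentName='my-app-green'   # hypothetical: running the new version
)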

Your company wants to understand where cost is coming from in the company's production AWS account. There are a number of applications and services running at any given time. Without expending too much initial development time, how best can you give the business a good understanding of which applications cost the most per month to operate? A. Create an automation script which periodically creates AWS Support tickets requesting detailed intra-month information about your bill. B. Use custom CloudWatch Metrics in your system, and put a metric data point whenever cost is incurred. C. Use AWS Cost Allocation Tagging for all resources which support it. Use the Cost Explorer to analyze costs throughout the month. D. Use the AWS Price API and constantly running resource inventory scripts to calculate total price based on multiplication of consumed resources over time.

C. Use AWS Cost Allocation Tagging for all resources which support it. Use the Cost Explorer to analyze costs throughout the month. Answer - C A tag is a label that you or AWS assigns to an AWS resource. Each tag consists of a key and a value. A key can have more than one value. You can use tags to organize your resources, and cost allocation tags to track your AWS costs on a detailed level. After you activate cost allocation tags, AWS uses the cost allocation tags to organize your resource costs on your cost allocation report, to make it easier for you to categorize and track your AWS costs. AWS provides two types of cost allocation tags, an AWS-generated tag and user-defined tags. AWS defines, creates, and applies the AWS-generated tag for you, and you define, create, and apply user-defined tags. You must activate both types of tags separately before they can appear in Cost Explorer or on a cost allocation report. For more information on Cost Allocation tags, please visit the below URL: http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html
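Once a cost allocation tag has been activated (here an assumed user-defined tag key of 'Application'), a boto3 sketch against the Cost Explorer API can break a month's spend down by application; the dates are examples:

import boto3

ce = boto3.client('ce', region_name='us-east-1')   # Cost Explorer endpoint

response = ce.get_cost_and_usage(
    TimePeriod={'Start': '2018-01-01', 'End': '2018-02-01'},   # example month
    Granularity='MONTHLY',
    Metrics=['UnblendedCost'],
    GroupBy=[{'Type': 'TAG', 'Key': 'Application'}]            # assumed tag key
)
for group in response['ResultsByTime'][0]['Groups']:
    print(group['Keys'][0], group['Metrics']['UnblendedCost']['Amount'])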

Your CTO is very worried about the security of your AWS account. How best can you prevent hackers from completely hijacking your account? A. Use short but complex password on the root account and any administrators. B. Use AWS IAM Geo-Lock and disallow anyone from logging in except for in your city. C. Use MFA on all users and accounts, especially on the root account. D. Don't write down or remember the root account password after creating the AWS account.

C. Use MFA on all users and accounts, especially on the root account. Answer - C Multi-factor authentication adds one more layer of security to your AWS account; even the Security Credentials dashboard in the console lists enabling MFA on the root account as one of the recommended steps. For more information on MFA, please visit the below URL: http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa.html

Your serverless architecture using AWS API Gateway, AWS Lambda, and AWS DynamoDB experienced a large increase in traffic to a sustained 2000 requests per second, and dramatically increased in failure rates. Your requests, during normal operation, last 500 milliseconds on average. Your DynamoDB table did not exceed 50% of provisioned throughput, and Table primary keys are designed correctly. What is the most likely issue? A. Your API Gateway deployment is throttling your requests. B. Your AWS API Gateway Deployment is bottlenecking on request (de)serialization. C. You did not request a limit increase on concurrent Lambda function executions. D. You used Consistent Read requests on DynamoDB and are experiencing semaphore lock.

C. You did not request a limit increase on concurrent Lambda function executions. Answer - C By default, AWS Lambda limits the total concurrent executions across all functions within a given region to 1000. At a sustained 2000 requests per second with an average duration of 500 milliseconds, the application needs roughly 2000 x 0.5 = 1000 concurrent executions, so it is running into the default concurrency limit and the excess invocations are throttled, which explains the increased failure rate. For more information on concurrent executions, please visit the below URL: http://docs.aws.amazon.com/lambda/latest/dg/concurrent-executions.html

Your system automatically provisions EIPs to EC2 instances in a VPC on boot. The system provisions the whole VPC and stack at once. You have two of them per VPC. On your new AWS account, your attempt to create a Development environment failed, after successfully creating Staging and Production environments in the same region. What happened? A. You didn't choose the Development version of the AMI you are using. B. You didn't set the Development flag to true when deploying EC2 instances. C. You hit the soft limit of 5 EIPs per region and requested a 6th. D. You hit the soft limit of 2 VPCs per region and requested a 3rd.

C. You hit the soft limit of 5 EIPs per region and requested a 6th. Answer - C The most likely cause is that you have hit the default limit of 5 Elastic IP addresses per region. With two EIPs per VPC, the Staging and Production environments already consume four EIPs, so the Development environment's second EIP would be the sixth and the request fails. By default, all AWS accounts are limited to 5 Elastic IP addresses per region, because public (IPv4) Internet addresses are a scarce public resource. We strongly encourage you to use an Elastic IP address primarily for the ability to remap the address to another instance in the case of instance failure, and to use DNS hostnames for all other inter-node communication. Option A is invalid because an AMI does not have a Development version tag. Option B is invalid because there is no such flag for an EC2 instance. Option D is invalid because the default limit is 5 VPCs per region and this would only be the third. For more information on Elastic IPs, please visit the below URL: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html

You are building a Ruby on Rails application for internal, non-production use which uses MySQL as a database. You want developers without very much AWS experience to be able to deploy new code with a single command line push. You also want to set this up as simply as possible. Which tool is ideal for this setup? A. AWS CloudFormation B. AWS OpsWorks C. AWS ELB + EC2 with CLI Push D. AWS Elastic Beanstalk

D. AWS Elastic Beanstalk Answer - D With Elastic Beanstalk, you can quickly deploy and manage applications in the AWS Cloud without worrying about the infrastructure that runs those applications. AWS Elastic Beanstalk reduces management complexity without restricting choice or control. You simply upload your application, and Elastic Beanstalk automatically handles the details of capacity provisioning, load balancing, scaling, and application health monitoring Elastic Beanstalk supports applications developed in Java, PHP, .NET, Node.js, Python, and Ruby, as well as different container types for each language. For more information on Elastic beanstalk, please visit the below URL: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/Welcome.html

Which of the following services can be used to instrument Devops in your company. A. AWS Beanstalk B. AWS Opswork C. AWS Cloudformation D. All of the above

D. All of the above Answer - D All of the services can be used to instrument Devops in your company 1) AWS Elastic Beanstalk, an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on servers such as Apache, Nginx, Passenger, and IIS. 2) AWS OpsWorks, a configuration management service that helps you configure and operate applications of all shapes and sizes using Chef 3) AWS CloudFormation, which is an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion. For more information on AWS Devops please refer to the below link: http://docs.aws.amazon.com/devops/latest/gsg/welcome.html

When thinking of AWS Elastic Beanstalk, the 'Swap Environment URLs' feature most directly aids in what? A. Immutable Rolling Deployments B. Mutable Rolling Deployments C. Canary Deployments D. Blue-Green Deployments

D. Blue-Green Deployments Answer - D Because Elastic Beanstalk performs an in-place update when you update your application versions, your application may become unavailable to users for a short period of time. It is possible to avoid this downtime by performing a blue/green deployment, where you deploy the new version to a separate environment, and then swap CNAMEs of the two environments to redirect traffic to the new version instantly. Blue/green deployments require that your environment runs independently of your production database, if your application uses one. If your environment has an Amazon RDS DB instance attached to it, the data will not transfer over to your second environment, and will be lost if you terminate the original environment. For more information on Blue Green deployments with Elastic beanstalk , please refer to the below link: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.CNAMESwap.html

You are deciding on a deployment mechanism for your application. Which of the following deployment mechanisms provides the fastest rollback after failure. A. Rolling - Immutable B. Canary C. Rolling - Mutable D. Blue/Green

D. Blue/Green Answer - D In Blue/Green deployments, you always have the previous version of your application available, so any time there is an issue with a new deployment, you can quickly switch back to the older version of your application. For more information on Blue/Green deployments, please refer to the below link: https://docs.cloudfoundry.org/devguide/deploy-apps/blue-green.html

If you're trying to configure an AWS Elastic Beanstalk worker tier for easy debugging if there are problems finishing queue jobs, what should you configure? A. Configure Rolling Deployments. B. Configure Enhanced Health Reporting C. Configure Blue-Green Deployments. D. Configure a Dead Letter Queue

D. Configure a Dead Letter Queue Answer - D Elastic Beanstalk worker environments support Amazon Simple Queue Service (SQS) dead letter queues. A dead letter queue is a queue where other (source) queues can send messages that for some reason could not be successfully processed. A primary benefit of using a dead letter queue is the ability to sideline and isolate the unsuccessfully processed messages. You can then analyze any messages sent to the dead letter queue to try to determine why they were not successfully processed. For more information on Elastic beanstalk and dead letter queues, please visit the below URL: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features-managing-env-tiers.html#worker-deadletter
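Behind the Elastic Beanstalk option, the underlying SQS mechanism is a redrive policy on the worker queue; a hedged boto3 sketch with placeholder queue names and an assumed maxReceiveCount of 5:

import boto3
import json

sqs = boto3.client('sqs')

# A dead letter queue for messages the worker repeatedly fails to process
dlq_url = sqs.create_queue(QueueName='worker-dead-letter')['QueueUrl']
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq_url, AttributeNames=['QueueArn']
)['Attributes']['QueueArn']

# After 5 failed receives, messages are sidelined to the dead letter queue for debugging
source_url = sqs.get_queue_url(QueueName='worker-queue')['QueueUrl']   # placeholder worker queue
sqs.set_queue_attributes(
    QueueUrl=source_url,
    Attributes={'RedrivePolicy': json.dumps({
        'deadLetterTargetArn': dlq_arn,
        'maxReceiveCount': '5'
    })}
)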

You need to deploy an AWS stack in a repeatable manner across multiple environments. You have selected CloudFormation as the right tool to accomplish this, but have found that there is a resource type you need to create and model, but is unsupported by CloudFormation. How should you overcome this challenge? A. Use a CloudFormation Custom Resource Template by selecting an API call to proxy for create, update, and delete actions. CloudFormation will use the AWS SDK, CLI, or API method of your choosing as the state transition function for the resource type you are modeling. B. Submit a ticket to the AWS Forums. AWS extends CloudFormation Resource Types by releasing tooling to the AWS Labs organization on GitHub. Their response time is usually 1 day, and they complete requests within a week or two. C. Instead of depending on CloudFormation, use Chef, Puppet, or Ansible to author Heat templates, which are declarative stack resource definitions that operate over the OpenStack hypervisor and cloud environment. D. Create a CloudFormation Custom Resource Type by implementing create, update, and delete functionality, either by subscribing a Custom Resource Provider to an SNS topic, or by implementing the logic in AWS Lambda.

D. Create a CloudFormation Custom Resource Type by implementing create, update, and delete functionality, either by subscribing a Custom Resource Provider to an SNS topic, or by implementing the logic in AWS Lambda. Answer - D Custom resources enable you to write custom provisioning logic in templates that AWS CloudFormation runs anytime you create, update (if you changed the custom resource), or delete stacks. For example, you might want to include resources that aren't available as AWS CloudFormation resource types. You can include those resources by using custom resources. That way you can still manage all your related resources in a single stack. Use the AWS::CloudFormation::CustomResource or Custom::String resource type to define custom resources in your templates. Custom resources require one property: the service token, which specifies where AWS CloudFormation sends requests to, such as an Amazon SNS topic. For more information on Custom Resources in Cloudformation, please visit the below URL: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-custom-resources.html
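A skeletal Lambda handler for such a custom resource might look like the sketch below; the cfnresponse helper module is available when the function code is defined inline in the template, the physical resource ID is a placeholder, and the provisioning logic itself is only indicated by comments:

import cfnresponse   # helper available to inline (ZipFile) Lambda code in CloudFormation

def handler(event, context):
    physical_id = event.get('PhysicalResourceId', 'my-custom-resource')  # placeholder ID
    try:
        if event['RequestType'] == 'Create':
            # ... call the API that CloudFormation does not support natively ...
            pass
        elif event['RequestType'] == 'Update':
            # ... apply changes based on event['ResourceProperties'] ...
            pass
        elif event['RequestType'] == 'Delete':
            # ... clean up the resource ...
            pass
        # Signal success back to CloudFormation so the stack operation can continue
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {'Message': 'done'}, physical_id)
    except Exception:
        # Reporting failure promptly avoids the stack waiting until it times out
        cfnresponse.send(event, context, cfnresponse.FAILED, {}, physical_id)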

Your API requires the ability to stay online during AWS regional failures. Your API does not store any state, it only aggregates data from other sources - you do not have a database. What is a simple but effective way to achieve this uptime goal? A. Use a CloudFront distribution to serve up your API. Even if the region your API is in goes down, the edge locations CloudFront uses will be fine. B. Use an ELB and a cross-zone ELB deployment to create redundancy across datacenters. Even if a region fails, the other AZ will stay online. C. Create a Route53 Weighted Round Robin record, and if one region goes down, have that region redirect to the other region. D. Create a Route53 Latency Based Routing Record with Failover and point it to two identical deployments of your stateless API in two different regions. Make sure both regions use Auto Scaling Groups behind ELBs.

D. Create a Route53 Latency Based Routing Record with Failover and point it to two identical deployments of your stateless API in two different regions. Make sure both regions use Auto Scaling Groups behind ELBs. Answer - D Failover routing lets you route traffic to a resource when the resource is healthy or to a different resource when the first resource is unhealthy. The primary and secondary resource record sets can route traffic to anything from an Amazon S3 bucket that is configured as a website to a complex tree of records. For more information on Route53 Failover Routing, please visit the below URL: http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html

You have an application hosted in AWS , which sits on EC2 Instances behind an Elastic Load Balancer. You have added a new feature to your application and are now receiving complaints from users that the site has a slow response. Which of the below actions can you carry out to help you pinpoint the issue A. Use Cloudtrail to log all the API calls , and then traverse the log files to locate the issue B. Use Cloudwatch , monitor the CPU utilization to see the times when the CPU peaked C. Review the Elastic Load Balancer logs D. Create some custom Cloudwatch metrics which are pertinent to the key features of your application

D. Create some custom Cloudwatch metrics which are pertinent to the key features of your application Answer - D Since the issue started occurring after the new feature was added, it is most likely related to that feature, so metrics that are specific to its key code paths are the best starting point. Enabling CloudTrail would just record the API calls of all services and would not benefit the cause. Monitoring CPU utilization or reviewing the Elastic Load Balancer logs would only confirm that there is an issue, but would not help pinpoint where in the application it lies. For more information on custom Cloudwatch metrics, please refer to the below link: http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/publishingMetrics.html
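Publishing such a metric from the application is a single call; a hedged sketch with a made-up namespace, metric name, and dimension for the new feature:

import boto3

cloudwatch = boto3.client('cloudwatch')

# Record how long the new feature's code path took for one request
cloudwatch.put_metric_data(
    Namespace='MyApp',                                   # made-up namespace
    MetricData=[{
        'MetricName': 'NewFeatureLatency',               # made-up metric name
        'Dimensions': [{'Name': 'Feature', 'Value': 'RecommendationWidget'}],
        'Value': 0.42,                                   # seconds measured in the request handler
        'Unit': 'Seconds'
    }]
)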

You have carried out a deployment using Elastic Beanstalk , but the application is unavailable. What could be the reason for this A. You need to configure ELB along with Elastic Beanstalk B. You need to configure Route53 along with Elastic Beanstalk C. There will always be a few seconds of downtime before the application is available D. The cooldown period is not properly configured for Elastic Beanstalk

D. The cooldown period is not properly configured for Elastic Beanstalk Answer - D The AWS Documentation mentions Because Elastic Beanstalk uses a drop-in upgrade process, there might be a few seconds of downtime. Use rolling deployments to minimize the effect of deployments on your production environments. For more information on troubleshooting Elastic Beanstalk, please refer to the below link: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/troubleshooting-deployments.html

What is required to achieve gigabit network throughput on EC2? You already selected cluster compute, 10-gigabit instances with enhanced networking, and your workload is already network-bound, but you are not seeing 10 gigabit speeds. A. Enable biplex networking on your servers, so packets are non-blocking in both directions and there's no switching overhead. B. Ensure the instances are in different VPCs so you don't saturate the Internet Gateway on any one VPC. C. Select PIOPS for your drives and mount several, so you can provision sufficient disk throughput. D. Use a placement group for your instances so the instances are physically near each other in the same Availability Zone.

D. Use a placement group for your instances so the instances are physically near each other in the same Availability Zone. Answer - D A placement group is a logical grouping of instances within a single Availability Zone. Placement groups are recommended for applications that benefit from low network latency, high network throughput, or both. To provide the lowest latency, and the highest packet-per-second network performance for your placement group, choose an instance type that supports enhanced networking. For more information on Placement Groups, please visit the below URL: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
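A boto3 sketch of launching instances into a cluster placement group (the AMI, instance type, and counts are placeholders); the instance type chosen should support enhanced networking:

import boto3

ec2 = boto3.client('ec2')

# Cluster placement groups pack instances close together for high network throughput
ec2.create_placement_group(GroupName='hpc-cluster', Strategy='cluster')

ec2.run_instances(
    ImageId='ami-0123456789abcdef0',       # placeholder AMI
    InstanceType='c4.8xlarge',             # example type that supports enhanced networking
    MinCount=4,
    MaxCount=4,
    Placement={'GroupName': 'hpc-cluster'}
)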

