DevOps Pro - Test 1


You have a multi-docker environment that you want to deploy to AWS. Which of the following configuration files can be used to deploy a set of Docker containers as an Elastic Beanstalk application? A. Dockerrun.aws.json B. .ebextensions C. Dockerrun.json D. Dockerfile

A. Dockerrun.aws.json Answer - A A Dockerrun.aws.json file is an Elastic Beanstalk-specific JSON file that describes how to deploy a set of Docker containers as an Elastic Beanstalk application. You can use a Dockerrun.aws.json file for a multicontainer Docker environment. It describes the containers to deploy to each container instance in the environment, as well as the data volumes to create on the host instance for the containers to mount. http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_v2config.html
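As a hedged illustration, a minimal version-2 Dockerrun.aws.json for a multicontainer environment might look like the following (the container name, image, and ports are placeholders, not values from the question):

```json
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "web",
      "image": "nginx:latest",
      "essential": true,
      "memory": 128,
      "portMappings": [
        { "hostPort": 80, "containerPort": 80 }
      ]
    }
  ]
}
```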

You have an application hosted in AWS. This application was created using Cloudformation Templates and Autoscaling. Now your application has got a surge of users which is decreasing the performance of the application. As per your analysis, a change in the instance type to C3 would resolve the issue. Which of the below option can introduce this change while minimizing downtime for end users? A. Copy the old launch configuration, and create a new launch configuration with the C3 instances. Update the Auto Scaling group with the new launch configuration. Auto Scaling will then update the instance type of all running instances. B. Update the launch configuration in the AWS CloudFormation template with the new C3 instance type. Add an UpdatePolicy attribute to the Auto Scaling group that specifies an AutoScalingRollingUpdate. Run a stack update with the updated template. C. Update the existing launch configuration with the new C3 instance type. Add an UpdatePolicy attribute to your Auto Scaling group that specifies an AutoScaling RollingUpdate in order to avoid downtime. D. Update the AWS CloudFormation template that contains the launch configuration with the new C3 instance type. Run a stack update with the updated template, and Auto Scaling will then update the instances one at a time with the new instance type.

B. Update the launch configuration in the AWS CloudFormation template with the new C3 instance type. Add an UpdatePolicy attribute to the Auto Scaling group that specifies an AutoScalingRollingUpdate. Run a stack update with the updated template. Answer - B First ensure that the CloudFormation template is updated with the new instance type. The AWS::AutoScaling::AutoScalingGroup resource supports an UpdatePolicy attribute. This is used to define how an Auto Scaling group resource is updated when an update to the CloudFormation stack occurs. A common approach to updating an Auto Scaling group is to perform a rolling update, which is done by specifying the AutoScalingRollingUpdate policy. This retains the same Auto Scaling group and replaces old instances with new ones, according to the parameters specified. Option A is invalid because this will cause an interruption to the users. Option C is partially correct, but it does not have all the steps mentioned in option B. Option D is partially correct, but we need the AutoScalingRollingUpdate attribute to ensure a rolling update is performed. For more information on Auto Scaling rolling updates, please refer to the below link: https://aws.amazon.com/premiumsupport/knowledge-center/auto-scaling-group-rolling-updates/
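A hedged sketch of the relevant template fragment (the resource and parameter names are illustrative; only the UpdatePolicy keys come from the CloudFormation schema):

```yaml
WebServerGroup:
  Type: AWS::AutoScaling::AutoScalingGroup
  Properties:
    LaunchConfigurationName: !Ref WebLaunchConfig  # updated to use the C3 instance type
    MinSize: "2"
    MaxSize: "6"
  UpdatePolicy:
    AutoScalingRollingUpdate:
      MinInstancesInService: "1"   # keep capacity serving traffic during the update
      MaxBatchSize: "1"            # replace one instance at a time
      PauseTime: PT5M
```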

You are designing a system which needs, at minimum, 8 m4.large instances operating to service traffic. When designing a system for high availability in the us-east-1 region, which has 6 Availability Zones, your company needs to be able to handle death of a full availability zone. How should you distribute the servers, to save as much cost as possible, assuming all of the EC2 nodes are properly linked to an ELB? Your VPC account can utilize us-east-1's AZ's a through f, inclusive. A. 3 servers in each of AZ's a through d, inclusive. B. 8 servers in each of AZ's a and b. C. 2 servers in each of AZ's a through e, inclusive. D. 4 servers in each of AZ's a through c, inclusive.

C. 2 servers in each of AZ's a through e, inclusive. Answer - C Distributing the instances across multiple AZ's gives the required availability at the lowest cost. With 2 servers in each of AZ's a through e (10 servers in total), the loss of any single AZ still leaves 8 servers running, which meets the minimum requirement. Options A and D (12 servers each) also survive an AZ failure, but at a higher cost, so Option C is the best distribution. For more information on High Availability and Fault Tolerance, please refer to the below link: https://media.amazonwebservices.com/architecturecenter/AWS_ac_ra_ftha_04.pdf

You work for an insurance company and are responsible for the day-to-day operations of your company's online quote system used to provide insurance quotes to members of the public. Your company wants to use the application logs generated by the system to better understand customer behavior. Industry regulations also require that you retain all application logs for the system indefinitely in order to investigate fraudulent claims in the future. You have been tasked with designing a log management system with the following requirements: - All log entries must be retained by the system, even during unplanned instance failure. - The customer insight team requires immediate access to the logs from the past seven days. - The fraud investigation team requires access to all historic logs, but will wait up to 24 hours before these logs are available. How would you meet these requirements in a cost-effective manner? Choose three answers from the options below A. Configure your application to write logs to the instance's ephemeral disk, because this storage is free and has good write performance. Create a script that moves the logs from the instance to Amazon S3 once an hour. B. Write a script that is configured to be executed when the instance is stopped or terminated and that will upload any remaining logs on the instance to Amazon S3. C. Create an Amazon S3 lifecycle configuration to move log files from Amazon S3 to Amazon Glacier after seven days. D. Configure your application to write logs to the instance's default Amazon EBS boot volume, because this storage already exists. Create a script that moves the logs from the instance to Amazon S3 once an hour. E. Configure your application to write logs to a separate Amazon EBS volume with the "delete on termination" field set to false. Create a script that moves the logs from the instance to Amazon S3 once an hour. F. Create a housekeeping script that runs on a T2 micro instance managed by an Auto Scaling group for high availability. The script uses the AWS API to identify any unattached Amazon EBS volumes containing log files. Your housekeeping script will mount the Amazon EBS volume, upload all logs to Amazon S3, and then delete the volume.

C. Create an Amazon S3 lifecycle configuration to move log files from Amazon S3 to Amazon Glacier after seven days. E. Configure your application to write logs to a separate Amazon EBS volume with the "delete on termination" field set to false. Create a script that moves the logs from the instance to Amazon S3 once an hour. F. Create a housekeeping script that runs on a T2 micro instance managed by an Auto Scaling group for high availability. The script uses the AWS API to identify any unattached Amazon EBS volumes containing log files. Your housekeeping script will mount the Amazon EBS volume, upload all logs to Amazon S3, and then delete the volume. Answer - C, E and F Since all logs need to be stored indefinitely, Glacier is the most cost-effective store, and an S3 lifecycle rule can transition the objects from S3 to Glacier after seven days. Lifecycle configuration enables you to specify the lifecycle management of objects in a bucket. The configuration is a set of one or more rules, where each rule defines an action for Amazon S3 to apply to a group of objects. These actions can be classified as follows: - Transition actions define when objects transition to another storage class. For example, you may choose to transition objects to the STANDARD_IA (IA, for infrequent access) storage class 30 days after creation, or archive objects to the GLACIER storage class one year after creation. - Expiration actions specify when the objects expire; Amazon S3 then deletes the expired objects on your behalf. For more information on lifecycle configuration, please refer to the below link: http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html Writing logs to a separate EBS volume that survives termination, and sweeping up any unattached volumes with a housekeeping script, ensures no log data is lost when instances are stopped or terminated before the hourly upload to S3 runs.
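As a hedged sketch, a lifecycle rule implementing the seven-day transition could look like this (the rule ID and "logs/" prefix are placeholders):

```json
{
  "Rules": [
    {
      "ID": "archive-logs-to-glacier",
      "Filter": { "Prefix": "logs/" },
      "Status": "Enabled",
      "Transitions": [
        { "Days": 7, "StorageClass": "GLACIER" }
      ]
    }
  ]
}
```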

You have deployed an Elastic Beanstalk application in a new environment and want to save the current state of your environment in a document. You want to be able to restore your environment to the current state later or possibly create a new environment. You also want to make sure you have a restore point. How can you achieve this? A. Use CloudFormation templates B. Configuration Management Templates C. Saved Configurations D. Saved Templates

C. Saved Configurations Answer - C You can save your environment's configuration as an object in Amazon S3 that can be applied to other environments during environment creation, or applied to a running environment. Saved configurations are YAML formatted templates that define an environment's platform configuration, tier, configuration option settings, and tags. For more information on Saved Configurations please refer to the below link: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environment-configuration-savedconfig.html
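As a hedged illustration, a saved configuration is a YAML document along these lines (the exact keys vary by platform version, and the option settings shown here are placeholders):

```yaml
EnvironmentConfigurationMetadata:
  Description: Restore point for the production environment
OptionSettings:
  aws:autoscaling:launchconfiguration:
    InstanceType: t2.micro
  aws:elasticbeanstalk:environment:
    EnvironmentType: LoadBalanced
EnvironmentTier:
  Name: WebServer
  Type: Standard
```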

You are creating a new API for video game scores. Reads are 100 times more common than writes, and the top 1% of scores are read 100 times more frequently than the rest of the scores. What's the best design for this system, using DynamoDB? A. DynamoDB table with 100x higher read than write throughput, with CloudFront caching. B. DynamoDB table with roughly equal read and write throughput, with CloudFront caching. C. DynamoDB table with 100x higher read than write throughput, with ElastiCache caching. D. DynamoDB table with roughly equal read and write throughput, with ElastiCache caching.

D. DynamoDB table with roughly equal read and write throughput, with ElastiCache caching. Answer - D Because the 100x read ratio is mostly driven by a small subset of scores, with caching only a roughly equal number of reads to writes will miss the cache, since the supermajority of reads will hit the cached top 1% of scores. Knowing we need to set the throughput values roughly equal when using caching, we select ElastiCache, because CloudFront cannot directly cache DynamoDB queries; ElastiCache is an in-memory cache for database queries, whereas CloudFront is a distributed proxy cache for content delivery. For more information on DynamoDB table guidelines please refer to the below link: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GuidelinesForTables.html
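The cache-aside pattern this answer relies on can be sketched in Python. This is a toy in-memory stand-in for ElastiCache and DynamoDB, not their real client APIs; it only shows why hot reads stop reaching the table:

```python
class ScoreCache:
    """Toy cache-aside layer: reads check the cache first; writes go to the
    backing store and invalidate the cached entry."""

    def __init__(self, store):
        self.store = store   # stand-in for the DynamoDB table
        self.cache = {}      # stand-in for ElastiCache
        self.misses = 0      # reads that actually reached the store

    def get_score(self, player):
        if player in self.cache:
            return self.cache[player]   # cache hit: no table read consumed
        self.misses += 1                # cache miss: one table read
        value = self.store[player]
        self.cache[player] = value
        return value

    def put_score(self, player, score):
        self.store[player] = score      # one table write
        self.cache.pop(player, None)    # drop the now-stale cached entry

store = {"alice": 9001}
c = ScoreCache(store)
for _ in range(100):
    c.get_score("alice")   # a hot score: only the first read misses
print(c.misses)            # -> 1
```

With the hot top-1% of scores pinned in cache like this, the read load that reaches the table shrinks toward the write load, which is why roughly equal provisioned throughput suffices.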

You have a code repository that uses Amazon S3 as a data store. During a recent audit of your security controls, some concerns were raised about maintaining the integrity of the data in the Amazon S3 bucket. Another concern was raised around securely deploying code from Amazon S3 to applications running on Amazon EC2 in a virtual private cloud. What are some measures that you can implement to mitigate these concerns? Choose two answers from the options given below. A. Add an Amazon S3 bucket policy with a condition statement to allow access only from Amazon EC2 instances with RFC 1918 IP addresses and enable bucket versioning. B. Add an Amazon S3 bucket policy with a condition statement that requires multi-factor authentication in order to delete objects and enable bucket versioning. C. Use a configuration management service to deploy AWS Identity and Access Management user credentials to the Amazon EC2 instances. Use these credentials to securely access the Amazon S3 bucket when deploying code. D. Create an Amazon Identity and Access Management role with authorization to access the Amazon S3 bucket, and launch all of your application's Amazon EC2 instances with this role. E. Use AWS Data Pipeline to lifecycle the data in your Amazon S3 bucket to Amazon Glacier on a weekly basis. F. Use AWS Data Pipeline with multi-factor authentication to securely deploy code from the Amazon S3 bucket to your Amazon EC2 instances.

B. Add an Amazon S3 bucket policy with a condition statement that requires multi-factor authentication in order to delete objects and enable bucket versioning. D. Create an Amazon Identity and Access Management role with authorization to access the Amazon S3 bucket, and launch all of your application's Amazon EC2 instances with this role. Answer - B and D You can add another layer of protection by enabling MFA Delete on a versioned bucket. Once you do so, you must provide your AWS account's access keys and a valid code from the account's MFA device in order to permanently delete an object version or suspend or reactivate versioning on the bucket. For more information on MFA please refer to the below link: https://aws.amazon.com/blogs/security/securing-access-to-aws-using-mfa-part-3/ IAM roles are designed so that your applications can securely make API requests from your instances, without requiring you to manage the security credentials that the applications use. Instead of creating and distributing your AWS credentials, you can delegate permission to make API requests using IAM roles. For more information on IAM roles for EC2 please refer to the below link: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html Option A is invalid because restricting access to RFC 1918 addresses does not fully address either the integrity or the deployment-security concern. Option C is invalid because long-lived IAM user credentials should never be distributed to EC2 instances to access AWS resources. Options E and F are invalid because AWS Data Pipeline is unnecessary overhead when S3 and IAM already provide the required controls.
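A hedged sketch of a bucket policy statement that denies object deletion when MFA is not present (the bucket name is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyDeleteWithoutMFA",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:DeleteObject",
      "Resource": "arn:aws:s3:::my-code-bucket/*",
      "Condition": {
        "BoolIfExists": { "aws:MultiFactorAuthPresent": "false" }
      }
    }
  ]
}
```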

You need to deploy a Node.js application and do not have any experience with AWS. Which deployment method will be the simplest for you to use? A. AWS Elastic Beanstalk B. AWS CloudFormation C. AWS EC2 D. AWS OpsWorks

A. AWS Elastic Beanstalk Answer - A With Elastic Beanstalk, you can quickly deploy and manage applications in the AWS Cloud without worrying about the infrastructure that runs those applications. AWS Elastic Beanstalk reduces management complexity without restricting choice or control. You simply upload your application, and Elastic Beanstalk automatically handles the details of capacity provisioning, load balancing, scaling, and application health monitoring. For more information on Elastic Beanstalk please refer to the below link: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/Welcome.html

You are doing a load testing exercise on your application hosted on AWS. While testing your Amazon RDS MySQL DB instance, you notice that when you hit 100% CPU utilization on it, your application becomes non-responsive. Your application is read-heavy. What are methods to scale your data tier to meet the application's needs? Choose three answers from the options given below A. Add Amazon RDS DB read replicas, and have your application direct read queries to them. B. Add your Amazon RDS DB instance to an Auto Scaling group and configure your CloudWatch metric based on CPU utilization. C. Use an Amazon SQS queue to throttle data going to the Amazon RDS DB instance. D. Use ElastiCache in front of your Amazon RDS DB to cache common queries. E. Shard your data set among multiple Amazon RDS DB instances. F. Enable Multi-AZ for your Amazon RDS DB instance.

A. Add Amazon RDS DB read replicas, and have your application direct read queries to them. D. Use ElastiCache in front of your Amazon RDS DB to cache common queries. E. Shard your data set among multiple Amazon RDS DB instances. Answer - A, D and E Amazon RDS Read Replicas provide enhanced performance and durability for database (DB) instances. This replication feature makes it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads. You can create one or more replicas of a given source DB instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput. For more information on Read Replicas please refer to the below link: https://aws.amazon.com/rds/details/read-replicas/ Sharding is a common technique to split a data set across multiple database instances. For more information on sharding please refer to the below link: https://forums.aws.amazon.com/thread.jspa?messageID=203052 Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory data store or cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory data stores, instead of relying entirely on slower disk-based databases. For more information on ElastiCache please refer to the below link: https://aws.amazon.com/elasticache/ Option B is invalid because RDS instances cannot be managed by an EC2 Auto Scaling group. Option C is invalid because an SQS queue only throttles writes and does nothing for the read-heavy workload. Option F is invalid because the Multi-AZ feature is only a failover option and does not serve read traffic.

You need to monitor specific metrics from your application and send real-time alerts to your DevOps Engineer. Which of the below services will fulfil this requirement? Choose two answers from the options given below. A. Amazon CloudWatch B. Amazon Simple Notification Service C. Amazon Simple Queue Service D. Amazon Simple Email Service

A. Amazon CloudWatch B. Amazon Simple Notification Service Answer - A and B Amazon CloudWatch monitors your Amazon Web Services (AWS) resources and the applications you run on AWS in real time. You can use CloudWatch to collect and track metrics, which are variables you can measure for your resources and applications. CloudWatch alarms send notifications or automatically make changes to the resources you are monitoring based on rules that you define. For more information on Amazon CloudWatch, please refer to the below AWS documentation link: http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html Amazon CloudWatch uses Amazon SNS to send email. First, create and subscribe to an SNS topic. When you create a CloudWatch alarm, you can add this SNS topic to send an email notification when the alarm changes state. For more information on Amazon CloudWatch and SNS, please refer to the below AWS documentation link: http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/US_SetupSNS.html
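As a hedged sketch, the alarm-to-topic wiring could look like this in CloudFormation (the metric namespace, metric name, threshold, and email address are all illustrative):

```yaml
AlertTopic:
  Type: AWS::SNS::Topic
  Properties:
    Subscription:
      - Endpoint: devops@example.com   # placeholder address for the DevOps Engineer
        Protocol: email

HighErrorAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    Namespace: MyApp            # illustrative custom application namespace
    MetricName: Errors          # illustrative custom metric
    Statistic: Sum
    Period: 60
    EvaluationPeriods: 1
    Threshold: 5
    ComparisonOperator: GreaterThanThreshold
    AlarmActions:
      - !Ref AlertTopic         # notify via SNS when the alarm fires
```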

You have decided you need to change the instance type of your instances in production which are running as part of an Autoscaling Group. You currently have 4 instances in production. You cannot have any interruption in service and need to ensure 2 instances are always running during the update. Which of the below options can be chosen for this? A. AutoScalingRollingUpdate B. AutoScalingScheduledAction C. AutoScalingReplacingUpdate D. AutoScalingIntegrationUpdate

A. AutoScalingRollingUpdate Answer - A The AWS::AutoScaling::AutoScalingGroup resource supports an UpdatePolicy attribute. This is used to define how an Auto Scaling group resource is updated when an update to the CloudFormation stack occurs. A common approach to updating an Auto Scaling group is to perform a rolling update, which is done by specifying the AutoScalingRollingUpdate policy. This retains the same Auto Scaling group and replaces old instances with new ones, according to the parameters specified. For more information on Auto Scaling rolling updates, please refer to the below link: https://aws.amazon.com/premiumsupport/knowledge-center/auto-scaling-group-rolling-updates/
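For this scenario (4 instances, at least 2 always in service during the update), the rolling-update policy could be sketched as follows (resource names are illustrative):

```yaml
AppServerGroup:
  Type: AWS::AutoScaling::AutoScalingGroup
  Properties:
    MinSize: "4"
    MaxSize: "4"
    LaunchConfigurationName: !Ref NewInstanceTypeConfig  # points at the new instance type
  UpdatePolicy:
    AutoScalingRollingUpdate:
      MinInstancesInService: "2"   # never fewer than 2 instances serving traffic
      MaxBatchSize: "2"            # replace at most 2 instances per batch
```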

You have been tasked with deploying a scalable distributed system using AWS OpsWorks. Your distributed system is required to scale on demand. As it is distributed, each node must hold a configuration file that includes the hostnames of the other instances within the layer. How should you configure AWS OpsWorks to manage scaling this application dynamically? A. Create a Chef Recipe to update this configuration file, configure your AWS OpsWorks stack to use custom cookbooks, and assign this recipe to the Configure LifeCycle Event of the specific layer. B. Update this configuration file by writing a script to poll the AWS OpsWorks service API for new instances. Configure your base AMI to execute this script on Operating System startup. C. Create a Chef Recipe to update this configuration file, configure your AWS OpsWorks stack to use custom cookbooks, and assign this recipe to execute when instances are launched. D. Configure your AWS OpsWorks layer to use the AWS-provided recipe for distributed host configuration, and configure the instance hostname and file path parameters in your recipes settings.

A. Create a Chef Recipe to update this configuration file, configure your AWS OpsWorks stack to use custom cookbooks, and assign this recipe to the Configure LifeCycle Event of the specific layer. Answer - A You can use the Configure lifecycle event. This event occurs on all of the stack's instances when one of the following occurs: - An instance enters or leaves the online state. - You associate an Elastic IP address with an instance or disassociate one from an instance. - You attach an Elastic Load Balancing load balancer to a layer, or detach one from a layer. Ensure the OpsWorks layer uses a custom cookbook. For more information on OpsWorks stacks, please refer to the below AWS documentation link: http://docs.aws.amazon.com/opsworks/latest/userguide/welcome_classic.html

You currently have the following setup in AWS 1) An Elastic Load Balancer 2) Autoscaling Group which launches EC2 Instances 3) AMIs with your code pre-installed You want to deploy the updates to your app to only a certain number of users. You want to have a cost-effective solution. You should also be able to revert back quickly. Which of the below solutions is the most feasible one? A. Create a second ELB, Auto Scaling. Create the AMI with the new app. Use a new launch configuration. Use Route 53 Weighted Round Robin records to adjust the proportion of traffic hitting the two ELBs. B. Create new AMIs with the new app. Then use the new EC2 instances in half proportion to the older instances. C. Redeploy with AWS Elastic Beanstalk and Elastic Beanstalk versions. Use Route 53 Weighted Round Robin records to adjust the proportion of traffic hitting the two ELBs D. Create a full second stack of instances, cut the DNS over to the new stack of instances, and change the DNS back if a rollback is needed.

A. Create a second ELB, Auto Scaling. Create the AMI with the new app. Use a new launch configuration. Use Route 53 Weighted Round Robin records to adjust the proportion of traffic hitting the two ELBs. Answer - A The Weighted Routing policy of Route 53 can be used to direct a proportion of traffic to your application. The best option is to create a second ELB, attach the new Auto Scaling group, and then use Route 53 to divert a fraction of the traffic. Option B is wrong because just having EC2 instances running with the new code gives no control over which users see it and no quick rollback. Option C is wrong because redeploying onto Elastic Beanstalk is unnecessary effort, and there is no mention of two environments whose environment URLs could be swapped. Option D is wrong because a full DNS cutover sends all users to the new version at once; you still need Route 53 weighted records to split the traffic. For more information on Route 53 routing policies, please refer to the below link: http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html
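A hedged sketch of the Route 53 change batch that would send 10% of traffic to the new ELB (the domain and ELB DNS names are placeholders):

```json
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app.example.com",
        "Type": "CNAME",
        "SetIdentifier": "old-stack",
        "Weight": 90,
        "TTL": 60,
        "ResourceRecords": [{ "Value": "old-elb.us-east-1.elb.amazonaws.com" }]
      }
    },
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app.example.com",
        "Type": "CNAME",
        "SetIdentifier": "new-stack",
        "Weight": 10,
        "TTL": 60,
        "ResourceRecords": [{ "Value": "new-elb.us-east-1.elb.amazonaws.com" }]
      }
    }
  ]
}
```

Rolling back is then a matter of setting the new stack's weight back to 0.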

You have an application running on Amazon EC2 in an Auto Scaling group. Instances are being bootstrapped dynamically, and the bootstrapping takes over 15 minutes to complete. You find that instances are reported by Auto Scaling as being In Service before bootstrapping has completed. You are receiving application alarms related to new instances before they have completed bootstrapping, which is causing confusion. You find the cause: your application monitoring tool is polling the Auto Scaling Service API for instances that are In Service, and creating alarms for new previously unknown instances. Which of the following will ensure that new instances are not added to your application monitoring tool before bootstrapping is completed? A. Create an Auto Scaling group lifecycle hook to hold the instance in a pending: wait state until your bootstrapping is complete. Once bootstrapping is complete, notify Auto Scaling to complete the lifecycle hook and move the instance into a pending: complete state. B. Use the default Amazon CloudWatch application metrics to monitor your application's health. Configure an Amazon SNS topic to send these CloudWatch alarms to the correct recipients. C. Tag all instances on launch to identify that they are in a pending state. Change your application monitoring tool to look for this tag before adding new instances, and then use the Amazon API to set the instance state to 'pending' until bootstrapping is complete. D. Increase the desired number of instances in your Auto Scaling group configuration to reduce the time it takes to bootstrap future instances.

A. Create an Auto Scaling group lifecycle hook to hold the instance in a pending: wait state until your bootstrapping is complete. Once bootstrapping is complete, notify Auto Scaling to complete the lifecycle hook and move the instance into a pending: complete state. Answer - A Auto Scaling lifecycle hooks enable you to perform custom actions as Auto Scaling launches or terminates instances. For example, you could install or configure software on newly launched instances, or download log files from an instance before it terminates. After you add lifecycle hooks to your Auto Scaling group, they work as follows: Auto Scaling responds to scale out events by launching instances and scale in events by terminating instances. Auto Scaling puts the instance into a wait state (Pending:Wait or Terminating:Wait). The instance remains in this state until either you tell Auto Scaling to continue or the timeout period ends. For more information on lifecycle hooks, please visit the below link: http://docs.aws.amazon.com/autoscaling/latest/userguide/lifecycle-hooks.html
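A hedged sketch of the lifecycle hook in CloudFormation (the group and hook names are illustrative; only the transition and timeout keys come from the resource schema):

```yaml
BootstrapHook:
  Type: AWS::AutoScaling::LifecycleHook
  Properties:
    AutoScalingGroupName: !Ref AppServerGroup     # illustrative group reference
    LifecycleTransition: autoscaling:EC2_INSTANCE_LAUNCHING
    HeartbeatTimeout: 1800    # allow up to 30 minutes for the 15+ minute bootstrap
    DefaultResult: ABANDON    # abandon the launch if bootstrapping never signals
```

When bootstrapping finishes, the instance's bootstrap script would signal Auto Scaling to continue, e.g. with `aws autoscaling complete-lifecycle-action --lifecycle-action-result CONTINUE ...`.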

You have been requested to use CloudFormation to maintain version control and achieve automation for the applications in your organization. How can you best use CloudFormation to keep everything agile and maintain multiple environments while keeping cost down? A. Create separate templates based on functionality, create nested stacks with CloudFormation. B. Use CloudFormation custom resources to handle dependencies between stacks C. Create multiple templates in one CloudFormation stack. D. Combine all resources into one template for version control and automation.

A. Create separate templates based on functionality, create nested stacks with CloudFormation. Answer - A As your infrastructure grows, common patterns can emerge in which you declare the same components in each of your templates. You can separate out these common components and create dedicated templates for them. That way, you can mix and match different templates but use nested stacks to create a single, unified stack. Nested stacks are stacks that create other stacks. To create nested stacks, use the AWS::CloudFormation::Stack resource in your template to reference other templates. For more information on CloudFormation best practices please refer to the below link: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html
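A hedged sketch of a parent template wiring two nested stacks together (the template URLs, parameter names, and the assumption that network.yaml exports a VpcId output are all illustrative):

```yaml
NetworkStack:
  Type: AWS::CloudFormation::Stack
  Properties:
    TemplateURL: https://s3.amazonaws.com/my-templates/network.yaml  # placeholder URL
    Parameters:
      Environment: staging

AppStack:
  Type: AWS::CloudFormation::Stack
  Properties:
    TemplateURL: https://s3.amazonaws.com/my-templates/app.yaml      # placeholder URL
    Parameters:
      VpcId: !GetAtt NetworkStack.Outputs.VpcId  # assumes network.yaml declares this output
```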

You have been asked to de-risk deployments at your company. Specifically, the CEO is concerned about outages that occur because of accidental inconsistencies between Staging and Production, which sometimes cause unexpected behaviors in Production even when Staging tests pass. You already use Docker to get high consistency between Staging and Production for the application environment on your EC2 instances. How do you further de-risk the rest of the execution environment, since in AWS, there are many service components you may use beyond EC2 virtual machines? A. Develop models of your entire cloud system in CloudFormation. Use this model in Staging and Production to achieve greater parity. B. Use AWS Config to force the Staging and Production stacks to have configuration parity. Any differences will be detected for you so you are aware of risks. C. Use AMIs to ensure the whole machine, including the kernel of the virtual machines, is consistent, since Docker uses Linux Container (LXC) technology, and we need to make sure the container environment is consistent. D. Use AWS ECS and Docker clustering. This will make sure that the AMIs and machine sizes are the same across both environments.

A. Develop models of your entire cloud system in CloudFormation. Use this model in Staging and Production to achieve greater parity. Answer - A After you have your stacks and resources set up, you can reuse your templates to replicate your infrastructure in multiple environments. For example, you can create environments for development, testing, and production so that you can test changes before implementing them into production. To make templates reusable, use the parameters, mappings, and conditions sections so that you can customize your stacks when you create them. For example, for your development environments, you can specify a lower-cost instance type compared to your production environment, but all other configurations and settings remain the same. For more information on CloudFormation best practices please refer to the below link: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html

Your company develops a variety of web applications using many platforms and programming languages with different application dependencies. Each application must be developed and deployed quickly and be highly available to satisfy your business requirements. Which of the following methods should you use to deploy these applications rapidly? A. Develop the applications in Docker containers, and then deploy them to Elastic Beanstalk environments with Auto Scaling and Elastic Load Balancing. B. Use the AWS CloudFormation Docker import service to build and deploy the applications with high availability in multiple Availability Zones. C. Develop each application's code in DynamoDB, and then use hooks to deploy it to Elastic Beanstalk environments with Auto Scaling and Elastic Load Balancing. D. Store each application's code in a Git repository, develop custom package repository managers for each application's dependencies, and deploy to AWS OpsWorks in multiple Availability Zones.

A. Develop the applications in Docker containers, and then deploy them to Elastic Beanstalk environments with Auto Scaling and Elastic Load Balancing. Answer - A Elastic Beanstalk supports the deployment of web applications from Docker containers. With Docker containers, you can define your own runtime environment. You can choose your own platform, programming language, and any application dependencies (such as package managers or tools) that aren't supported by other platforms. Docker containers are self-contained and include all the configuration information and software your web application requires to run. By using Docker with Elastic Beanstalk, you have an infrastructure that automatically handles the details of capacity provisioning, load balancing, scaling, and application health monitoring. For more information on Docker and Elastic Beanstalk please refer to the below link: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.html

You have a large number of web servers in an Auto Scaling group behind a load balancer. On an hourly basis, you want to filter and process the logs to collect data on unique visitors, and then put that data in a durable data store in order to run reports. Web servers in the Auto Scaling group are constantly launching and terminating based on your scaling policies, but you do not want to lose any of the log data from these servers during a stop/termination initiated by a user or by Auto Scaling. What two approaches will meet these requirements? Choose two answers from the options given below. A. Install an Amazon Cloudwatch Logs Agent on every web server during the bootstrap process. Create a CloudWatch log group and define Metric Filters to create custom metrics that track unique visitors from the streaming web server logs. Create a scheduled task on an Amazon EC2 instance that runs every hour to generate a new report based on the Cloudwatch custom metrics. B. On the web servers, create a scheduled task that executes a script to rotate and transmit the logs to Amazon Glacier. Ensure that the operating system shutdown procedure triggers a logs transmission when the Amazon EC2 instance is stopped/terminated. Use Amazon Data Pipeline to process the data in Amazon Glacier and run reports every hour. C. On the web servers, create a scheduled task that executes a script to rotate and transmit the logs to an Amazon S3 bucket. Ensure that the operating system shutdown procedure triggers a logs transmission when the Amazon EC2 instance is stopped/terminated. Use AWS Data Pipeline to move log data from the Amazon S3 bucket to Amazon Redshift In order to process and run reports every hour. D. Install an AWS Data Pipeline Logs Agent on every web server during the bootstrap process. Create a log group object in AWS Data Pipeline, and define Metric Filters to move processed log data directly from the web servers to Amazon Redshift and run reports every hour.

A. Install an Amazon CloudWatch Logs agent on every web server during the bootstrap process. Create a CloudWatch log group and define Metric Filters to create custom metrics that track unique visitors from the streaming web server logs. Create a scheduled task on an Amazon EC2 instance that runs every hour to generate a new report based on the CloudWatch custom metrics. C. On the web servers, create a scheduled task that executes a script to rotate and transmit the logs to an Amazon S3 bucket. Ensure that the operating system shutdown procedure triggers a logs transmission when the Amazon EC2 instance is stopped/terminated. Use AWS Data Pipeline to move log data from the Amazon S3 bucket to Amazon Redshift in order to process and run reports every hour. Answer - A and C You can use the CloudWatch Logs agent installer on an existing EC2 instance to install and configure the CloudWatch Logs agent. For more information, please visit the below link: http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/QuickStartEC2Instance.html You can publish your own metrics to CloudWatch using the AWS CLI or an API. For more information, please visit the below link: http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/publishingMetrics.html Amazon Redshift is a fast, fully managed data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and your existing Business Intelligence (BI) tools. It allows you to run complex analytic queries against petabytes of structured data, using sophisticated query optimization, columnar storage on high-performance local disks, and massively parallel query execution. Most results come back in seconds. For more information on copying data from S3 to Redshift, please refer to the below link: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-copydata-redshift.html
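The "unique visitors" processing described in options A and C reduces to counting distinct client IPs in the access logs. A minimal sketch, assuming Apache Common Log Format where the first whitespace-separated field is the client IP:

```python
def count_unique_visitors(log_lines):
    """Count distinct client IPs in Apache-style access log lines.

    Assumes the first whitespace-separated field is the client IP,
    as in the Common Log Format; real reports would also bucket by hour.
    """
    visitors = set()
    for line in log_lines:
        fields = line.split()
        if fields:
            visitors.add(fields[0])
    return len(visitors)

# Illustrative sample log lines (IPs and paths are made up)
logs = [
    '10.0.0.1 - - [10/Oct/2023:13:55:36 +0000] "GET / HTTP/1.1" 200 2326',
    '10.0.0.2 - - [10/Oct/2023:13:55:40 +0000] "GET /about HTTP/1.1" 200 512',
    '10.0.0.1 - - [10/Oct/2023:13:56:01 +0000] "GET /cart HTTP/1.1" 404 0',
]
print(count_unique_visitors(logs))  # 2
```

In option A this counting happens inside CloudWatch via a metric filter; in option C it would run as SQL against the log data after the Data Pipeline copy into Redshift.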

If your application performs operations or workflows that take a long time to complete, what can an Elastic Beanstalk worker environment do for you? A. Manage an Amazon SQS queue and run a daemon process on each instance B. Manage an Amazon SNS topic and run a daemon process on each instance C. Manage Lambda functions and run a daemon process on each instance D. Manage the ELB and run a daemon process on each instance

A. Manage an Amazon SQS queue and run a daemon process on each instance Answer - A Elastic Beanstalk simplifies this process by managing the Amazon SQS queue and running a daemon process on each instance that reads from the queue for you. When the daemon pulls an item from the queue, it sends an HTTP POST request locally to http://localhost/ with the contents of the queue message in the body. All that your application needs to do is perform the long-running task in response to the POST. For more information on Elastic Beanstalk worker environments, please visit the below URL: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features-managing-env-tiers.html
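The daemon's behavior can be sketched as a simple loop. This is an illustration only: the real daemon performs an HTTP POST to http://localhost/, which is stood in for here by calling a handler function directly, and the queue is a plain list rather than SQS:

```python
def run_worker(queue, handle_post):
    """Sketch of the Elastic Beanstalk worker daemon's loop.

    For each message pulled from the queue, the real daemon POSTs the
    message body to the application at http://localhost/; here that POST
    is simulated by handle_post(body). A message is deleted from the
    queue only when the handler returns HTTP 200, mirroring the daemon's
    behavior of leaving failed messages visible for retry.
    """
    processed = []
    while queue:
        body = queue[0]
        status = handle_post(body)   # the application's POST handler
        if status == 200:
            queue.pop(0)             # acknowledged: delete from queue
            processed.append(body)
        else:
            break                    # message stays on the queue for retry
    return processed

done = run_worker(["resize-image-42", "send-email-7"], lambda body: 200)
print(done)  # ['resize-image-42', 'send-email-7']
```

The key point the sketch captures is that the application never talks to SQS itself; it only has to answer POST requests.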

One of the instances in your Auto Scaling group fails its health check and returns a status of Impaired to Auto Scaling. What will Auto Scaling do in this case? A. Terminate the instance and launch a new instance B. Send an SNS notification C. Perform a health check until cool down before declaring that the instance has failed D. Wait for the instance to become healthy before sending traffic

A. Terminate the instance and launch a new instance Answer - A Auto Scaling periodically performs health checks on the instances in your Auto Scaling group and identifies any instances that are unhealthy. You can configure Auto Scaling to determine the health status of an instance using Amazon EC2 status checks, Elastic Load Balancing health checks, or custom health checks. By default, Auto Scaling health checks use the results of the EC2 status checks to determine the health status of an instance. Auto Scaling marks an instance as unhealthy if the instance fails one or more of the status checks, then terminates it and launches a replacement. For more information on monitoring in Auto Scaling, please visit the below URL: http://docs.aws.amazon.com/autoscaling/latest/userguide/as-monitoring-features.html

For AWS Auto Scaling, what is the first transition state an instance enters after leaving steady state when scaling in due to health check failure or decreased load? A. Terminating B. Detaching C. Terminating:Wait D. EnteringStandby

A. Terminating Answer - A When a scale-in event occurs, the instance leaves the InService (steady) state and enters the Terminating state first; Terminating:Wait applies only afterwards, and only when a lifecycle hook is configured. For more information on the Auto Scaling lifecycle, please refer to the below link: http://docs.aws.amazon.com/autoscaling/latest/userguide/AutoScalingGroupLifecycle.html

You have an application hosted in AWS. You want to ensure that when certain thresholds are reached, a DevOps engineer is notified. Choose 3 answers from the options given below. A. Use the CloudWatch Logs agent to send log data from the app to CloudWatch Logs from Amazon EC2 instances B. Pipe data from EC2 to the application logs using AWS Data Pipeline and CloudWatch C. Once a CloudWatch alarm is triggered, use SNS to notify the Senior DevOps Engineer. D. Set the threshold your application can tolerate in a CloudWatch Logs group and link a CloudWatch alarm on that threshold.

A. Use the CloudWatch Logs agent to send log data from the app to CloudWatch Logs from Amazon EC2 instances C. Once a CloudWatch alarm is triggered, use SNS to notify the Senior DevOps Engineer. D. Set the threshold your application can tolerate in a CloudWatch Logs group and link a CloudWatch alarm on that threshold. Answer - A, C and D You can use CloudWatch Logs to monitor applications and systems using log data. For example, CloudWatch Logs can track the number of errors that occur in your application logs and send you a notification whenever the rate of errors exceeds a threshold you specify. CloudWatch Logs uses your log data for monitoring, so no code changes are required. For example, you can monitor application logs for specific literal terms (such as "NullReferenceException") or count the number of occurrences of a literal term at a particular position in log data (such as "404" status codes in an Apache access log). When the term you are searching for is found, CloudWatch Logs reports the data to a CloudWatch metric that you specify. For more information on CloudWatch Logs, please refer to the below link: http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html Amazon CloudWatch uses Amazon SNS to send email. First, create and subscribe to an SNS topic. When you create a CloudWatch alarm, you can add this SNS topic to send an email notification when the alarm changes state. For more information on CloudWatch and SNS, please refer to the below link: http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/US_SetupSNS.html
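Answers C and D together map to a metric filter plus an alarm. A minimal CloudFormation sketch of that pairing; the log group name, namespace, filter pattern, threshold, and SNS topic ARN are all placeholders:

```yaml
Resources:
  ErrorMetricFilter:
    Type: AWS::Logs::MetricFilter
    Properties:
      LogGroupName: my-app-logs              # placeholder log group
      FilterPattern: '"NullReferenceException"'
      MetricTransformations:
        - MetricName: AppErrorCount
          MetricNamespace: MyApp             # placeholder namespace
          MetricValue: '1'                   # emit 1 per matching log event
  ErrorAlarm:
    Type: AWS::CloudWatch::Alarm
    Properties:
      Namespace: MyApp
      MetricName: AppErrorCount
      Statistic: Sum
      Period: 300
      EvaluationPeriods: 1
      Threshold: 10                          # the threshold the app can tolerate
      ComparisonOperator: GreaterThanThreshold
      AlarmActions:
        - arn:aws:sns:us-east-1:123456789012:devops-alerts   # placeholder topic ARN
```

When the filtered error count exceeds the threshold, the alarm fires and SNS notifies the subscribed engineer.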

You have a development team that is continuously spending a lot of time rolling back updates for an application. They work on changes, and if a change fails, they spend more than 5-6 hours rolling back the update. Which of the below options can help reduce the time taken to roll back application versions? A. Use Elastic Beanstalk and re-deploy using Application Versions B. Use S3 to store each version and then re-deploy with Elastic Beanstalk C. Use CloudFormation and update the stack with the previous template D. Use OpsWorks and re-deploy using the rollback feature.

A. Use Elastic Beanstalk and re-deploy using Application Versions Answer - A Option B is invalid because Elastic Beanstalk already has the facility to manage various versions and you don't need to use S3 separately for this. Option C is invalid because with CloudFormation you would need to maintain the versions yourself; Elastic Beanstalk can do that automatically for you. Option D is more suited to production scenarios, while Elastic Beanstalk is ideal for development scenarios such as this one. AWS Elastic Beanstalk is the perfect solution for developers to maintain application versions. With AWS Elastic Beanstalk, you can quickly deploy and manage applications in the AWS Cloud without worrying about the infrastructure that runs those applications. AWS Elastic Beanstalk reduces management complexity without restricting choice or control. You simply upload your application, and AWS Elastic Beanstalk automatically handles the details of capacity provisioning, load balancing, scaling, and application health monitoring. For more information on AWS Elastic Beanstalk, please refer to the below link: https://aws.amazon.com/documentation/elastic-beanstalk/

You are using Elastic Beanstalk to manage your e-commerce store. The store is based on an open source e-commerce platform and is deployed across multiple instances in an Auto Scaling group. Your development team often creates new "extensions" for the e-commerce store. These extensions include PHP source code as well as an SQL upgrade script used to make any necessary updates to the database schema. You have noticed that some extension deployments fail due to an error when running the SQL upgrade script. After further investigation, you realize that this is because the SQL script is being executed on all of your Amazon EC2 instances. How would you ensure that the SQL script is only executed once per deployment regardless of how many Amazon EC2 instances are running at the time? A. Use a "Container command" within an Elastic Beanstalk configuration file to execute the script, ensuring that the "leader only" flag is set to true. B. Make use of the Amazon EC2 metadata service to query whether the instance is marked as the "leader" in the Auto Scaling group. Only execute the script if "true" is returned. C. Use a "Solo Command" within an Elastic Beanstalk configuration file to execute the script. The Elastic Beanstalk service will ensure that the command is only executed once. D. Update the Amazon RDS security group to only allow write access from a single instance in the Auto Scaling group; that way, only one instance will successfully execute the script on the database.

A. Use a "Container command" within an Elastic Beanstalk configuration file to execute the script, ensuring that the "leader only" flag is set to true. Answer - A You can use the container_commands key to execute commands that affect your application source code. Container commands run after the application and web server have been set up and the application version archive has been extracted, but before the application version is deployed. Non-container commands and other customization operations are performed prior to the application source code being extracted. You can use leader_only to only run the command on a single instance, or configure a test to only run the command when a test command evaluates to true. Leader-only container commands are only executed during environment creation and deployments, while other commands and server customization operations are performed every time an instance is provisioned or updated. Leader-only container commands are not executed due to launch configuration changes, such as a change in the AMI ID or instance type. For more information on customizing containers, please visit the below URL: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
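A sketch of such a configuration file, placed under the .ebextensions directory of the application source bundle; the file name and script path are illustrative:

```yaml
# .ebextensions/sql-upgrade.config  (file name is illustrative)
container_commands:
  01_run_sql_upgrade:
    command: "bash scripts/sql_upgrade.sh"   # hypothetical schema-upgrade script
    leader_only: true                        # run on a single (leader) instance only
```

With leader_only set to true, the command runs once per deployment rather than on every instance in the Auto Scaling group.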

You need a CloudFormation template that creates a Route53 record automatically on every launch, but only when it is not running in production. How should you implement this? A. Use a Parameter for environment, and add a Condition on the Route53 Resource in the template to create the record only when environment is not production. B. Create two templates, one with the Route53 record value and one with a null value for the record. Use the one without it when deploying to production. C. Use a Parameter for environment, and add a Condition on the Route53 Resource in the template to create the record with a null string when environment is production. D. Create two templates, one with the Route53 record and one without it. Use the one without it when deploying to production.

A. Use a Parameter for environment, and add a Condition on the Route53 Resource in the template to create the record only when environment is not production. Answer - A The optional Conditions section includes statements that define when a resource is created or when a property is defined. For example, you can compare whether a value is equal to another value. Based on the result of that condition, you can conditionally create resources. If you have multiple conditions, separate them with commas. You might use conditions when you want to reuse a template that can create resources in different contexts, such as a test environment versus a production environment. In your template, you can add an EnvironmentType input parameter, which accepts either prod or test as inputs. For the production environment, you might include Amazon EC2 instances with certain capabilities; however, for the test environment, you want to use reduced capabilities to save money. With conditions, you can define which resources are created and how they're configured for each environment type. For more information on CloudFormation conditions, please refer to the below link: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/conditions-section-structure.html
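A minimal sketch of option A, using an EnvironmentType parameter; the hosted zone, record name, and target value are placeholders:

```yaml
Parameters:
  EnvironmentType:
    Type: String
    AllowedValues: [prod, test]
Conditions:
  IsNotProduction: !Not [!Equals [!Ref EnvironmentType, prod]]
Resources:
  DevDnsRecord:
    Type: AWS::Route53::RecordSet
    Condition: IsNotProduction         # record is created only outside production
    Properties:
      HostedZoneName: example.com.     # placeholder zone
      Name: dev.example.com.           # placeholder record name
      Type: CNAME
      TTL: '300'
      ResourceRecords:
        - my-elb-1234.us-east-1.elb.amazonaws.com   # placeholder target
```

Because the Condition is attached to the resource itself, a single template serves both environments: passing EnvironmentType=prod simply skips the record.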

The operations team and the development team want a single place to view both operating system and application logs. How should you implement this using AWS services? Choose two from the options below A. Using AWS CloudFormation, create a CloudWatch Logs LogGroup and send the operating system and application logs of interest using the CloudWatch Logs Agent. B. Using AWS CloudFormation and configuration management, set up remote logging to send events via UDP packets to CloudTrail. C. Using configuration management, set up remote logging to send events to Amazon Kinesis and insert these into Amazon CloudSearch or Amazon Redshift, depending on available analytic tools. D. Using AWS CloudFormation, merge the application logs with the operating system logs, and use IAM Roles to allow both teams to have access to view console output from Amazon EC2.

A. Using AWS CloudFormation, create a CloudWatch Logs LogGroup and send the operating system and application logs of interest using the CloudWatch Logs Agent. C. Using configuration management, set up remote logging to send events to Amazon Kinesis and insert these into Amazon CloudSearch or Amazon Redshift, depending on available analytic tools. Answer - A and C Option B is invalid because CloudTrail records AWS API activity and is not designed to ingest UDP log events. Option D is invalid because CloudWatch Logs already provides a single place to view logs; manually merging logs and granting access to console output does not achieve that. You can use Amazon CloudWatch Logs to monitor, store, and access your log files from Amazon Elastic Compute Cloud (Amazon EC2) instances, AWS CloudTrail, and other sources. You can then retrieve the associated log data from CloudWatch Logs. For more information on CloudWatch Logs, please refer to the below link: http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html You can then use Kinesis to process those logs. For more information on Amazon Kinesis, please refer to the below link: http://docs.aws.amazon.com/streams/latest/dev/introduction.html

When an Auto Scaling group is running in Amazon Elastic Compute Cloud (EC2), your application rapidly scales up and down in response to load within a 10-minute window; however, after the load peaks, you begin to see problems in your configuration management system where previously terminated Amazon EC2 resources are still showing as active. What would be a reliable and efficient way to handle the cleanup of Amazon EC2 resources within your configuration management system? Choose two answers from the options given below A. Write a script that is run by a daily cron job on an Amazon EC2 instance and that executes API Describe calls of the EC2 Auto Scaling group and removes terminated instances from the configuration management system. B. Configure an Amazon Simple Queue Service (SQS) queue for Auto Scaling actions that has a script that listens for new messages and removes terminated instances from the configuration management system. C. Use your existing configuration management system to control the launching and bootstrapping of instances to reduce the number of moving parts in the automation. D. Write a small script that is run during Amazon EC2 instance shutdown to de-register the resource from the configuration management system.

A. Write a script that is run by a daily cron job on an Amazon EC2 instance and that executes API Describe calls of the EC2 Auto Scaling group and removes terminated instances from the configuration management system. D. Write a small script that is run during Amazon EC2 instance shutdown to de-register the resource from the configuration management system. Answer - A and D There is a rich set of CLI commands available for EC2 instances. The CLI reference is located at the following link: http://docs.aws.amazon.com/cli/latest/reference/ec2/ You can use the describe-instances command to describe your EC2 instances. If you specify one or more instance IDs, Amazon EC2 returns information for those instances. If you do not specify instance IDs, Amazon EC2 returns information for all relevant instances. If you specify an instance ID that is not valid, an error is returned. If you specify an instance that you do not own, it is not included in the returned results. http://docs.aws.amazon.com/cli/latest/reference/ec2/describe-instances.html By comparing the describe-instances output with the entries in the configuration management system, you can identify which instances need to be removed.
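The reconciliation step of option A reduces to a set difference. A sketch, where the inputs are illustrative: one list is what the configuration management system has registered, the other is the set of instance IDs a describe-instances call (via the AWS CLI or an SDK) would return:

```python
def find_stale_entries(cmdb_instance_ids, active_instance_ids):
    """Return CMDB entries whose EC2 instance no longer exists.

    cmdb_instance_ids: instance IDs registered in the configuration
    management system. active_instance_ids: IDs of currently running
    instances, as reported by describe-instances. Both inputs here are
    illustrative placeholders.
    """
    return sorted(set(cmdb_instance_ids) - set(active_instance_ids))

stale = find_stale_entries(
    ["i-0aaa", "i-0bbb", "i-0ccc"],   # registered in the CMDB
    ["i-0aaa", "i-0ccc"],             # still running per describe-instances
)
print(stale)  # ['i-0bbb']
```

The daily cron job would then delete each stale entry from the configuration management system, which is exactly the cleanup the question asks for.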

You have a set of EC2 instances hosted in AWS. You have created a role named DemoRole and attached a policy to it, but you are unable to use that role with an instance. Why is this the case? A. You need to create an instance profile and associate it with that specific role. B. You are not able to associate an IAM role with an instance C. You won't be able to use that role with an instance unless you also create a user and associate it with that specific role D. You won't be able to use that role with an instance unless you also create a user group and associate it with that specific role.

A. You need to create an instance profile and associate it with that specific role. Answer - A An instance profile is a container for an IAM role that you can use to pass role information to an EC2 instance when the instance starts. Option B is invalid because you can associate a role with an instance. Options C and D are invalid because creating users or user groups is not a prerequisite for using a role with an instance. For more information on instance profiles, please visit the link: http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html

You are using Chef in your data center. Which service is designed to let the customer leverage existing Chef recipes in AWS? A. AWS Elastic Beanstalk B. AWS OpsWorks C. AWS CloudFormation D. Amazon Simple Workflow Service

B. AWS OpsWorks Answer - B AWS OpsWorks is a configuration management service that uses Chef, an automation platform that treats server configurations as code. OpsWorks uses Chef to automate how servers are configured, deployed, and managed across your Amazon Elastic Compute Cloud (Amazon EC2) instances or on-premises compute environments. OpsWorks has two offerings: AWS OpsWorks for Chef Automate and AWS OpsWorks Stacks. For more information on OpsWorks, please refer to the below link: https://aws.amazon.com/opsworks/

As an architect, you have decided to use CloudFormation instead of OpsWorks or Elastic Beanstalk for deploying the applications in your company. Unfortunately, you have discovered that there is a resource type that is not supported by CloudFormation. What can you do to get around this? A. Specify more mappings and separate your template into multiple templates by using nested stacks. B. Create a custom resource type using template developer, custom resource template, and CloudFormation. C. Specify the custom resource by separating your template into multiple templates by using nested stacks. D. Use a configuration management tool such as Chef, Puppet, or Ansible.

B. Create a custom resource type using template developer, custom resource template, and CloudFormation. Answer - B Custom resources enable you to write custom provisioning logic in templates that AWS CloudFormation runs anytime you create, update (if you changed the custom resource), or delete stacks. For example, you might want to include resources that aren't available as AWS CloudFormation resource types. You can include those resources by using custom resources. That way you can still manage all your related resources in a single stack. For more information on custom resources in CloudFormation, please visit the below URL: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-custom-resources.html
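A Lambda-backed custom resource can be sketched as below; the Custom:: type name, the ServiceToken ARN, and the input property are all placeholders for your own provisioning function:

```yaml
Resources:
  UnsupportedResource:
    Type: Custom::MyResource           # any name after Custom:: can be used
    Properties:
      ServiceToken: arn:aws:lambda:us-east-1:123456789012:function:my-provisioner  # placeholder ARN
      SomeInput: !Ref AWS::Region      # extra properties are passed to the function
```

CloudFormation invokes the function referenced by ServiceToken on create, update, and delete, and the function reports success or failure back to the stack.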

You have a complex system that involves networking, IAM policies, and multiple, three-tier applications. You are still receiving requirements for the new system, so you don't yet know how many AWS components will be present in the final design. You want to start using AWS CloudFormation to define these AWS resources so that you can automate and version-control your infrastructure. How would you use AWS CloudFormation to provide agile new environments for your customers in a cost-effective, reliable manner? A. Manually create one template to encompass all the resources that you need for the system, so you only have a single template to version-control. B. Create multiple separate templates for each logical part of the system, create nested stacks in AWS CloudFormation, and maintain several templates to version-control. C. Create multiple separate templates for each logical part of the system, and provide the outputs from one to the next using an Amazon Elastic Compute Cloud (EC2) instance running the SDK for finer granularity of control. D. Manually construct the networking layer using Amazon Virtual Private Cloud (VPC) because this does not change often, and then use AWS CloudFormation to define all other ephemeral resources.

B. Create multiple separate templates for each logical part of the system, create nested stacks in AWS CloudFormation, and maintain several templates to version-control. Answer - B As your infrastructure grows, common patterns can emerge in which you declare the same components in each of your templates. You can separate out these common components and create dedicated templates for them. That way, you can mix and match different templates but use nested stacks to create a single, unified stack. Nested stacks are stacks that create other stacks. To create nested stacks, use the AWS::CloudFormation::Stack resource in your template to reference other templates. For more information on CloudFormation best practices, please refer to the below link: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html
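The nested-stack pattern can be sketched as a parent template that references child templates by URL and wires one stack's outputs into the next; the bucket URLs and the VpcId output name are placeholders:

```yaml
Resources:
  NetworkStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/my-templates/network.yaml  # placeholder URL
  AppStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/my-templates/app.yaml      # placeholder URL
      Parameters:
        VpcId: !GetAtt NetworkStack.Outputs.VpcId   # assumes network.yaml exports VpcId
```

Each child template stays small and independently version-controlled, while the parent stack creates and updates them as a single unit.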

You have an ELB setup in AWS with EC2 instances running behind it. You have been requested to monitor the incoming connections to the ELB. Which of the below options can satisfy this requirement? A. Use AWS CloudTrail with your load balancer B. Enable access logs on the load balancer C. Use a CloudWatch Logs Agent D. Create a custom metric CloudWatch filter on your load balancer

B. Enable access logs on the load balancer Answer - B Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer. Each log contains information such as the time the request was received, the client's IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and to troubleshoot issues. Option A is invalid because CloudTrail records AWS API calls, not the connection-level traffic reaching the load balancer. Options C and D are invalid since ELB already provides a logging feature for this. For more information on ELB access logs, please refer to the below link from the AWS documentation: http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/access-log-collection.html
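Each access log entry is a space-separated line. A sketch of pulling the connection-relevant fields out of a Classic Load Balancer entry; the sample line below is illustrative and only a few fields are extracted:

```python
def parse_elb_log_line(line):
    """Extract a few fields from a Classic Load Balancer access log entry.

    Field positions follow the documented space-separated layout:
    timestamp, elb name, client:port, backend:port, three latency
    values, then the ELB status code.
    """
    fields = line.split()
    return {
        "timestamp": fields[0],
        "client_ip": fields[2].split(":")[0],   # strip the client port
        "elb_status_code": int(fields[7]),
    }

# Illustrative log entry (names and addresses are made up)
line = ('2023-10-10T23:39:43.945958Z my-loadbalancer 192.168.131.39:2817 '
        '10.0.0.1:80 0.000073 0.001048 0.000057 200 200 0 29 '
        '"GET http://www.example.com:80/ HTTP/1.1" "curl/7.38.0" - -')
entry = parse_elb_log_line(line)
print(entry["client_ip"], entry["elb_status_code"])  # 192.168.131.39 200
```

Aggregating client_ip values over a log file gives exactly the incoming-connection view the question asks for.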

As part of your continuous deployment process, your application undergoes an I/O load performance test before it is deployed to production using new AMIs. The application uses one Amazon Elastic Block Store (EBS) PIOPS volume per instance and requires consistent I/O performance. Which of the following must be carried out to ensure that I/O load performance tests yield the correct results in a repeatable manner? A. Ensure that the I/O block sizes for the test are randomly selected. B. Ensure that the Amazon EBS volumes have been pre-warmed by reading all the blocks before the test. C. Ensure that snapshots of the Amazon EBS volumes are created as a backup. D. Ensure that the Amazon EBS volume is encrypted.

B. Ensure that the Amazon EBS volumes have been pre-warmed by reading all the blocks before the test. Answer - B During the AMI-creation process, Amazon EC2 creates snapshots of your instance's root volume and any other EBS volumes attached to your instance. New EBS volumes receive their maximum performance the moment that they are available and do not require initialization (formerly known as pre-warming). However, storage blocks on volumes that were restored from snapshots must be initialized (pulled down from Amazon S3 and written to the volume) before you can access the block. This preliminary action takes time and can cause a significant increase in the latency of an I/O operation the first time each block is accessed. For most applications, amortizing this cost over the lifetime of the volume is acceptable; for a repeatable performance test, however, every block must be read once before the test begins. Option A is invalid because block sizes should be fixed and representative, not randomly selected, if results are to be repeatable. Option C is invalid because snapshots are a backup measure and do not affect I/O test results; in a continuous deployment pipeline the test volumes are destroyed afterwards anyway. Option D is invalid because encryption is a security feature and does not normally influence load-test repeatability. For more information on EBS initialization, please refer to the below link: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-initialize.html

You have an application which consists of EC2 instances in an Auto Scaling group. Between a particular time frame every day, there is an increase in traffic to your website. Hence users are complaining of a poor response time on the application. You have configured your Auto Scaling group to deploy one new EC2 instance when CPU utilization is greater than 60% for 2 consecutive periods of 5 minutes. What is the least cost-effective way to resolve this problem? A. Decrease the consecutive number of collection periods B. Increase the minimum number of instances in the Auto Scaling group C. Decrease the collection period to ten minutes D. Decrease the threshold CPU utilization percentage at which to deploy a new instance

B. Increase the minimum number of instances in the Auto Scaling group Answer - B If you increase the minimum number of instances, they will keep running even when the load on the website is low, so you incur cost when there is no need to. All of the remaining options add instances only when the load actually rises, and are therefore more cost-effective ways to increase capacity under high load. For more information on demand-based scaling, please refer to the below link: http://docs.aws.amazon.com/autoscaling/latest/userguide/as-scale-based-on-demand.html

During metric analysis, your team has determined that the company's website is experiencing response times during peak hours that are higher than anticipated. You currently rely on Auto Scaling to make sure that you are scaling your environment during peak windows. How can you improve your Auto Scaling policy to reduce this high response time? Choose 2 answers. A. Push custom metrics to CloudWatch to monitor your CPU and network bandwidth from your servers, which will allow your Auto Scaling policy to have better fine-grain insight. B. Increase your Auto Scaling group's number of max servers. C. Create a script that runs and monitors your servers; when it detects an anomaly in load, it posts to an Amazon SNS topic that triggers Elastic Load Balancing to add more servers to the load balancer. D. Push custom metrics to CloudWatch for your application that include more detailed information about your web application, such as how many requests it is handling and how many are waiting to be processed.

B. Increase your Auto Scaling group's number of max servers. D. Push custom metrics to CloudWatch for your application that include more detailed information about your web application, such as how many requests it is handling and how many are waiting to be processed. Answer - B and D Option B makes sense because the group may be capped at its maximum size, preventing it from adding enough capacity to handle the peak load. Option D helps ensure Auto Scaling scales the group on metrics that reflect the application's real workload, such as requests in flight and requests waiting to be processed. For more information on Auto Scaling health checks, please refer to the below link from the AWS documentation: http://docs.aws.amazon.com/autoscaling/latest/userguide/healthcheck.html

You use Amazon CloudWatch as your primary monitoring system for your web application. After a recent software deployment, your users are getting intermittent 500 Internal Server Errors when using the web application. You want to create a CloudWatch alarm, and notify an on-call engineer when these occur. How can you accomplish this using AWS services? Choose three answers from the options given below. A. Deploy your web application as an AWS Elastic Beanstalk application. Use the default Elastic Beanstalk CloudWatch metrics to capture 500 Internal Server Errors. Set a CloudWatch alarm on that metric. B. Install a CloudWatch Logs Agent on your servers to stream web application logs to CloudWatch. C. Use Amazon Simple Email Service to notify an on-call engineer when a CloudWatch alarm is triggered. D. Create a CloudWatch Logs group and define metric filters that capture 500 Internal Server Errors. Set a CloudWatch alarm on that metric. E. Use Amazon Simple Notification Service to notify an on-call engineer when a CloudWatch alarm is triggered.

B. Install a CloudWatch Logs Agent on your servers to stream web application logs to CloudWatch. D. Create a CloudWatch Logs group and define metric filters that capture 500 Internal Server Errors. Set a CloudWatch alarm on that metric. E. Use Amazon Simple Notification Service to notify an on-call engineer when a CloudWatch alarm is triggered. Answer - B, D and E You can use CloudWatch Logs to monitor applications and systems using log data. CloudWatch Logs uses your log data for monitoring, so no code changes are required. For example, you can monitor application logs for specific literal terms (such as "NullReferenceException") or count the number of occurrences of a literal term at a particular position in log data (such as "404" status codes in an Apache access log). When the term you are searching for is found, CloudWatch Logs reports the data to a CloudWatch metric that you specify. Log data is encrypted while in transit and while it is at rest. For more information on CloudWatch Logs, please refer to the below link: http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html Amazon CloudWatch uses Amazon SNS to send email. First, create and subscribe to an SNS topic. When you create a CloudWatch alarm, you can add this SNS topic to send an email notification when the alarm changes state. For more information on SNS and CloudWatch Logs, please refer to the below link: http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/US_SetupSNS.html

You are administering a continuous integration application that polls version control for changes and then launches new Amazon EC2 instances for a full suite of build tests. What should you do to ensure the lowest overall cost while being able to run as many tests in parallel as possible? A. Perform syntax checking on the continuous integration system before launching a new Amazon EC2 instance for build test, unit and integration tests. B. Perform syntax and build tests on the continuous integration system before launching the new Amazon EC2 instance unit and integration tests. C. Perform all tests on the continuous integration system, using AWS OpsWorks for unit, integration, and build tests. D. Perform syntax checking on the continuous integration system before launching a new AWS Data Pipeline for coordinating the output of unit, integration, and build tests.

B. Perform syntax and build tests on the continuous integration system before launching the new Amazon EC2 instances for unit and integration tests. Answer - B Continuous Integration (CI) is a development practice that requires developers to integrate code into a shared repository several times a day. Each check-in is then verified by an automated build, allowing teams to detect problems early. Options A and D are invalid because you can run build tests on the CI system itself, not only syntax tests; syntax checks are normally done at coding time, not at build time. Option C is invalid because OpsWorks is not ideally used for build and integration tests. For an example of a continuous integration system, please refer to Jenkins via the URL below https://jenkins.io/

You are a DevOps engineer for a company. You have been requested to create a rolling deployment solution that is cost-effective with minimal downtime. How should you achieve this? Choose two answers from the options below A. Re-deploy your application using a CloudFormation template to deploy Elastic Beanstalk B. Re-deploy with a CloudFormation template, define update policies on Auto Scaling groups in your CloudFormation template C. Use update stack policies with CloudFormation to deploy new code D. After each stack is deployed, tear down the old stack

B. Re-deploy with a CloudFormation template, define update policies on Auto Scaling groups in your CloudFormation template C. Use update stack policies with CloudFormation to deploy new code Answer - B and C The AWS::AutoScaling::AutoScalingGroup resource supports an UpdatePolicy attribute. This is used to define how an Auto Scaling group resource is updated when an update to the CloudFormation stack occurs. A common approach to updating an Auto Scaling group is to perform a rolling update, which is done by specifying the AutoScalingRollingUpdate policy. This retains the same Auto Scaling group and replaces old instances with new ones, according to the parameters specified. Option A is invalid because it is not efficient to use CloudFormation simply to re-deploy an Elastic Beanstalk application. Option D is invalid because tearing down stacks after each deployment is an inefficient process when update policies are available. For more information on Auto Scaling rolling updates please refer to the below link: https://aws.amazon.com/premiumsupport/knowledge-center/auto-scaling-group-rolling-updates/
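The UpdatePolicy attribute described above might look like the following CloudFormation fragment. This is a sketch only; the group sizes, batch size, pause time, and launch configuration name are hypothetical values:

```yaml
Resources:
  WebServerGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: "2"
      MaxSize: "6"
      LaunchConfigurationName: !Ref WebLaunchConfig   # hypothetical launch configuration
      AvailabilityZones: !GetAZs ""
    UpdatePolicy:
      AutoScalingRollingUpdate:
        MinInstancesInService: 1   # keep at least one instance serving traffic during the update
        MaxBatchSize: 2            # replace at most two instances per batch
        PauseTime: PT5M            # wait five minutes between batches
```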

The project you are working on currently uses a single AWS CloudFormation template to deploy its AWS infrastructure, which supports a multi-tier web application. You have been tasked with organizing the AWS CloudFormation resources so that they can be maintained in the future, and so that different departments such as Networking and Security can review the architecture before it goes to Production. How should you do this in a way that accommodates each department, using their existing workflows? A. Organize the AWS CloudFormation template so that related resources are next to each other in the template, such as VPC subnets and routing rules for Networking and security groups and IAM information for Security. B. Separate the AWS CloudFormation template into a nested structure that has individual templates for the resources that are to be governed by different departments, and use the outputs from the networking and security stacks for the application template that you control. C. Organize the AWS CloudFormation template so that related resources are next to each other in the template for each department's use, leverage your existing continuous integration tool to constantly deploy changes from all parties to the Production environment, and then run tests for validation. D. Use a custom application and the AWS SDK to replicate the resources defined in the current AWS CloudFormation template, and use the existing code review system to allow other departments to approve changes before altering the application for future deployments.

B. Separate the AWS CloudFormation template into a nested structure that has individual templates for the resources that are to be governed by different departments, and use the outputs from the networking and security stacks for the application template that you control. Answer - B As your infrastructure grows, common patterns can emerge in which you declare the same components in each of your templates. You can separate out these common components and create dedicated templates for them. That way, you can mix and match different templates but use nested stacks to create a single, unified stack. Nested stacks are stacks that create other stacks. To create nested stacks, use the AWS::CloudFormation::Stack resource in your template to reference other templates. For more information on best practices for CloudFormation please refer to the below link: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html
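A nested-stack layout along these lines could be sketched as follows; the template URLs, parameter names, and output names are all hypothetical:

```yaml
Resources:
  NetworkStack:                       # owned and reviewed by Networking
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/my-templates/network.yaml
  SecurityStack:                      # owned and reviewed by Security
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/my-templates/security.yaml
  AppStack:                           # the application template you control
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/my-templates/app.yaml
      Parameters:
        # Wire the departmental stacks' outputs into the application stack
        SubnetId: !GetAtt NetworkStack.Outputs.PublicSubnetId
        WebSecurityGroup: !GetAtt SecurityStack.Outputs.WebSgId
```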

You have enabled Elastic Load Balancing HTTP health checking. After looking at the AWS Management Console, you see that all instances are passing health checks, but your customers are reporting that your site is not responding. What is the cause? A. The HTTP health checking system is misreporting due to latency in inter-instance metadata synchronization. B. The health check in place is not sufficiently evaluating the application function. C. The application is returning a positive health check too quickly for the AWS Management Console to respond. D. Latency in DNS resolution is interfering with Amazon EC2 metadata retrieval.

B. The health check in place is not sufficiently evaluating the application function. Answer - B You need to have a custom health check which will evaluate the application functionality. It is not enough to use the normal health checks; if the application functionality does not work and you don't have custom health checks, the instances will still be deemed healthy. If you have custom health checks, you can send the information from your health checks to Auto Scaling so that Auto Scaling can use this information. For example, if you determine that an instance is not functioning as expected, you can set the health status of the instance to Unhealthy. The next time that Auto Scaling performs a health check on the instance, it will determine that the instance is unhealthy and then launch a replacement instance. For more information on Auto Scaling health checks, please refer to the below document link from AWS: http://docs.aws.amazon.com/autoscaling/latest/userguide/healthcheck.html
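A health check that actually exercises the application tier might be wired up as in this hedged CloudFormation sketch, where /healthcheck is a hypothetical endpoint the application implements to test its real dependencies (database, cache, and so on), and the thresholds and group sizes are placeholder values:

```yaml
Resources:
  WebLoadBalancer:
    Type: AWS::ElasticLoadBalancing::LoadBalancer
    Properties:
      AvailabilityZones: !GetAZs ""
      Listeners:
        - LoadBalancerPort: "80"
          InstancePort: "80"
          Protocol: HTTP
      HealthCheck:
        Target: HTTP:80/healthcheck   # app-level endpoint that checks DB/cache, not just "is port 80 open"
        HealthyThreshold: "3"
        UnhealthyThreshold: "2"
        Interval: "15"
        Timeout: "5"
  WebServerGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: "2"
      MaxSize: "4"
      AvailabilityZones: !GetAZs ""
      LoadBalancerNames:
        - !Ref WebLoadBalancer
      HealthCheckType: ELB            # replace instances that fail the ELB check, not only EC2 status checks
      HealthCheckGracePeriod: 300
```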

What is web identity federation? A. Use of an identity provider like Google or Facebook to become an AWS IAM User. B. Use of an identity provider like Google or Facebook to exchange for temporary AWS security credentials. C. Use of AWS IAM User tokens to log in as a Google or Facebook user. D. Use of AWS STS Tokens to log in as a Google or Facebook user.

B. Use of an identity provider like Google or Facebook to exchange for temporary AWS security credentials. Answer - B With web identity federation, you don't need to create custom sign-in code or manage your own user identities. Instead, users of your app can sign in using a well-known identity provider (IdP), such as Login with Amazon, Facebook, Google, or any other OpenID Connect (OIDC)-compatible IdP; receive an authentication token; and then exchange that token for temporary security credentials in AWS that map to an IAM role with permissions to use the resources in your AWS account. Using an IdP helps you keep your AWS account secure, because you don't have to embed and distribute long-term security credentials with your application. For more information on web identity federation please refer to the below link: http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_oidc.html
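The exchange hinges on an IAM role that trusts the identity provider. A role trust policy permitting AssumeRoleWithWebIdentity might look like the following sketch (Facebook is used as the example IdP and the app ID is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Federated": "graph.facebook.com" },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": { "graph.facebook.com:app_id": "MY_APP_ID" }
      }
    }
  ]
}
```

A user who signs in with Facebook presents the resulting token to STS, which returns temporary credentials scoped to whatever permissions policy is attached to this role.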

You need to implement A/B deployments for several multi-tier web applications. Each of them has its individual infrastructure: Amazon Elastic Compute Cloud (EC2) front-end servers, Amazon ElastiCache clusters, Amazon Simple Queue Service (SQS) queues, and Amazon Relational Database (RDS) instances. Which combination of services would give you the ability to control traffic between different deployed versions of your application? A. Create one AWS Elastic Beanstalk application and all AWS resources (using configuration files inside the application source bundle) for each web application. New versions would be deployed by creating new Elastic Beanstalk environments and using the Swap URLs feature. B. Using AWS CloudFormation templates, create one Elastic Beanstalk application and all AWS resources (in the same template) for each web application. New versions would be deployed using AWS CloudFormation templates to create new Elastic Beanstalk environments, and traffic would be balanced between them using weighted Round Robin (WRR) records in Amazon Route 53. C. Using AWS CloudFormation templates, create one Elastic Beanstalk application and all AWS resources (in the same template) for each web application. New versions would be deployed by updating a parameter on the CloudFormation template and passing it to the cfn-hup helper daemon, and traffic would be balanced between them using weighted Round Robin (WRR) records in Amazon Route 53. D. Create one Elastic Beanstalk application and all AWS resources (using configuration files inside the application source bundle) for each web application. New versions would be deployed by updating the Elastic Beanstalk application version for the current Elastic Beanstalk environment.

B. Using AWS CloudFormation templates, create one Elastic Beanstalk application and all AWS resources (in the same template) for each web application. New versions would be deployed using AWS CloudFormation templates to create new Elastic Beanstalk environments, and traffic would be balanced between them using weighted Round Robin (WRR) records in Amazon Route 53. Answer - B This is an example of a Blue/Green deployment. With Amazon Route 53, you can define a percentage of traffic to go to the green environment and gradually update the weights until the green environment carries the full production traffic. A weighted distribution provides the ability to perform canary analysis where a small percentage of production traffic is introduced to a new environment. You can test the new code and monitor for errors, limiting the blast radius if any issues are encountered. It also allows the green environment to scale out to support the full production load if you're using Elastic Load Balancing. For more information on Blue/Green deployments, please refer to the link: https://d0.awsstatic.com/whitepapers/AWS_Blue_Green_Deployments.pdf

When thinking of AWS Elastic Beanstalk's model, which is true? A. Applications have many deployments, deployments have many environments. B. Environments have many applications, applications have many deployments. C. Applications have many environments, environments have many deployments. D. Deployments have many environments, environments have many applications.

C. Applications have many environments, environments have many deployments. Answer - C The first step in using Elastic Beanstalk is to create an application, which represents your web application in AWS. In Elastic Beanstalk an application serves as a container for the environments that run your web app, and versions of your web app's source code, saved configurations, logs and other artifacts that you create while using Elastic Beanstalk. For more information on Applications, please refer to the below link: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/applications.html Deploying a new version of your application to an environment is typically a fairly quick process. The new source bundle is deployed to an instance and extracted, and then the web container or application server picks up the new version and restarts if necessary. During deployment, your application might still become unavailable to users for a few seconds. You can prevent this by configuring your environment to use rolling deployments to deploy the new version to instances in batches. For more information on deployment, please refer to the below link: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.deploy-existing-version.html

Your mobile application includes a photo-sharing service that is expecting tens of thousands of users at launch. You will leverage Amazon Simple Storage Service (S3) for storage of the user images, and you must decide how to authenticate and authorize your users for access to these images. You also need to manage the storage of these images. Which two of the following approaches should you use? Choose two answers from the options below A. Create an Amazon S3 bucket per user, and use your application to generate the S3 URI for the appropriate content. B. Use AWS Identity and Access Management (IAM) user accounts as your application-level user database, and offload the burden of authentication from your application code. C. Authenticate your users at the application level, and use AWS Security Token Service (STS) to grant token-based authorization to S3 objects. D. Authenticate your users at the application level, and send an SMS token message to the user. Create an Amazon S3 bucket with the same name as the SMS message token, and move the user's objects to that bucket. E. Use a key-based naming scheme comprised from the user IDs for all user objects in a single Amazon S3 bucket.

C. Authenticate your users at the application level, and use AWS Security Token Service (STS) to grant token-based authorization to S3 objects. E. Use a key-based naming scheme comprised from the user IDs for all user objects in a single Amazon S3 bucket. Answer - C and E The AWS Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users). The token can then be used to grant access to the objects in S3. You can then provide access to the objects based on the key values generated via the user ID. Option A is possible but becomes a maintenance overhead because of the number of buckets. Option B is invalid because using IAM users as an application-level user database is not a good security practice. Option D is invalid because SMS tokens are not efficient for this requirement. For more information on the Security Token Service please refer to the below link: http://docs.aws.amazon.com/STS/latest/APIReference/Welcome.html
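Option E's key-based naming scheme can be illustrated with a small sketch. The users/&lt;user_id&gt;/photos/ prefix is a hypothetical convention; the point is that a single-bucket layout keyed by user ID lets the STS-issued policy scope each user's temporary credentials to their own prefix.

```python
def object_key(user_id: str, filename: str) -> str:
    """Build a per-user S3 object key inside a single shared bucket.

    Prefixing every key with the user ID lets an STS-issued policy
    restrict a user's temporary credentials to "users/<user_id>/*".
    (Bucket name and prefix layout here are hypothetical.)
    """
    return f"users/{user_id}/photos/{filename}"

# The matching policy would then allow s3:GetObject / s3:PutObject only on
# arn:aws:s3:::photo-bucket/users/<user_id>/*  (bucket name hypothetical).
print(object_key("42", "beach.jpg"))  # users/42/photos/beach.jpg
```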

Your company has developed a web application and is hosting it in an Amazon S3 bucket configured for static website hosting. The application is using the AWS SDK for JavaScript in the browser to access data stored in an Amazon DynamoDB table. How can you ensure that API keys for access to your data in DynamoDB are kept secure? A. Create an Amazon S3 role in IAM with access to the specific DynamoDB tables, and assign it to the bucket hosting your website. B. Configure S3 bucket tags with your AWS access keys for your bucket hosing your website so that the application can query them for access. C. Configure a web identity federation role within IAM to enable access to the correct DynamoDB resources and retrieve temporary credentials. D. Store AWS keys in global variables within your application and configure the application to use these credentials when making requests.

C. Configure a web identity federation role within IAM to enable access to the correct DynamoDB resources and retrieve temporary credentials. Answer - C With web identity federation, you don't need to create custom sign-in code or manage your own user identities. Instead, users of your app can sign in using a well-known identity provider (IdP), such as Login with Amazon, Facebook, Google, or any other OpenID Connect (OIDC)-compatible IdP; receive an authentication token; and then exchange that token for temporary security credentials in AWS that map to an IAM role with permissions to use the resources in your AWS account. Using an IdP helps you keep your AWS account secure, because you don't have to embed and distribute long-term security credentials with your application. For more information on web identity federation, please refer to the below document link from AWS: http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_oidc.html

Your application stores sensitive information on an EBS volume attached to your EC2 instance. How can you protect your information? Choose two answers from the options given below A. Unmount the EBS volume, take a snapshot and encrypt the snapshot. Re-mount the Amazon EBS volume B. It is not possible to encrypt an EBS volume; you must use a lifecycle policy to transfer data to S3 for encryption. C. Copy an unencrypted snapshot of an unencrypted volume; you can encrypt the copy. Volumes restored from this encrypted copy will also be encrypted. D. Create and mount a new, encrypted Amazon EBS volume. Move the data to the new volume. Delete the old Amazon EBS volume

C. Copy an unencrypted snapshot of an unencrypted volume; you can encrypt the copy. Volumes restored from this encrypted copy will also be encrypted. D. Create and mount a new, encrypted Amazon EBS volume. Move the data to the new volume. Delete the old Amazon EBS volume Answer - C and D These steps are given in the AWS documentation. To migrate data between encrypted and unencrypted volumes: 1) Create your destination volume (encrypted or unencrypted, depending on your need). 2) Attach the destination volume to the instance that hosts the data to migrate. 3) Make the destination volume available by following the procedures in Making an Amazon EBS Volume Available for Use. For Linux instances, you can create a mount point at /mnt/destination and mount the destination volume there. 4) Copy the data from your source directory to the destination volume. It may be most convenient to use a bulk-copy utility for this. To encrypt a volume's data by means of snapshot copying: 1) Create a snapshot of your unencrypted EBS volume. This snapshot is also unencrypted. 2) Copy the snapshot while applying encryption parameters. The resulting target snapshot is encrypted. 3) Restore the encrypted snapshot to a new volume, which is also encrypted. For more information on EBS encryption, please refer to the below document link from AWS: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html
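The snapshot-copy route (answer C) maps onto AWS CLI calls roughly as follows. This is a non-runnable sketch: the region, Availability Zone, and every resource ID below are placeholders.

```shell
# 1) Snapshot the unencrypted source volume (IDs are placeholders)
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0

# 2) Copy the snapshot, turning on encryption for the copy
aws ec2 copy-snapshot --source-region us-east-1 \
    --source-snapshot-id snap-0123456789abcdef0 --encrypted

# 3) Restore the encrypted copy to a new volume, which is also encrypted
aws ec2 create-volume --availability-zone us-east-1a \
    --snapshot-id snap-0fedcba9876543210
```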

You have a web application that's developed in Node.js. The code is hosted in a Git repository. You want to now deploy this application to AWS. Which of the below 2 options can fulfil this requirement? A. Create an Elastic Beanstalk application. Create a Dockerfile to install Node.js. Get the code from Git. Use the command "aws git.push" to deploy the application B. Create an AWS CloudFormation template which creates an instance with the AWS::EC2::Container resource type. With UserData, install Git to download the Node.js application and then set it up. C. Create a Dockerfile to install Node.js and get the code from Git. Use the Dockerfile to perform the deployment on a new AWS Elastic Beanstalk application. D. Create an AWS CloudFormation template which creates an instance with the AWS::EC2::Instance resource type and an AMI with Docker pre-installed. With UserData, install Git to download the Node.js application and then set it up.

C. Create a Dockerfile to install Node.js and get the code from Git. Use the Dockerfile to perform the deployment on a new AWS Elastic Beanstalk application. D. Create an AWS CloudFormation template which creates an instance with the AWS::EC2::Instance resource type and an AMI with Docker pre-installed. With UserData, install Git to download the Node.js application and then set it up. Answer - C and D Option A is invalid because there is no "aws git.push" command. Option B is invalid because there is no AWS::EC2::Container resource type. Elastic Beanstalk supports the deployment of web applications from Docker containers. With Docker containers, you can define your own runtime environment. You can choose your own platform, programming language, and any application dependencies (such as package managers or tools) that aren't supported by other platforms. Docker containers are self-contained and include all the configuration information and software your web application requires to run. For more information on Docker and Elastic Beanstalk please refer to the below link: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.html When you launch an instance in Amazon EC2, you have the option of passing user data to the instance that can be used to perform common automated configuration tasks and even run scripts after the instance starts. You can pass two types of user data to Amazon EC2: shell scripts and cloud-init directives. You can also pass this data into the launch wizard as plain text, as a file (this is useful for launching instances using the command line tools), or as base64-encoded text (for API calls). For more information on EC2 user data please refer to the below link: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
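Answer C's Dockerfile could be sketched along these lines; the base image, repository URL, port, and entry point are all hypothetical:

```dockerfile
# Minimal sketch of a Dockerfile for a Node.js app pulled from Git
FROM node:16
RUN apt-get update && apt-get install -y git
RUN git clone https://github.com/example/my-node-app.git /app   # hypothetical repo
WORKDIR /app
RUN npm install
EXPOSE 8080
CMD ["node", "server.js"]
```

Deploying this Dockerfile in the application source bundle lets Elastic Beanstalk build and run the container on the environment's instances.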

You have deployed an application to AWS which makes use of Autoscaling to launch new instances. You now want to change the instance type for the new instances. Which of the following is one of the action items to achieve this deployment? A. Use Elastic Beanstalk to deploy the new application with the new instance type B. Use Cloudformation to deploy the new application with the new instance type C. Create a new launch configuration with the new instance type D. Create new EC2 instances with the new instance type and attach it to the Autoscaling Group

C. Create a new launch configuration with the new instance type Answer - C The ideal way is to create a new launch configuration, attach it to the existing Auto Scaling group, and terminate the running instances so that replacements launch with the new instance type. Option A is invalid because the current scenario already uses Auto Scaling directly, so introducing Elastic Beanstalk is not the ideal option. Option B is invalid because creating a whole CloudFormation template for a single Auto Scaling group would be a maintenance overhead. Option D is invalid because the Auto Scaling group will still launch EC2 instances with the older launch configuration. For more information on Auto Scaling launch configurations, please refer to the below document link from AWS: http://docs.aws.amazon.com/autoscaling/latest/userguide/LaunchConfiguration.html

Your application is currently running on Amazon EC2 instances behind a load balancer. Your management has decided to use a Blue/Green deployment strategy. How should you implement this for each deployment? A. Set up Amazon Route 53 health checks to fail over from any Amazon EC2 instance that is currently being deployed to. B. Using AWS CloudFormation, create a test stack for validating the code, and then deploy the code to each production Amazon EC2 instance. C. Create a new load balancer with new Amazon EC2 instances, carry out the deployment, and then switch DNS over to the new load balancer using Amazon Route 53 after testing. D. Launch more Amazon EC2 instances to ensure high availability, de-register each Amazon EC2 instance from the load balancer, upgrade it, and test it, and then register it again with the load balancer.

C. Create a new load balancer with new Amazon EC2 instances, carry out the deployment, and then switch DNS over to the new load balancer using Amazon Route 53 after testing. Answer - C 1) First create a new ELB which will be used to point to the new production changes. 2) Use the weighted routing policy for Route 53 to distribute the traffic to the 2 ELBs, for example on an 80-20% basis; the percentages can be changed based on the requirement. 3) Finally, when all changes have been tested, Route 53 can be set to send 100% of traffic to the new ELB. Option A is incorrect because this is a failover scenario and cannot be used for Blue/Green deployments; in Blue/Green deployments, you need to have 2 environments running side by side. Option B is incorrect because you need to have a production stack with the changes running side by side. Option D is incorrect because this is not a Blue/Green deployment scenario; you cannot control which users will go to the new EC2 instances. For more information on Blue/Green deployments, please refer to the below document link from AWS: https://d0.awsstatic.com/whitepapers/AWS_Blue_Green_Deployments.pdf
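Steps 2 and 3 above translate into weighted record sets. A change batch for `aws route53 change-resource-record-sets` implementing the 80-20 split might look like this sketch (the domain and ELB DNS names are hypothetical):

```json
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "www.example.com",
        "Type": "CNAME",
        "SetIdentifier": "blue-elb",
        "Weight": 80,
        "TTL": 60,
        "ResourceRecords": [
          { "Value": "blue-elb-1234.us-east-1.elb.amazonaws.com" }
        ]
      }
    },
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "www.example.com",
        "Type": "CNAME",
        "SetIdentifier": "green-elb",
        "Weight": 20,
        "TTL": 60,
        "ResourceRecords": [
          { "Value": "green-elb-5678.us-east-1.elb.amazonaws.com" }
        ]
      }
    }
  ]
}
```

Once the green environment is validated, re-submitting the batch with weights 0 and 100 shifts all traffic to the new load balancer.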

Management has reported an increase in the monthly bill from Amazon Web Services, and they are extremely concerned with this increased cost. Management has asked you to determine the exact cause of this increase. After reviewing the billing report, you notice an increase in the data transfer cost. How can you provide management with a better insight into data transfer use? A. Update your Amazon CloudWatch metrics to use five-second granularity, which will give better detailed metrics that can be combined with your billing data to pinpoint anomalies. B. Use Amazon CloudWatch Logs to run a map-reduce on your logs to determine high usage and data transfer. C. Deliver custom metrics to Amazon CloudWatch per application that breaks down application data transfer into multiple, more specific data points. D. Using Amazon CloudWatch metrics, pull your Elastic Load Balancing outbound data transfer metrics monthly, and include them with your billing report to show which application is causing higher bandwidth usage.

C. Deliver custom metrics to Amazon CloudWatch per application that breaks down application data transfer into multiple, more specific data points. Answer - C You can publish your own metrics to CloudWatch using the AWS CLI or an API. You can view statistical graphs of your published metrics with the AWS Management Console. CloudWatch stores data about a metric as a series of data points. Each data point has an associated time stamp. You can even publish an aggregated set of data points called a statistic set. If you have custom metrics specific to your application, you can give management a breakdown of the exact issue. Option A won't be sufficient to provide better insights. Option B is an overhead when you can make the application publish custom metrics. Option D is invalid because the ELB metrics alone will not give the entire picture. For more information on custom metrics, please refer to the below document link from AWS: http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/publishingMetrics.html
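To make the custom-metric idea concrete, here is a hedged Python sketch that builds one data point in the shape CloudWatch's PutMetricData API expects (for example, an entry in the MetricData list passed to boto3's put_metric_data, which is deliberately not called here); the metric name, dimensions, and values are hypothetical:

```python
def data_transfer_metric(app_name: str, endpoint: str, bytes_sent: int) -> dict:
    """Build one custom CloudWatch data point attributing data transfer
    to a specific application and endpoint.

    The returned dict follows the PutMetricData MetricData shape;
    names and dimensions here are hypothetical examples.
    """
    return {
        "MetricName": "OutboundBytes",
        "Dimensions": [
            {"Name": "Application", "Value": app_name},
            {"Name": "Endpoint", "Value": endpoint},
        ],
        "Unit": "Bytes",
        "Value": float(bytes_sent),
    }

point = data_transfer_metric("photo-share", "/images/upload", 524288)
print(point["MetricName"])  # OutboundBytes
```

Publishing one such point per application (and per endpoint, if useful) lets management graph exactly where the transfer cost originates.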

You work for a company that has multiple applications which are very different and built on different programming languages. How can you deploy these applications as quickly as possible? A. Develop each app in one Docker container and deploy using Elastic Beanstalk B. Create a Lambda function deployment package consisting of code and any dependencies C. Develop each app in a separate Docker container and deploy using Elastic Beanstalk D. Develop each app in a separate Docker container and deploy using CloudFormation

C. Develop each app in a separate Docker container and deploy using Elastic Beanstalk Answer - C Elastic Beanstalk supports the deployment of web applications from Docker containers. With Docker containers, you can define your own runtime environment. You can choose your own platform, programming language, and any application dependencies (such as package managers or tools) that aren't supported by other platforms. Docker containers are self-contained and include all the configuration information and software your web application requires to run. Option A is not an efficient way to use Docker; the entire idea of Docker is that you have a separate environment for each application. Option B is invalid because Lambda is ideally used for running code, not for packaging whole applications and their dependencies. Option D is invalid because deploying Docker containers via CloudFormation alone is not ideal here. For more information on Docker and Elastic Beanstalk, please visit the below URL: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.html

You currently run your infrastructure on Amazon EC2 instances behind an Auto Scaling group. All logs for your application are currently written to ephemeral storage. Recently your company experienced a major bug in code that made it through testing and was ultimately deployed to your fleet. This bug triggered your Auto Scaling group to scale up and back down before you could successfully retrieve the logs off your server to better assist you in troubleshooting the bug. Which technique should you use to make sure you are able to review your logs after your instances have shut down? A. Configure the ephemeral policies on your Auto Scaling group to back up on terminate. B. Configure your Auto Scaling policies to create a snapshot of all ephemeral storage on terminate. C. Install the CloudWatch Logs Agent on your AMI, and configure the CloudWatch Logs Agent to stream your logs. D. Install the CloudWatch monitoring agent on your AMI, and set up a new SNS alert for CloudWatch metrics that triggers the CloudWatch monitoring agent to back up all logs on the ephemeral drive.

C. Install the CloudWatch Logs Agent on your AMI, and configure the CloudWatch Logs Agent to stream your logs. Answer - C You can use CloudWatch Logs to monitor applications and systems using log data. For example, CloudWatch Logs can track the number of errors that occur in your application logs and send you a notification whenever the rate of errors exceeds a threshold you specify. CloudWatch Logs uses your log data for monitoring, so no code changes are required. Options A and B are invalid because Auto Scaling policies are not designed for these purposes. Option D is invalid because you should use the CloudWatch Logs agent, not the monitoring agent. For more information on CloudWatch Logs, please refer to the below link: http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html
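The agent is driven by a configuration file. A minimal sketch for the classic CloudWatch Logs agent follows; the log path, group name, and timestamp format are hypothetical:

```ini
# /etc/awslogs/awslogs.conf -- classic CloudWatch Logs agent config (sketch)
[general]
state_file = /var/lib/awslogs/agent-state

[/var/log/webapp/app.log]
file = /var/log/webapp/app.log
log_group_name = webapp-production
log_stream_name = {instance_id}
datetime_format = %Y-%m-%d %H:%M:%S
```

Because the logs are streamed off the instance as they are written, they survive in CloudWatch Logs even after the Auto Scaling group terminates the instance and its ephemeral storage.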

Your development team wants account-level access to production instances in order to do live debugging of a highly secure environment. Which of the following should you do? A. Place the credentials provided by Amazon Elastic Compute Cloud (EC2) into a secure Amazon Simple Storage Service (S3) bucket with encryption enabled. Assign AWS Identity and Access Management (IAM) users to each developer so they can download the credentials file. B. Place an internally created private key into a secure S3 bucket with server-side encryption using customer keys and configuration management, create a service account on all the instances using this private key, and assign IAM users to each developer so they can download the file. C. Place each developer's own public key into a private S3 bucket, use instance profiles and configuration management to create a user account for each developer on all instances, and place the user's public keys into the appropriate account. D. Place the credentials provided by Amazon EC2 onto an MFA encrypted USB drive, and physically share it with each developer so that the private key never leaves the office.

C. Place each developer's own public key into a private S3 bucket, use instance profiles and configuration management to create a user account for each developer on all instances, and place the user's public keys into the appropriate account. Answer - C An instance profile is a container for an IAM role that you can use to pass role information to an EC2 instance when the instance starts. The developers' public keys can be stored in a private S3 bucket, and the instances can retrieve them via the instance profile to provision a user account for each developer. Options A and D are invalid because the credentials provided by Amazon EC2 should not be distributed to developers. Option B is invalid because you should not create a shared service account with a single private key; instead, use an instance profile and per-developer accounts. For more information on instance profiles, please refer to the below document link from AWS: http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html

Your current log analysis application takes more than four hours to generate a report of the top 10 users of your web application. You have been asked to implement a system that can report this information in real time, ensure that the report is always up to date, and handle increases in the number of requests to your web application. Choose the option that is cost-effective and can fulfill the requirements. A. Publish your data to CloudWatch Logs, and configure your application to autoscale to handle the load on demand. B. Publish your log data to an Amazon S3 bucket. Use AWS CloudFormation to create an Auto Scaling group to scale your post-processing application, which is configured to pull down your log files stored in Amazon S3. C. Post your log data to an Amazon Kinesis data stream, and subscribe your log-processing application so that it is configured to process your logging data. D. Create a multi-AZ Amazon RDS MySQL cluster, post the logging data to MySQL, and run a map reduce job to retrieve the required information on user counts.

C. Post your log data to an Amazon Kinesis data stream, and subscribe your log-processing application so that it is configured to process your logging data. Answer - C Amazon Kinesis is the ideal option for processing data in real time. Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. Amazon Kinesis offers key capabilities to cost-effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application. With Amazon Kinesis, you can ingest real-time data such as application logs, website clickstreams, and IoT telemetry data into your databases, data lakes, and data warehouses, or build your own real-time applications using this data. For more information on Amazon Kinesis, please visit the below URL: https://aws.amazon.com/kinesis
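A minimal producer-side sketch, assuming a stream name and a simple JSON record shape (both illustrative): each web-server log line is pushed to the stream via `PutRecord`, partitioned by user so one user's events land on one shard.

```python
import json

def build_put_record(stream_name, log_line, user_id):
    """Build the kinesis.put_record request for a single log event.
    The subscribed consumer reads these records to keep the
    top-10-users report current."""
    return {
        "StreamName": stream_name,
        "Data": json.dumps({"user": user_id, "line": log_line}).encode("utf-8"),
        # Partitioning by user keeps a given user's events ordered on one shard.
        "PartitionKey": user_id,
    }

def put_log_record(stream_name, log_line, user_id):
    """Sketch only -- requires AWS credentials to actually run."""
    import boto3  # imported lazily so build_put_record stays testable offline
    kinesis = boto3.client("kinesis")
    return kinesis.put_record(**build_put_record(stream_name, log_line, user_id))
```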

After a daily scrum with your development teams, you've agreed that using Blue/Green style deployments would benefit the team. Which technique should you use to deliver this new requirement? A. Re-deploy your application on AWS Elastic Beanstalk, and take advantage of Elastic Beanstalk deployment types. B. Using an AWS CloudFormation template, re-deploy your application behind a load balancer, launch a new AWS CloudFormation stack during each deployment, update your load balancer to send half your traffic to the new stack while you test, after verification update the load balancer to send 100% of traffic to the new stack, and then terminate the old stack. C. Re-deploy your application behind a load balancer that uses Auto Scaling groups, create a new identical Auto Scaling group, and associate it to the load balancer. During deployment, set the desired number of instances on the old Auto Scaling group to zero, and when all instances have terminated, delete the old Auto Scaling group. D. Using an AWS OpsWorks stack, re-deploy your application behind an Elastic Load Balancing load balancer and take advantage of OpsWorks stack versioning, during deployment create a new version of your application, tell OpsWorks to launch the new version behind your load balancer, and when the new version is launched, terminate the old OpsWorks stack.

C. Re-deploy your application behind a load balancer that uses Auto Scaling groups, create a new identical Auto Scaling group, and associate it to the load balancer. During deployment, set the desired number of instances on the old Auto Scaling group to zero, and when all instances have terminated, delete the old Auto Scaling group. Answer - C This is given as a practice in the AWS Blue/Green deployment whitepaper. A blue group carries the production load while a green group is staged and deployed with the new code. When it's time to deploy, you simply attach the green group to the existing load balancer to introduce traffic to the new environment. For HTTP/HTTPS listeners, the load balancer favors the green Auto Scaling group because it uses a least outstanding requests routing algorithm. As you scale up the green Auto Scaling group, you can take blue Auto Scaling group instances out of service by either terminating them or putting them in Standby state. For more information on Blue/Green deployments, please refer to the AWS whitepaper: https://d0.awsstatic.com/whitepapers/AWS_Blue_Green_Deployments.pdf
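The swap described above can be sketched as an ordered list of Auto Scaling API calls (group and load balancer names are illustrative assumptions); a deployment script would issue these via boto3:

```python
def blue_green_steps(elb_name, blue_asg, green_asg):
    """Return (api_call, parameters) pairs, in order, that implement
    the ASG swap from option C: attach the green group, drain the blue
    group to zero, then delete it once its instances have terminated."""
    return [
        ("autoscaling.attach_load_balancers",
         {"AutoScalingGroupName": green_asg,
          "LoadBalancerNames": [elb_name]}),
        ("autoscaling.update_auto_scaling_group",
         {"AutoScalingGroupName": blue_asg,
          "MinSize": 0, "DesiredCapacity": 0}),
        # Only safe after all blue instances have terminated:
        ("autoscaling.delete_auto_scaling_group",
         {"AutoScalingGroupName": blue_asg}),
    ]
```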

You are responsible for your company's large multi-tiered Windows-based web application running on Amazon EC2 instances situated behind a load balancer. While reviewing metrics, you've started noticing an upwards trend for slow customer page load time. Your manager has asked you to come up with a solution to ensure that customer load time is not affected by too many requests per second. Which technique would you use to solve this issue? A. Re-deploy your infrastructure using an AWS CloudFormation template. Configure Elastic Load Balancing health checks to initiate a new AWS CloudFormation stack when health checks return failed. B. Re-deploy your infrastructure using an AWS CloudFormation template. Spin up a second AWS CloudFormation stack. Configure Elastic Load Balancing SpillOver functionality to spill over any slow connections to the second AWS CloudFormation stack. C. Re-deploy your infrastructure using AWS CloudFormation, Elastic Beanstalk, and Auto Scaling. Set up your Auto Scaling group policies to scale based on the number of requests per second as well as the current customer load time. D. Re-deploy your application using an Auto Scaling template. Configure the Auto Scaling template to spin up a new Elastic Beanstalk application when the customer load time surpasses your threshold.

C. Re-deploy your infrastructure using AWS CloudFormation, Elastic Beanstalk, and Auto Scaling. Set up your Auto Scaling group policies to scale based on the number of requests per second as well as the current customer load time. Answer - C Auto Scaling helps you ensure that you have the correct number of Amazon EC2 instances available to handle the load for your application. You create collections of EC2 instances, called Auto Scaling groups. You can specify the minimum number of instances in each Auto Scaling group, and Auto Scaling ensures that your group never goes below this size. You can specify the maximum number of instances in each Auto Scaling group, and Auto Scaling ensures that your group never goes above this size. If you specify the desired capacity, either when you create the group or at any time thereafter, Auto Scaling ensures that your group has this many instances. If you specify scaling policies, then Auto Scaling can launch or terminate instances as demand on your application increases or decreases. Options A and B are invalid because Auto Scaling is required to ensure the application can handle high traffic loads; Elastic Load Balancing offers no such stack-launching or spillover capability. Option D is invalid because there is no Auto Scaling template. For more information on Auto Scaling, please refer to the AWS documentation: http://docs.aws.amazon.com/autoscaling/latest/userguide/WhatIsAutoScaling.html
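As a hedged sketch of scaling on requests per second: with an Application Load Balancer (an assumption; the question only says "load balancer"), a target tracking policy on `ALBRequestCountPerTarget` keeps per-instance request load near a chosen value. The parameters below would be passed to `autoscaling.put_scaling_policy`; the target value and names are illustrative.

```python
def request_count_policy(asg_name, resource_label, requests_per_target=1000.0):
    """Parameters for autoscaling.put_scaling_policy implementing a
    target tracking policy on requests per target (ALB assumed)."""
    return {
        "AutoScalingGroupName": asg_name,
        "PolicyName": "scale-on-request-count",
        "PolicyType": "TargetTrackingScaling",
        "TargetTrackingConfiguration": {
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ALBRequestCountPerTarget",
                # resource_label: "app/<lb-name>/<lb-id>/targetgroup/<tg-name>/<tg-id>"
                "ResourceLabel": resource_label,
            },
            "TargetValue": requests_per_target,
        },
    }
```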

Your company releases new features with high frequency while demanding high application availability. As part of the application's A/B testing, logs from each updated Amazon EC2 instance of the application need to be analyzed in near real-time, to ensure that the application is working flawlessly after each deployment. If the logs show any anomalous behavior, then the application version of the instance is changed to a more stable one. Which of the following methods should you use for shipping and analyzing the logs in a highly available manner? A. Ship the logs to Amazon S3 for durability and use Amazon EMR to analyze the logs in a batch manner each hour. B. Ship the logs to Amazon CloudWatch Logs and use Amazon EMR to analyze the logs in a batch manner each hour. C. Ship the logs to an Amazon Kinesis stream and have the consumers analyze the logs in a live manner. D. Ship the logs to a large Amazon EC2 instance and analyze the logs in a live manner.

C. Ship the logs to an Amazon Kinesis stream and have the consumers analyze the logs in a live manner. Answer - C You can use Kinesis Streams for rapid and continuous data intake and aggregation. The type of data used includes IT infrastructure log data, application logs, social media, market data feeds, and web clickstream data. Because the response time for the data intake and processing is in real time, the processing is typically lightweight. The following are typical scenarios for using Kinesis Streams: Accelerated log and data feed intake and processing - You can have producers push data directly into a stream. For example, push system and application logs and they'll be available for processing in seconds. This prevents the log data from being lost if the front end or application server fails. Kinesis Streams provides accelerated data feed intake because you don't batch the data on the servers before you submit it for intake. Real-time metrics and reporting - You can use data collected into Kinesis Streams for simple data analysis and reporting in real time. For example, your data-processing application can work on metrics and reporting for system and application logs as the data is streaming in, rather than wait to receive batches of data. For more information on Amazon Kinesis Streams, please refer to the below link: http://docs.aws.amazon.com/streams/latest/dev/introduction.html
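A consumer-side sketch, under the assumption that each record is a small JSON document with a `user` field: the pure `top_users` helper computes the live leaderboard, while `read_batch` shows the Kinesis polling calls (sketch only; it needs AWS credentials and a real stream).

```python
import json
from collections import Counter

def top_users(records, n=10):
    """Given decoded records (dicts with a 'user' key), return the n
    most active users -- the real-time report the answer describes."""
    counts = Counter(r["user"] for r in records)
    return counts.most_common(n)

def read_batch(stream_name, shard_id):
    """Fetch one batch of records from a shard. Sketch only."""
    import boto3  # lazy import keeps top_users testable offline
    kinesis = boto3.client("kinesis")
    it = kinesis.get_shard_iterator(
        StreamName=stream_name, ShardId=shard_id,
        ShardIteratorType="LATEST")["ShardIterator"]
    out = kinesis.get_records(ShardIterator=it, Limit=100)
    return [json.loads(rec["Data"]) for rec in out["Records"]]
```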

You have been given a business requirement to retain log files for your application for 10 years. You need to regularly retrieve the most recent logs for troubleshooting. Your logging system must be cost-effective, given the large volume of logs. What technique should you use to meet these requirements? A. Store your logs in Amazon CloudWatch Logs. B. Store your logs in Amazon Glacier. C. Store your logs in Amazon S3, and use lifecycle policies to archive to Amazon Glacier. D. Store your logs on Amazon EBS, and use Amazon EBS snapshots to archive them.

C. Store your logs in Amazon S3, and use lifecycle policies to archive to Amazon Glacier. Answer - C Option A is invalid because CloudWatch Logs is not a cost-effective choice for retaining this volume of logs for 10 years. Option B is invalid because it won't serve the purpose of regularly retrieving the most recent logs for troubleshooting; retrievals from Glacier are slower and you pay more to retrieve logs quickly from that storage. Option D is invalid because it is neither an ideal nor a cost-effective option. You can define lifecycle configuration rules for objects that have a well-defined lifecycle. For example: If you are uploading periodic logs to your bucket, your application might need these logs for a week or a month after creation, and after that you might want to delete them. Some documents are frequently accessed for a limited period of time. After that, these documents are less frequently accessed. Over time, you might not need real-time access to these objects, but your organization or regulations might require you to archive them for a longer period and then optionally delete them later. You might also upload some types of data to Amazon S3 primarily for archival purposes, for example digital media archives, financial and healthcare records, raw genomics sequence data, long-term database backups, and data that must be retained for regulatory compliance. For more information on lifecycle management, please refer to the below link: http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html
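A sketch of the rule from option C as parameters for `s3.put_bucket_lifecycle_configuration` (the prefix and the 30-day transition are illustrative assumptions): recent logs stay in S3 for fast troubleshooting, older logs move to Glacier, and everything expires after the 10-year retention period.

```python
def log_lifecycle_rule(days_to_glacier=30, years_to_keep=10):
    """LifecycleConfiguration for s3.put_bucket_lifecycle_configuration:
    transition logs to Glacier after an initial hot period, then
    expire them once the retention requirement is met."""
    return {
        "Rules": [{
            "ID": "archive-then-expire-logs",
            "Filter": {"Prefix": "logs/"},   # assumed object prefix
            "Status": "Enabled",
            "Transitions": [{"Days": days_to_glacier,
                             "StorageClass": "GLACIER"}],
            "Expiration": {"Days": years_to_keep * 365},
        }]
    }
```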

You have an Auto Scaling group with an Elastic Load Balancer. You decide to suspend the Auto Scaling AddToLoadBalancer process for a short period of time. What will happen to the instances launched during the suspension period? A. The instances will be registered with ELB once the process has resumed B. Auto Scaling will not launch the instances during this period because of the suspension C. The instances will not be registered with ELB. You must manually register when the process is resumed D. It is not possible to suspend the AddToLoadBalancer process

C. The instances will not be registered with ELB. You must manually register when the process is resumed Answer - C If you suspend AddToLoadBalancer, Auto Scaling launches the instances but does not add them to the load balancer or target group. If you resume the AddToLoadBalancer process, Auto Scaling resumes adding instances to the load balancer or target group when they are launched. However, Auto Scaling does not add the instances that were launched while this process was suspended. You must register those instances manually. For more information on the Suspension and Resumption process, please visit the below URL: http://docs.aws.amazon.com/autoscaling/latest/userguide/as-suspend-resume-processes.html
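The sequence above can be sketched as three API calls (group, load balancer, and instance IDs are illustrative): suspend the process, resume it later, and then explicitly register the instances that launched in between, since Auto Scaling will not do so retroactively.

```python
def suspension_plan(asg_name, elb_name, launched_while_suspended):
    """Return (api_call, parameters) pairs reproducing the behaviour
    described in the answer, for a Classic ELB."""
    return [
        ("autoscaling.suspend_processes",
         {"AutoScalingGroupName": asg_name,
          "ScalingProcesses": ["AddToLoadBalancer"]}),
        ("autoscaling.resume_processes",
         {"AutoScalingGroupName": asg_name,
          "ScalingProcesses": ["AddToLoadBalancer"]}),
        # Instances launched during the suspension must be registered manually:
        ("elb.register_instances_with_load_balancer",
         {"LoadBalancerName": elb_name,
          "Instances": [{"InstanceId": i} for i in launched_while_suspended]}),
    ]
```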

You have a CloudFormation template defined in AWS. You need to change the current alarm threshold defined in the CloudWatch alarm. How can you achieve this? A. Currently there is no option to change what is already defined in CloudFormation templates. B. Update the template and then update the stack with the new template. Automatically all resources will be changed in the stack. C. Update the template and then update the stack with the new template. Only those resources that need to be changed will be changed. All other resources which do not need to be changed will remain as they are. D. Delete the current CloudFormation template. Create a new one which will update the current resources.

C. Update the template and then update the stack with the new template. Only those resources that need to be changed will be changed. All other resources which do not need to be changed will remain as they are. Answer - C Option A is incorrect because CloudFormation templates have the option to update resources. Option B is incorrect because only those resources that need to be changed as part of the stack update are actually updated. Option D is incorrect because deleting the stack is not the ideal option when you already have a change option available. When you need to make changes to a stack's settings or change its resources, you update the stack instead of deleting it and creating a new stack. For example, if you have a stack with an EC2 instance, you can update the stack to change the instance's AMI ID. When you update a stack, you submit changes, such as new input parameter values or an updated template. AWS CloudFormation compares the changes you submit with the current state of your stack and updates only the changed resources. For more information on stack updates, please refer to the below link: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks.html
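A minimal sketch of the update step (stack name is an assumption): submit the edited template and let CloudFormation diff it against the stack's current state, modifying only the alarm.

```python
def update_stack_params(stack_name, template_body):
    """Parameters for cloudformation.update_stack. CloudFormation
    compares the submitted template with the current stack state and
    touches only the changed resources (here, the alarm threshold)."""
    return {"StackName": stack_name, "TemplateBody": template_body}

def apply_update(stack_name, template_body):
    """Sketch only -- requires AWS credentials and an existing stack."""
    import boto3  # lazy import keeps update_stack_params testable offline
    cfn = boto3.client("cloudformation")
    return cfn.update_stack(**update_stack_params(stack_name, template_body))
```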

You currently have an Auto Scaling group with an Elastic Load Balancer and need to phase out all instances and replace with a new instance type. What are two ways in which this can be achieved? A. Use NewestInstance to phase out all instances that use the previous configuration. B. Attach an additional ELB to your Auto Scaling configuration and phase in newer instances while removing older instances. C. Use OldestLaunchConfiguration to phase out all instances that use the previous configuration. D. Attach an additional Auto Scaling configuration behind the ELB and phase in newer instances while removing older instances.

C. Use OldestLaunchConfiguration to phase out all instances that use the previous configuration. D. Attach an additional Auto Scaling configuration behind the ELB and phase in newer instances while removing older instances. Answer - C and D When using the OldestLaunchConfiguration policy, Auto Scaling terminates instances that have the oldest launch configuration. This policy is useful when you're updating a group and phasing out the instances from a previous configuration. For more information on Auto Scaling instance termination, please visit the below URL: http://docs.aws.amazon.com/autoscaling/latest/userguide/as-instance-termination.html Option D is an example of a Blue/Green deployment. A blue group carries the production load while a green group is staged and deployed with the new code. When it's time to deploy, you simply attach the green group to the existing load balancer to introduce traffic to the new environment. For HTTP/HTTPS listeners, the load balancer favors the green Auto Scaling group because it uses a least outstanding requests routing algorithm. As you scale up the green Auto Scaling group, you can take blue Auto Scaling group instances out of service by either terminating them or putting them in Standby state. For more information on Blue/Green deployments, please refer to the AWS whitepaper: https://d0.awsstatic.com/whitepapers/AWS_Blue_Green_Deployments.pdf
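Option C amounts to one configuration change on the group; a sketch of the parameters for `autoscaling.update_auto_scaling_group` (the group name is illustrative):

```python
def phase_out_config(asg_name):
    """update_auto_scaling_group parameters that make Auto Scaling retire
    instances from the previous launch configuration first, so scale-in
    events gradually replace the old instance type."""
    return {
        "AutoScalingGroupName": asg_name,
        "TerminationPolicies": ["OldestLaunchConfiguration"],
    }
```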

You are using Elastic Beanstalk to manage your application. You have a SQL script that needs to only be executed once per deployment no matter how many EC2 instances you have running. How can you do this? A. Use a "Container command" within an Elastic Beanstalk configuration file to execute the script, ensuring that the "leader only" flag is set to false. B. Use Elastic Beanstalk version and a configuration file to execute the script, ensuring that the "leader only" flag is set to true. C. Use a "Container command" within an Elastic Beanstalk configuration file to execute the script, ensuring that the "leader only" flag is set to true. D. Use a "leader command" within an Elastic Beanstalk configuration file to execute the script, ensuring that the "container only" flag is set to true.

C. Use a "Container command" within an Elastic Beanstalk configuration file to execute the script, ensuring that the "leader only" flag is set to true. Answer - C You can use the container_commands key to execute commands that affect your application source code. Container commands run after the application and web server have been set up and the application version archive has been extracted, but before the application version is deployed. Non-container commands and other customization operations are performed prior to the application source code being extracted. You can use leader_only to only run the command on a single instance, or configure a test to only run the command when a test command evaluates to true. Leader-only container commands are only executed during environment creation and deployments, while other commands and server customization operations are performed every time an instance is provisioned or updated. Leader-only container commands are not executed due to launch configuration changes, such as a change in the AMI Id or instance type. For more information on customizing containers, please visit the below URL: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
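A sketch of such a configuration file (the file name, command name, and script path are illustrative assumptions): with `leader_only: true`, the SQL script runs on a single instance per deployment.

```yaml
# .ebextensions/01-run-sql.config  (file name illustrative)
container_commands:
  01_run_sql_script:
    command: "psql -f /tmp/schema.sql"   # assumed script and tool
    leader_only: true                    # run on one instance only
```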

You are using CloudFormation to launch an EC2 instance and then configure an application after the instance is launched. You need the stack creation of the ELB and Auto Scaling to wait until the EC2 instance is launched and configured properly. How do you do this? A. It is not possible for the stack creation to wait until one service is created and launched B. Use the WaitCondition resource to hold the creation of the other dependent resources C. Use a CreationPolicy to wait for the creation of the other dependent resources D. Use the HoldCondition resource to hold the creation of the other dependent resources

C. Use a CreationPolicy to wait for the creation of the other dependent resources Answer - C When you provision an Amazon EC2 instance in an AWS CloudFormation stack, you might specify additional actions to configure the instance, such as installing software packages or bootstrapping applications. Normally, CloudFormation proceeds with stack creation after the instance has been successfully created. However, you can use a CreationPolicy so that CloudFormation proceeds with stack creation only after your configuration actions are done. That way you'll know your applications are ready to go after stack creation succeeds. A CreationPolicy instructs CloudFormation to wait on an instance until CloudFormation receives the specified number of signals. Option A is invalid because this is possible. Option B is invalid because a WaitCondition is used to make AWS CloudFormation pause the creation of a stack and wait for a signal before it continues to create the stack. For more information on this, please visit the below URL: https://aws.amazon.com/blogs/devops/use-a-creationpolicy-to-wait-for-on-instance-configurations/
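A minimal template sketch of the CreationPolicy pattern (resource names, timeout, and the signalling command are illustrative): the instance holds stack creation until it signals success, and dependent resources wait on it via DependsOn.

```yaml
WebServer:
  Type: AWS::EC2::Instance
  CreationPolicy:
    ResourceSignal:
      Count: 1
      Timeout: PT15M        # wait up to 15 minutes for the signal
  # Properties omitted; the instance's user data runs the configuration
  # steps and finishes with something like:
  #   /opt/aws/bin/cfn-signal -e $? --stack <stack-name> --resource WebServer
LoadBalancer:
  Type: AWS::ElasticLoadBalancing::LoadBalancer
  DependsOn: WebServer      # created only after the instance signals success
```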

After reviewing the last quarter's monthly bills, management has noticed an increase in the overall bill from Amazon. After researching this increase in cost, you discovered that one of your new services is doing a lot of GET Bucket API calls to Amazon S3 to build a metadata cache of all objects in the application's bucket. Your boss has asked you to come up with a new cost-effective way to help reduce the amount of these new GET Bucket API calls. What process should you use to help mitigate the cost? A. Update your Amazon S3 buckets' lifecycle policies to automatically push a list of objects to a new bucket, and use this list to view objects associated with the application's bucket. B. Create a new DynamoDB table. Use the new DynamoDB table to store all metadata about all objects uploaded to Amazon S3. Any time a new object is uploaded, update the application's internal Amazon S3 object metadata cache from DynamoDB. C. Using Amazon SNS, create a notification on any new Amazon S3 objects that automatically updates a new DynamoDB table to store all metadata about the new object. Subscribe the application to the Amazon SNS topic to update its internal Amazon S3 object metadata cache from the DynamoDB table. D. Upload all files to an ElastiCache file cache server. Update your application to now read all file metadata from the ElastiCache file cache server, and configure the ElastiCache policies to push all files to Amazon S3 for long-term storage.

C. Using Amazon SNS, create a notification on any new Amazon S3 objects that automatically updates a new DynamoDB table to store all metadata about the new object. Subscribe the application to the Amazon SNS topic to update its internal Amazon S3 object metadata cache from the DynamoDB table. Answer - C Option A is invalid because lifecycle policies are normally used for expiration or archival of objects, not for producing object listings. Option B is partially correct in that the metadata is stored in DynamoDB, but without a notification mechanism the application would still need to poll the bucket to discover new objects, so the GET Bucket calls would remain high. Option D is invalid because uploading all files to ElastiCache is not an ideal solution. The best option is to have a notification which can then trigger an update to the DynamoDB table and the application's metadata cache accordingly. For more information on SNS triggers and DynamoDB, please refer to the below link: https://aws.amazon.com/blogs/compute/619/
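A sketch of the notification handler's core step, assuming the standard S3 event record shape delivered through SNS and an illustrative table name: each new-object event becomes one `dynamodb.put_item` call, so the cache is updated without any GET Bucket listing.

```python
def metadata_item(s3_event_record):
    """Turn one S3 event record into put_item parameters for the
    metadata table (table name and attribute layout are assumptions)."""
    bucket = s3_event_record["s3"]["bucket"]["name"]
    obj = s3_event_record["s3"]["object"]
    return {
        "TableName": "s3-object-metadata",  # assumed table name
        "Item": {
            "Key": {"S": obj["key"]},
            "Bucket": {"S": bucket},
            "Size": {"N": str(obj.get("size", 0))},
        },
    }

def store_metadata(s3_event_record):
    """Sketch only -- requires AWS credentials and the table to exist."""
    import boto3  # lazy import keeps metadata_item testable offline
    boto3.client("dynamodb").put_item(**metadata_item(s3_event_record))
```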

You have an application running on an Amazon EC2 instance and you are using IAM roles to securely access AWS Service APIs. How can you configure your application running on that instance to retrieve the API keys for use with the AWS SDKs? A. When assigning an EC2 IAM role to your instance in the console, in the "Chosen SDK" drop-down list, select the SDK that you are using, and the instance will configure the correct SDK on launch with the API keys. B. Within your application code, make a GET request to the IAM Service API to retrieve credentials for your user. C. When using AWS SDKs and Amazon EC2 roles, you do not have to explicitly retrieve API keys, because the SDK handles retrieving them from the Amazon EC2 MetaData service. D. Within your application code, configure the AWS SDK to get the API keys from environment variables, because assigning an Amazon EC2 role stores keys in environment variables on launch.

C. When using AWS SDKs and Amazon EC2 roles, you do not have to explicitly retrieve API keys, because the SDK handles retrieving them from the Amazon EC2 MetaData service. Answer - C IAM roles are designed so that your applications can securely make API requests from your instances, without requiring you to manage the security credentials that the applications use. Instead of creating and distributing your AWS credentials, you can delegate permission to make API requests using IAM roles For more information on Roles for EC2 please refer to the below link: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html

<p>You have just been assigned to take care of the automated resources which have been set up by your company in AWS. You are looking at integrating some of the company's Chef recipes to be used for the existing OpsWorks stacks already set up in AWS. But when you go to the recipes section, you cannot see the option to add any recipes. What could be the reason for this?</p> A. <p>Once you create a stack, you cannot assign custom recipes; this needs to be done when the stack is created.</p> B. <p>Once you create layers in the stack, you cannot assign custom recipes; this needs to be done when the layers are created.</p> C. <p>The stack layers were created without the custom cookbooks option. Just change the layer settings accordingly.</p> D. <p>The stacks were created without the custom cookbooks option. Just change the stack settings accordingly.</p>

D. <p>The stacks were created without the custom cookbooks option. Just change the stack settings accordingly.</p> Answer - D The AWS documentation mentions the following: To have a stack install and use custom cookbooks, you must configure the stack to enable custom cookbooks, if it is not already configured. You must then provide the repository URL and any related information such as a password. For more information on custom cookbooks for OpsWorks, please visit the below URL: http://docs.aws.amazon.com/opsworks/latest/userguide/workingcookbook-installingcustom-enable.html

You have just recently deployed an application on EC2 instances behind an ELB. After a couple of weeks, customers are complaining about receiving errors from the application. You want to diagnose the errors and are trying to get them from the ELB access logs. But the ELB access logs are empty. What is the reason for this? A. You do not have the appropriate permissions to access the logs B. You do not have your CloudWatch metrics correctly configured C. ELB access logs are only available for a maximum of one week. D. Access logging is an optional feature of Elastic Load Balancing that is disabled by default

D. Access logging is an optional feature of Elastic Load Balancing that is disabled by default Answer - D Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer. Each log contains information such as the time the request was received, the client's IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and to troubleshoot issues. Access logging is an optional feature of Elastic Load Balancing that is disabled by default. After you enable access logging for your load balancer, Elastic Load Balancing captures the logs and stores them in the Amazon S3 bucket that you specify. You can disable access logging at any time. For more information on ELB access logs, please refer to the AWS documentation: http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/access-log-collection.html
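Enabling the logs is one attribute change on the load balancer; a sketch of the `AccessLog` attributes for `elb.modify_load_balancer_attributes` on a Classic ELB (bucket name and prefix are assumptions):

```python
def access_log_attributes(bucket, prefix="elb-logs"):
    """LoadBalancerAttributes that turn on the (default-off) access
    logging described above and ship logs to the given S3 bucket."""
    return {
        "AccessLog": {
            "Enabled": True,
            "S3BucketName": bucket,
            "S3BucketPrefix": prefix,
            "EmitInterval": 5,  # minutes between log deliveries (5 or 60)
        }
    }
```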

You are using a configuration management system to manage your Amazon EC2 instances. On your Amazon EC2 Instances, you want to store credentials for connecting to an Amazon RDS DB instance. How should you securely store these credentials? A. Give the Amazon EC2 instances an IAM role that allows read access to a private Amazon S3 bucket. Store a file with database credentials in the Amazon S3 bucket. Have your configuration management system pull the file from the bucket when it is needed. B. Launch an Amazon EC2 instance and use the configuration management system to bootstrap the instance with the Amazon RDS DB credentials. Create an AMI from this instance. C. Store the Amazon RDS DB credentials in Amazon EC2 user data. Import the credentials into the Instance on boot. D. Assign an IAM role to your Amazon EC2 instance, and use this IAM role to access the Amazon RDS DB from your Amazon EC2 instances.

D. Assign an IAM role to your Amazon EC2 instance, and use this IAM role to access the Amazon RDS DB from your Amazon EC2 instances. Answer - D You can use roles to delegate access to users, applications, or services that don't normally have access to your AWS resources. For example, you might want to grant users in your AWS account access to resources they don't usually have, or grant users in one AWS account access to resources in another account. Or you might want to allow a mobile app to use AWS resources, but not want to embed AWS keys within the app (where they can be difficult to rotate and where users can potentially extract them). Sometimes you want to give AWS access to users who already have identities defined outside of AWS, such as in your corporate directory. Or, you might want to grant access to your account to third parties so that they can perform an audit on your resources. For more information on IAM roles, please refer to the AWS documentation: http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html

You have an Auto Scaling group with 2 AZs. One AZ has 4 EC2 instances and the other has 3 EC2 instances. None of the instances are protected from scale in. Based on the default Auto Scaling termination policy what will happen? A. Auto Scaling selects an instance to terminate randomly B. Auto Scaling will terminate unprotected instances in the Availability Zone with the oldest launch configuration. C. Auto Scaling terminates which unprotected instances are closest to the next billing hour. D. Auto Scaling will select the AZ with 4 EC2 instances and terminate an instance.

D. Auto Scaling will select the AZ with 4 EC2 instances and terminate an instance. Answer - D The default termination policy is designed to help ensure that your network architecture spans Availability Zones evenly. When using the default termination policy, Auto Scaling selects an instance to terminate as follows: Auto Scaling determines whether there are instances in multiple Availability Zones. If so, it selects the Availability Zone with the most instances and at least one instance that is not protected from scale in. If there is more than one Availability Zone with this number of instances, Auto Scaling selects the Availability Zone with the instances that use the oldest launch configuration. For more information on Autoscaling instance termination please refer to the below link: http://docs.aws.amazon.com/autoscaling/latest/userguide/as-instance-termination.html

You are hired as the new head of operations for a SaaS company. Your CTO has asked you to make debugging any part of your entire operation simpler and as fast as possible. She complains that she has no idea what is going on in the complex, service-oriented architecture, because the developers just log to disk, and it's very hard to find errors in logs on so many services. How can you best meet this requirement and satisfy your CTO? A. Copy all log files into AWS S3 using a cron job on each instance. Use an S3 Notification Configuration on the PutBucket event and publish events to AWS Lambda. Use the Lambda to analyze logs as soon as they come in and flag issues. B. Begin using CloudWatch Logs on every service. Stream all Log Groups into S3 objects. Use AWS EMR cluster jobs to perform ad hoc MapReduce analysis and write new queries when needed. C. Copy all log files into AWS S3 using a cron job on each instance. Use an S3 Notification Configuration on the PutBucket event and publish events to AWS Kinesis. Use Apache Spark on AWS EMR to perform at-scale stream processing queries on the log chunks and flag issues. D. Begin using CloudWatch Logs on every service. Stream all Log Groups into an AWS Elasticsearch Service Domain running Kibana 4 and perform log analysis on a search cluster.

D. Begin using CloudWatch Logs on every service. Stream all Log Groups into an AWS Elasticsearch Service Domain running Kibana 4 and perform log analysis on a search cluster. Answer - D Amazon Elasticsearch Service makes it easy to deploy, operate, and scale Elasticsearch for log analytics, full text search, application monitoring, and more. Amazon Elasticsearch Service is a fully managed service that delivers Elasticsearch's easy-to-use APIs and real-time capabilities along with the availability, scalability, and security required by production workloads. The service offers built-in integrations with Kibana, Logstash, and AWS services including Amazon Kinesis Firehose, AWS Lambda, and Amazon CloudWatch so that you can go from raw data to actionable insights quickly. For more information on Elasticsearch Service, please refer to the below link: https://aws.amazon.com/elasticsearch-service/
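The first step of the chosen answer is getting each service's disk logs into CloudWatch Logs, which is typically done with the CloudWatch Logs agent. A minimal sketch of its configuration file follows; the log file path and Log Group name are hypothetical placeholders:

```ini
# /etc/awslogs/awslogs.conf - ships a service's log file to its own Log Group
[general]
state_file = /var/lib/awslogs/agent-state

[/var/log/myservice/app.log]
file = /var/log/myservice/app.log   ; placeholder path for one service's log
log_group_name = /myservice/app     ; placeholder Log Group name
log_stream_name = {instance_id}     ; one stream per instance
datetime_format = %Y-%m-%d %H:%M:%S
```

With each service writing to its own Log Group, the groups can then be streamed into an Elasticsearch Service domain and explored in Kibana.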

Your company has multiple applications running on AWS. Your company wants to develop a tool that notifies on-call teams immediately via email when an alarm is triggered in your environment. You have multiple on-call teams that work different shifts, and the tool should handle notifying the correct teams at the correct times. How should you implement this solution? A. Create an Amazon SNS topic and an Amazon SQS queue. Configure the Amazon SQS queue as a subscriber to the Amazon SNS topic. Configure CloudWatch alarms to notify this topic when an alarm is triggered. Create an Amazon EC2 Auto Scaling group with both minimum and desired instances configured to 0. Worker nodes in this group spawn when messages are added to the queue. Workers then use Amazon Simple Email Service to send messages to your on-call teams. B. Create an Amazon SNS topic and configure your on-call team email addresses as subscribers. Use the AWS SDK tools to integrate your application with Amazon SNS and send messages to this new topic. Notifications will be sent to on-call users when a CloudWatch alarm is triggered. C. Create an Amazon SNS topic and configure your on-call team email addresses as subscribers. Create a secondary Amazon SNS topic for alarms and configure your CloudWatch alarms to notify this topic when triggered. Create an HTTP subscriber to this topic that notifies your application via HTTP POST when an alarm is triggered. Use the AWS SDK tools to integrate your application with Amazon SNS and send messages to the first topic so that on-call engineers receive alerts. D. Create an Amazon SNS topic for each on-call group, and configure each of these with the team member emails as subscribers. Create another Amazon SNS topic and configure your CloudWatch alarms to notify this topic when triggered. Create an HTTP subscriber to this topic that notifies your application via HTTP POST when an alarm is triggered. Use the AWS SDK tools to integrate your application with Amazon SNS and send messages to the correct team topic when on shift.

D. Create an Amazon SNS topic for each on-call group, and configure each of these with the team member emails as subscribers. Create another Amazon SNS topic and configure your CloudWatch alarms to notify this topic when triggered. Create an HTTP subscriber to this topic that notifies your application via HTTP POST when an alarm is triggered. Use the AWS SDK tools to integrate your application with Amazon SNS and send messages to the correct team topic when on shift. Answer - D Option D fulfils all the requirements: 1) Create an SNS topic for each team so that notifications reach the right email addresses. 2) Subscribe the application to the alarm topic via an HTTP endpoint, and have it use the SDK to publish to the correct team topic based on the shift schedule. Option A is invalid because the SQS queue and Auto Scaling worker group are not required. Options B and C are invalid because they use a single subscriber topic, which would send the notification to everyone rather than only the team on shift. For more information on setting up notifications from AWS, please refer to the below document link: http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/US_SetupSNS.html
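The topic-per-team pattern in Option D can be sketched in a CloudFormation template. The logical names, email addresses, endpoint URL, and alarm threshold below are hypothetical placeholders:

```yaml
Resources:
  # One topic per on-call team; each team's addresses subscribe here
  TeamATopic:
    Type: AWS::SNS::Topic
    Properties:
      Subscription:
        - Protocol: email
          Endpoint: team-a@example.com   # placeholder address
  TeamBTopic:
    Type: AWS::SNS::Topic
    Properties:
      Subscription:
        - Protocol: email
          Endpoint: team-b@example.com   # placeholder address
  # Central alarm topic; CloudWatch alarms publish here and the
  # scheduling application receives an HTTP POST for each alarm
  AlarmTopic:
    Type: AWS::SNS::Topic
    Properties:
      Subscription:
        - Protocol: http
          Endpoint: http://scheduler.example.com/alarm   # placeholder endpoint
  HighCpuAlarm:
    Type: AWS::CloudWatch::Alarm
    Properties:
      Namespace: AWS/EC2
      MetricName: CPUUtilization
      Statistic: Average
      Period: 300
      EvaluationPeriods: 1
      Threshold: 80
      ComparisonOperator: GreaterThanThreshold
      AlarmActions:
        - !Ref AlarmTopic   # alarm notifies the central topic, not the teams
```

The application then looks up the on-shift team and publishes the alert to `TeamATopic` or `TeamBTopic` accordingly.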

You have the following application to be set up in AWS: 1) A web tier hosted on EC2 instances 2) Session data to be written to DynamoDB 3) Log files to be written to Microsoft SQL Server. How can you allow the application to write data to a DynamoDB table? A. Add an IAM user to a running EC2 instance. B. Add an IAM user that allows write access to the DynamoDB table. C. Create an IAM role that allows read access to the DynamoDB table. D. Create an IAM role that allows write access to the DynamoDB table.

D. Create an IAM role that allows write access to the DynamoDB table. Answer - D IAM roles are designed so that your applications can securely make API requests from your instances, without requiring you to create, distribute, and manage the security credentials that the applications use. For more information on IAM roles, please refer to the below link: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
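A minimal sketch of the write-only permissions policy such a role might carry. The region, account ID, table name, and the exact set of write actions are illustrative placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:PutItem",
        "dynamodb:UpdateItem",
        "dynamodb:BatchWriteItem"
      ],
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/SessionData"
    }
  ]
}
```

Attaching this policy to a role, and the role to the EC2 instances via an instance profile, lets the web tier write session data without any stored credentials.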

Which of these is not an intrinsic function in AWS CloudFormation? A. Fn::Equals B. Fn::If C. Fn::Not D. Fn::Parse

D. Fn::Parse Answer - D You can use intrinsic functions, such as Fn::If, Fn::Equals, and Fn::Not, to conditionally create stack resources. These conditions are evaluated based on input parameters that you declare when you create or update a stack. After you define all your conditions, you can associate them with resources or resource properties in the Resources and Outputs sections of a template. For more information on CloudFormation template functions, please refer to the URL: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference.html and http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-conditions.html
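The three real intrinsic functions from the options can be seen together in a small template sketch; the parameter name, condition names, and instance types are hypothetical:

```yaml
Parameters:
  EnvType:
    Type: String
    AllowedValues: [prod, dev]
Conditions:
  IsProd: !Equals [!Ref EnvType, prod]      # Fn::Equals
  IsNotProd: !Not [!Condition IsProd]       # Fn::Not
Resources:
  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-0123456789abcdef0        # placeholder AMI
      InstanceType: !If [IsProd, m4.large, t2.micro]   # Fn::If
```

There is no `Fn::Parse` in CloudFormation, which is why option D is the correct choice for "not an intrinsic function".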

You've been asked to improve the current deployment process by making it easier to deploy and reducing the time it takes. You have been tasked with creating a continuous integration (CI) pipeline that can build AMIs. Which of the below is the best way to get this done? Assume that at most your development team will be deploying builds 5 times a week. A. Use a dedicated EC2 instance with an EBS Volume. Download and configure the code and then create an AMI out of that. B. Use OpsWorks to launch an EBS-backed instance, then use a recipe to bootstrap the instance, and then have the CI system use the CreateImage API call to make an AMI from it. C. Upload the code and dependencies to Amazon S3, launch an instance, download the package from Amazon S3, then create the AMI with the CreateSnapshot API call. D. Have the CI system launch a new instance, then bootstrap the code and dependencies on that instance, and create an AMI using the CreateImage API call.

D. Have the CI system launch a new instance, then bootstrap the code and dependencies on that instance, and create an AMI using the CreateImage API call. Answer - D Since builds happen only a few times a week, having the CI system launch an instance on demand, bootstrap it, and create an AMI with the CreateImage API call is the most cost-effective approach. Open-source CI systems such as Jenkins can drive this: Jenkins is an extensible automation server that can be used as a simple CI server or turned into the continuous delivery hub for any project. For more information on the Jenkins CI tool, please refer to the below link: https://jenkins.io/ Options A and C are partially correct, but with only 5 deployments per week, keeping dedicated instances running (and incurring cost) is not required; option C also uses the CreateSnapshot API call, which produces an EBS snapshot rather than a registered AMI. Option B is partially correct, but standing up a separate OpsWorks stack for such a low number of deployments is also not required.

You have decided to migrate your application to the cloud. You cannot afford any downtime. You want to migrate gradually so that you can test the application with a small percentage of users and increase that percentage over time. Which of these options should you implement? A. Use Direct Connect to route traffic to the on-premises location. In Direct Connect, configure the amount of traffic to be routed to the on-premises location. B. Implement a Route 53 failover routing policy that sends traffic back to the on-premises application if the AWS application fails. C. Configure an Elastic Load Balancer to distribute the traffic between the on-premises application and the AWS application. D. Implement a Route 53 weighted routing policy that distributes the traffic between your on-premises application and the AWS application depending on weight.

D. Implement a Route 53 weighted routing policy that distributes the traffic between your on-premises application and the AWS application depending on weight. Answer - D Option A is incorrect because Direct Connect cannot control the flow of traffic. Option B is incorrect because you want to split the percentage of traffic; failover would direct all of the traffic to the backup servers. Option C is incorrect because you cannot control the percentage distribution of traffic with an Elastic Load Balancer. Weighted routing lets you associate multiple resources with a single domain name (example.com) or subdomain name (acme.example.com) and choose how much traffic is routed to each resource. This can be useful for a variety of purposes, including load balancing and testing new versions of software. For more information on routing policies, please refer to the below link: http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html
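A weighted split like the one described can be sketched as two CloudFormation record sets sharing one name; the hosted zone, record name, IP addresses, and the 90/10 split are hypothetical placeholders:

```yaml
Resources:
  # 90% of DNS queries resolve to on-premises, 10% to AWS;
  # shift the weights gradually to move more users to AWS
  OnPremRecord:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneName: example.com.
      Name: app.example.com.
      Type: A
      SetIdentifier: on-premises
      Weight: 90
      TTL: '60'
      ResourceRecords:
        - 203.0.113.10    # placeholder on-premises IP
  AwsRecord:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneName: example.com.
      Name: app.example.com.
      Type: A
      SetIdentifier: aws
      Weight: 10
      TTL: '60'
      ResourceRecords:
        - 198.51.100.20   # placeholder AWS endpoint IP
```

Route 53 answers each query with one record, chosen in proportion to its weight relative to the total, so updating the two `Weight` values migrates traffic without downtime.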

You have an Auto Scaling group of instances that processes messages from an Amazon Simple Queue Service (SQS) queue. The group scales on the size of the queue. Processing involves calling a third-party web service. The web service is complaining about the number of failed and repeated calls it is receiving from you. You have noticed that when the group scales in, instances are being terminated while they are processing. What cost-effective solution can you use to reduce the number of incomplete process attempts? A. Create a new Auto Scaling group with minimum and maximum of 2 and instances running web proxy software. Configure the VPC route table to route HTTP traffic to these web proxies. B. Modify the application running on the instances to enable termination protection while it processes a task and disable it when the processing is complete. C. Increase the minimum and maximum size for the Auto Scaling group, and change the scaling policies so they scale less dynamically. D. Modify the application running on the instances to put itself into an Auto Scaling Standby state while it processes a task and return itself to InService when the processing is complete.

D. Modify the application running on the instances to put itself into an Auto Scaling Standby state while it processes a task and return itself to InService when the processing is complete. Answer - D You can put an instance into the Standby state via the application, do the processing, and then return the instance to the InService state, where it is again governed by the Auto Scaling group. For more information on the Auto Scaling group lifecycle, please refer to the below link: http://docs.aws.amazon.com/autoscaling/latest/userguide/AutoScalingGroupLifecycle.html

Which Auto Scaling process would be helpful when testing new instances before sending traffic to them, while still keeping them in your Auto Scaling group? A. Suspend the process AZRebalance B. Suspend the process HealthCheck C. Suspend the process ReplaceUnhealthy D. Suspend the process AddToLoadBalancer

D. Suspend the process AddToLoadBalancer Answer - D If you suspend AddToLoadBalancer, Auto Scaling launches the instances but does not add them to the load balancer or target group. If you resume the AddToLoadBalancer process, Auto Scaling resumes adding instances to the load balancer or target group when they are launched. However, Auto Scaling does not add the instances that were launched while this process was suspended; you must register those instances manually. Option A is invalid because AZRebalance just balances the number of EC2 instances in the group across the Availability Zones in the region. Option B is invalid because HealthCheck just checks the health of the instances; Auto Scaling marks an instance as unhealthy if Amazon EC2 or Elastic Load Balancing tells Auto Scaling that the instance is unhealthy. Option C is invalid because ReplaceUnhealthy just terminates instances that are marked as unhealthy and later creates new instances to replace them. For more information on process suspension, please refer to the below document link from AWS: http://docs.aws.amazon.com/autoscaling/latest/userguide/as-suspend-resume-processes.html

You have an application running a specific process that is critical to the application's functionality, and you have added the health check process to your Auto Scaling group. The instances are showing healthy, but the application itself is not working as it should. What could be the issue with the health check, since it is still showing the instances as healthy? A. You do not have the time range in the health check properly configured B. It is not possible for a health check to monitor a process that involves the application C. The health check is not configured properly D. The health check is not checking the application process

D. The health check is not checking the application process Answer - D If you have custom health checks, you can send the information from your health checks to Auto Scaling so that Auto Scaling can use this information. For example, if you determine that an instance is not functioning as expected, you can set the health status of the instance to Unhealthy. The next time that Auto Scaling performs a health check on the instance, it will determine that the instance is unhealthy and then launch a replacement instance. For more information on Auto Scaling health checks, please refer to the below document link from AWS: http://docs.aws.amazon.com/autoscaling/latest/userguide/healthcheck.html

You work for a startup that has developed a new photo-sharing application for mobile devices. Over recent months your application has increased in popularity; this has resulted in a decrease in the performance of the application due to the increased load. Your application has a two-tier architecture that is composed of an Auto Scaling PHP application tier and a MySQL RDS instance initially deployed with AWS CloudFormation. Your Auto Scaling group has a min value of 4 and a max value of 8. The desired capacity is now at 8 because of the high CPU utilization of the instances. After some analysis, you are confident that the performance issues stem from a constraint in CPU capacity, although memory utilization remains low. You therefore decide to move from the general-purpose M3 instances to the compute-optimized C3 instances. How would you deploy this change while minimizing any interruption to your end users? A. Sign into the AWS Management Console, copy the old launch configuration, and create a new launch configuration that specifies the C3 instances. Update the Auto Scaling group with the new launch configuration. Auto Scaling will then update the instance type of all running instances. B. Sign into the AWS Management Console, and update the existing launch configuration with the new C3 instance type. Add an UpdatePolicy attribute to your Auto Scaling group that specifies AutoScalingRollingUpdate. C. Update the launch configuration specified in the AWS CloudFormation template with the new C3 instance type. Run a stack update with the new template. Auto Scaling will then update the instances with the new instance type. D. Update the launch configuration specified in the AWS CloudFormation template with the new C3 instance type. Also add an UpdatePolicy attribute to your Auto Scaling group that specifies AutoScalingRollingUpdate. Run a stack update with the new template.

D. Update the launch configuration specified in the AWS CloudFormation template with the new C3 instance type. Also add an UpdatePolicy attribute to your Auto Scaling group that specifies AutoScalingRollingUpdate. Run a stack update with the new template. Answer - D The AWS::AutoScaling::AutoScalingGroup resource supports an UpdatePolicy attribute. This is used to define how an Auto Scaling group resource is updated when an update to the CloudFormation stack occurs. A common approach to updating an Auto Scaling group is to perform a rolling update, which is done by specifying the AutoScalingRollingUpdate policy. This retains the same Auto Scaling group and replaces old instances with new ones, according to the parameters specified. For more information on rolling updates, please visit the below link: https://aws.amazon.com/premiumsupport/knowledge-center/auto-scaling-group-rolling-updates/
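A rolling update of this kind can be sketched as the following template fragment; the logical names, AMI ID, batch size, and pause time are hypothetical placeholders chosen for the min-4/max-8 group in the question:

```yaml
Resources:
  WebServerGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    UpdatePolicy:
      AutoScalingRollingUpdate:
        MinInstancesInService: 4   # keep the group's minimum serving traffic
        MaxBatchSize: 2            # replace two instances at a time
        PauseTime: PT5M            # wait between batches before continuing
    Properties:
      MinSize: '4'
      MaxSize: '8'
      AvailabilityZones: !GetAZs ''
      LaunchConfigurationName: !Ref WebServerLaunchConfig
  WebServerLaunchConfig:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      ImageId: ami-0123456789abcdef0   # placeholder AMI
      InstanceType: c3.large           # the new compute-optimized type
```

On `UpdateStack`, CloudFormation replaces a new launch configuration and cycles instances in batches, so at least four instances stay in service throughout the change.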

Your application uses CloudFormation to orchestrate your application's resources. During your testing phase before the application went live, your Amazon RDS instance type was changed and caused the instance to be re-created, resulting in the loss of test data. How should you prevent this from occurring in the future? A. Within the AWS CloudFormation parameter with which users can select the Amazon RDS instance type, set AllowedValues to only contain the current instance type. B. Use an AWS CloudFormation stack policy to deny updates to the instance. Only allow UpdateStack permission to IAM principals that are denied SetStackPolicy. C. In the AWS CloudFormation template, set the AWS::RDS::DBInstance's DBInstanceClass property to be read-only. D. Subscribe to the AWS CloudFormation notification "BeforeResourceUpdate," and call CancelStackUpdate if the resource identified is the Amazon RDS instance. E. In the AWS CloudFormation template, set the DeletionPolicy of the AWS::RDS::DBInstance's DeletionPolicy property to "Retain."

E. In the AWS CloudFormation template, set the DeletionPolicy of the AWS::RDS::DBInstance's DeletionPolicy property to "Retain." Answer - E When you delete a stack, by default AWS CloudFormation deletes all stack resources so that you aren't left with any strays. This also means any data that you have stored in your stack is also deleted (unless you take manual snapshots). For example, data stored in Amazon EC2 volumes or Amazon RDS database instances is deleted. To retain a resource or to create a snapshot when a stack is deleted, specify a DeletionPolicy for the corresponding resource in your CloudFormation template. You can specify Retain with any resource, but you can only create snapshots of resources that support snapshots, such as the AWS::EC2::Volume, AWS::RDS::DBInstance, and AWS::Redshift::Cluster resources. For more information, please visit the link: https://aws.amazon.com/blogs/devops/delete-your-stacks-but-keep-your-data/
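Per the answer above, the DeletionPolicy is set directly on the resource in the template. A minimal sketch follows; the logical name, instance class, engine settings, and credentials are hypothetical placeholders:

```yaml
Resources:
  TestDatabase:
    Type: AWS::RDS::DBInstance
    DeletionPolicy: Retain   # keep the instance (use Snapshot to keep a snapshot instead)
    Properties:
      DBInstanceClass: db.m3.medium        # placeholder instance class
      Engine: mysql
      AllocatedStorage: '20'
      MasterUsername: admin                # placeholder credential
      MasterUserPassword: placeholder-pw   # placeholder; use a secret in practice
```

With `DeletionPolicy: Retain`, CloudFormation leaves the database (and its data) in place instead of deleting it along with the stack.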

