DevOps Pro

You are considering using wait conditions and creation policies in your CloudFormation templates. Which statement best describes the two?

A wait condition is itself a CloudFormation resource, while a creation policy is an attribute of a CloudFormation resource. They perform very similar functions; the key distinction is that the wait condition is a standalone resource, whereas the creation policy is attached to one. The creation policy is recommended for EC2 and Auto Scaling.
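
As a minimal sketch of the difference (all resource names and property values below are illustrative, not from the question), the CreationPolicy hangs off the instance as an attribute, while the wait condition is declared as its own resource:

```python
import json
import boto3

# Hypothetical template showing both mechanisms side by side.
template = {
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {"ImageId": "ami-12345678", "InstanceType": "t3.micro"},
            # Attribute: stack creation pauses here until the instance
            # runs cfn-signal (or the 15-minute timeout expires).
            "CreationPolicy": {"ResourceSignal": {"Count": 1, "Timeout": "PT15M"}},
        },
        "AppWaitHandle": {"Type": "AWS::CloudFormation::WaitConditionHandle"},
        "AppWaitCondition": {
            # Resource: a standalone wait condition tied to the handle above.
            "Type": "AWS::CloudFormation::WaitCondition",
            "DependsOn": "WebServer",
            "Properties": {"Handle": {"Ref": "AppWaitHandle"}, "Timeout": "900"},
        },
    }
}

boto3.client("cloudformation").create_stack(
    StackName="creation-policy-demo", TemplateBody=json.dumps(template)
)
```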

Which AWS Service can warn us in advance of AWS Initiated maintenance?

AWS Personal Health Dashboard The Personal Health Dashboard provides alerts and guidance for AWS events that might affect your environment

We've implemented Inspector, Macie, and GuardDuty in our AWS account but need to consolidate all of the information in one view. What service can we use?

AWS Security Hub When you enable Security Hub, it begins to consume, aggregate, organize, and prioritize findings from AWS services that you have enabled, such as Amazon GuardDuty, Amazon Inspector, and Amazon Macie

You are designing an event-driven architecture and need to create an event bus. Which AWS service would you use?

Amazon EventBridge For an event-driven architecture, EventBridge is designed "out of the box" to be able to create an event bus

An audit has revealed the need to identify and remediate security vulnerabilities in S3 bucket data. What service can you use?

Amazon Macie Macie applies machine learning and pattern matching techniques to S3 buckets to identify sensitive data

Your CI/CD pipeline generally runs well, but your manager would like a report of some CodeBuild metrics, such as how many builds were attempted, how many builds were successful, and how many builds failed in an AWS account over a period of time. How would you go about gathering the data for your manager?

CloudWatch captures these metrics by default, and you can view them in the CloudWatch console; they are default CloudWatch metrics that come with CodeBuild.
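
For the report itself, a short boto3 sketch (the 30-day window and daily period are arbitrary choices) can pull the account-level totals:

```python
import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch")

# Account-level CodeBuild metrics have no dimensions; per-project metrics
# would add a ProjectName dimension instead.
for metric in ["Builds", "SucceededBuilds", "FailedBuilds"]:
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/CodeBuild",
        MetricName=metric,
        StartTime=datetime.utcnow() - timedelta(days=30),
        EndTime=datetime.utcnow(),
        Period=86400,          # one data point per day
        Statistics=["Sum"],
    )
    total = sum(point["Sum"] for point in stats["Datapoints"])
    print(f"{metric}: {int(total)} over the last 30 days")
```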

There have been issues with deploying applications to production in your company. You have been instructed to pause deployments for final acceptance testing before moving to production. Which technique will allow testing before production and require a manual deployment?

Continuous Delivery Continuous delivery ensures software is always ready to be deployed by testing every change that is made. The final deployment to a production environment remains a manual step; continuous deployment, by contrast, automates that step as well.

You run a load balanced, auto scaled website in EC2. Your CEO informs you that, during an upcoming public offering, your website must not go down, even if there is a Region failure. What's the best way to achieve this?

Deploy your load balancers and auto scaled website in two different regions. Create a Route 53 latency-based routing record. Point the record to each of your load balancers. A latency-based routing policy will keep your website as fast as possible for your customers and, with health checking on each record, will shift traffic to the surviving Region should one of the Regions go down. Reference: Choosing a Routing Policy
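
A sketch of what those records look like via boto3 (the hosted zone IDs, domain, and load balancer DNS names are all illustrative):

```python
import boto3

route53 = boto3.client("route53")

# With latency-based routing plus target health evaluation, Route 53
# answers with the lowest-latency healthy Region and fails over
# automatically if one Region goes down.
load_balancers = [
    ("us-east-1", "my-lb-use1.us-east-1.elb.amazonaws.com", "Z35SXDOTRQ7X7K"),
    ("eu-west-1", "my-lb-euw1.eu-west-1.elb.amazonaws.com", "Z32O12XQLNTSW2"),
]

for region, lb_dns, lb_hosted_zone in load_balancers:
    route53.change_resource_record_sets(
        HostedZoneId="ZEXAMPLE12345",
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "A",
                "SetIdentifier": f"www-{region}",  # required for latency records
                "Region": region,                   # latency-based routing key
                "AliasTarget": {
                    "HostedZoneId": lb_hosted_zone,  # the ELB's own zone ID
                    "DNSName": lb_dns,
                    "EvaluateTargetHealth": True,    # enables Region failover
                },
            },
        }]},
    )
```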

Which AWS serverless service is best suited to be integrated with AWS versions of Docker and Kubernetes?

Fargate AWS Fargate is compatible with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS)

Your company has a team of Windows software engineers that have recently switched from developing on-premises applications to cloud-native microservices. The service desk has been inundated with questions, but they can't troubleshoot because they don't know enough about Amazon CloudWatch. The questions they have been receiving mainly revolve around why EC2 logs don't appear in log groups and how they can monitor .NET applications. From the following, choose the options that will help troubleshoot the CloudWatch issues.

For each EC2 instance running .NET code, install the SSM Agent and attach the AmazonEC2RoleforSSM role; you will also need to create a resource group and IAM policy, because Amazon CloudWatch Application Insights for .NET and SQL Server requires the SSM Agent with the correct roles, IAM policies, and resource groups. In the Amazon CloudWatch console, select Metrics > AWS Namespaces; assuming defaults, your metrics should appear under the CWAgent namespace. Finally, check that the common-config.toml file exists and is configured correctly, and ensure the CloudWatch agent is running by starting it with the PowerShell script: amazon-cloudwatch-agent-ctl.ps1 -m ec2 -a start.

You have an idea regarding your AWS account security. You would like to monitor your account for any possible attacks against your resources (e.g., port scans or brute-force SSH and RDP attacks). If anything is detected, you want the report pushed to a Slack channel, where anyone in your company can monitor and take action if it's their responsibility. How do you go about implementing this?

Implement Amazon GuardDuty. On detected events, trigger a Lambda function to post the information to your Slack channel. Amazon GuardDuty is the best way to implement this. It can trigger Lambda functions on events which can be used to post to a Slack channel. Reference: Amazon GuardDuty
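
The Lambda side might look like the following sketch, assuming an EventBridge rule that matches GuardDuty Finding events and a hypothetical SLACK_WEBHOOK_URL environment variable holding an incoming-webhook URL:

```python
import json
import os
import urllib.request

def handler(event, context):
    # GuardDuty finding events carry severity, title, and description
    # in the event detail.
    detail = event.get("detail", {})
    message = {
        "text": (
            f":rotating_light: GuardDuty finding "
            f"(severity {detail.get('severity', '?')}): "
            f"{detail.get('title', 'unknown')}\n"
            f"{detail.get('description', '')}"
        )
    }
    req = urllib.request.Request(
        os.environ["SLACK_WEBHOOK_URL"],  # hypothetical webhook configured in Slack
        data=json.dumps(message).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return {"status": resp.status}
```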

What will an Auto Scaling group do when an InService instance is marked unhealthy?

It terminates that instance and launches a new one. Replacing an InService instance as soon as it is marked unhealthy is how the Auto Scaling group maintains its desired capacity.

You currently host a website from an EC2 instance. Your website is entirely static content. It contains images, static html files, and some video files. You would like to make the site as fast as possible for everyone around the world in the most cost-effective way. Which solution meets these requirements?

Move the website to an S3 bucket and serve it through Amazon CloudFront. S3 and CloudFront are the best solution: they ensure the fastest content delivery around the world and are cheaper because there are no ongoing EC2 costs.

Your AWS environment uses CloudWatch Events heavily, both through the management console and in code via APIs. What steps must you take, with the introduction of EventBridge?

None. You can continue to use CloudWatch Events. EventBridge was built on CloudWatch Events and uses the same underlying service and API, so existing rules and code continue to work unchanged.

You are deploying an application and need to address a requirement that calls for encryption of data at rest. Which of the following AWS services allow native encryption of data at rest?

S3, Elastic File System (EFS), and Elastic Block Store (EBS). Each of these services allows the user to configure encryption at rest using either the AWS Key Management Service (KMS) or, in some cases, customer-provided keys.

Your organization is building millions of IoT devices that will track temperature and humidity readings in offices around the world. The data is then used to make automated decisions about ideal air conditioning settings, which trigger API calls to control the units. Currently, the software that accepts the IoT data and makes the decisions and API calls runs across a fleet of autoscaled EC2 instances. After just a few thousand IoT devices in production, you're noticing the EC2 instances are beginning to struggle and there are too many being spun up by your autoscaler. If this continues, you're going to hit your account service limits, and costs will be way more than your budget allows. How can you redesign this service to be more cost effective, more efficient, and most importantly, scalable?

Switch to Kinesis Data Analytics. Stream the IoT data with Kinesis Data Streams, and perform your decision making and API triggering with Lambda. Shut down your EC2 fleet. Kinesis Data Analytics over data streamed with Kinesis Data Streams is the best choice here; it's also completely serverless, so you save costs by shutting down your EC2 servers. Reference: Amazon Kinesis Data Analytics

You are leading a team of developers and DevOps engineers. Half of your team has extensive experience writing code in Ruby, and the other half has been writing CloudFormation templates in both YAML and JSON. Which team members would be better suited to ramp up quickly on the Lambda SAM framework?

The ones working in CloudFormation with JSON and YAML. SAM templates are written in YAML, so the developers working in CloudFormation would have a seamless transition to writing SAM templates.

You have created a pipeline in CodePipeline and have set up manual approvals. You want to be able to restrict who can approve the pipeline. What steps can you take?

Use the IAM policy AWSCodePipelineApproverAccess, create a group of pipeline admin users, and attach the policy to the group. The pipeline admin users can be placed in a group and given the appropriate permissions to manually approve, while all other users are restricted from approving.

You are using API Gateway and Lambda to create a chat application that requires real-time two-way communication. Which type of API should you use?

WebSocket API The WebSocket API can push data to frontend clients unsolicited (not just in response to a request).

Which AWS service would allow you to monitor all of the EC2 instances in your AWS account and provide a report on whether the instances use the allowed instance types?

AWS Config With AWS Config, you can create a rule of acceptable instance types, and Config will continuously monitor the instances in the account against this rule. The Config dashboard will have a report of findings. Findings could also be automated for faster notification
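
As a sketch, the managed DESIRED_INSTANCE_TYPE rule can be created with boto3 (the rule name and allowed types listed are illustrative):

```python
import json
import boto3

config = boto3.client("config")

# The managed rule flags any EC2 instance whose type is not in the
# comma-separated instanceType parameter.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "allowed-instance-types",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "DESIRED_INSTANCE_TYPE",
        },
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Instance"]},
        "InputParameters": json.dumps({"instanceType": "t3.micro,t3.small"}),
    }
)
```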

You have configured AWS Single Sign-On in your account, but now you want to enable on-premises, Active Directory users to have single sign-on into AWS. What tool can help with this?

AWS Directory Service using AD Connector AD Connector can be used to provide SSO access to Active Directory users.

You are extracting database connection strings from code, and you want to store the secrets securely as well as be able to perform automatic secret rotation. What AWS service can you use?

AWS Secrets Manager A feature of Secrets Manager that can make it the right choice over Parameter Store is automatic secret rotation.
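
A minimal boto3 sketch (the secret name and rotation function ARN are hypothetical):

```python
import boto3

secrets = boto3.client("secretsmanager")

# Attach a rotation Lambda to an existing secret and rotate it
# automatically every 30 days.
secrets.rotate_secret(
    SecretId="prod/app/db-connection-string",
    RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:rotate-db-secret",
    RotationRules={"AutomaticallyAfterDays": 30},
)

# Applications then fetch the current value at runtime instead of
# hard-coding the connection string.
value = secrets.get_secret_value(SecretId="prod/app/db-connection-string")
```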

A software company has developed a distributed application. They would like to be able to trace the application from end to end for the purposes of troubleshooting and optimization. Which AWS service would meet their needs?

AWS X-Ray X-Ray provides an end-to-end view of requests as they travel through your application

You have deployed a new version of your application using CodeDeploy, but you have decided in the future to run custom actions after the second target group serves traffic to the replacement task set. How can you do this?

Add the lifecycle hook action AfterAllowTraffic, and run the necessary tasks during this hook. AfterAllowTraffic is used to run tasks after the second target group serves traffic to the replacement task set

You need to configure a relational database with high availability across multiple Regions. What is your best option?

Amazon Aurora Global Database Amazon Aurora Global Database is designed for globally distributed applications, allowing a single Amazon Aurora database to span multiple AWS Regions.

You need microsecond response times for your DynamoDB tables. What AWS service or component can you use?

Amazon DynamoDB Accelerator (DAX) DAX is an in-memory cache that delivers up to a 10 times performance improvement — from milliseconds to microseconds — even at millions of requests per second.

You are expiring items in a DynamoDB table using TTL, and you would like to archive those items to S3. What tools can you use?

Amazon DynamoDB Streams and AWS Lambda Streams can be used as a trigger by streaming expired items to a Lambda function that can archive the data in S3.
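
A sketch of the archiving function (the bucket name and the 'pk' key attribute are assumptions, and the stream must be configured to include the old image):

```python
import json
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    for record in event["Records"]:
        # TTL deletions are REMOVE records issued by the service principal
        # dynamodb.amazonaws.com, which distinguishes them from user deletes.
        if (
            record["eventName"] == "REMOVE"
            and record.get("userIdentity", {}).get("principalId")
            == "dynamodb.amazonaws.com"
        ):
            item = record["dynamodb"]["OldImage"]       # needs OLD_IMAGE view type
            key = record["dynamodb"]["Keys"]["pk"]["S"]  # assumes a 'pk' string key
            s3.put_object(
                Bucket="my-ttl-archive-bucket",
                Key=f"expired/{key}.json",
                Body=json.dumps(item),
            )
```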

Which health checks can an Auto Scaling group use to determine the health of its instances?

Amazon EC2, Elastic Load Balancing, or a custom health check Instances are assumed to be healthy unless Amazon EC2 Auto Scaling receives notification that they are unhealthy, which would come from EC2, Elastic Load Balancing, or custom health checks.

Which AWS service can be used to continually monitor your AWS accounts and workloads for malicious activity?

Amazon GuardDuty GuardDuty is a threat detection service that continuously monitors your AWS accounts and workloads for malicious activity and delivers detailed security findings for visibility and remediation

Which AWS service can be used to provide security assessments of your applications' settings and configurations?

Amazon Inspector Inspector automatically discovers and scans Amazon EC2 instances and container images in ECR for software vulnerabilities and unintended network exposure

You currently work for a local government department which has cameras installed at all intersections with traffic lights around the city. The aim is to monitor traffic, reduce congestion if possible, and detect any traffic accidents. There will be some effort required to meet these requirements, as lots of video feeds will have to be monitored. You're thinking about implementing an application using Amazon Rekognition Video, a deep learning video analysis service, to meet this monitoring requirement. However, before you begin looking into Rekognition, which other AWS service is a key component of this application?

Amazon Kinesis Video Streams Amazon Kinesis Video Streams makes it easy to capture, process, and store video streams which can then be used with Amazon Rekognition Video. Resource: Amazon Rekognition - Video - AWS

You need to create an event topic for an event-driven architecture. What AWS service would you use?

Amazon Simple Notification Service (SNS) SNS can be used in the architecture to send fan out notifications

What is the CodeDeploy term for a bundle of the current group of files you want to deploy?

Application revision An application revision contains a version of the source files CodeDeploy will deploy to your instances or scripts CodeDeploy will run on your instances

Your company has been developing its website and companion application for a year, and everything is currently deployed manually. You have EC2 servers and load balancers running your website frontend, and your backend is a database using RDS and some Lambda functions. Your manager thinks it's about time you work on some automation so time isn't wasted on managing AWS resources. She would also like the added benefit of being able to deploy your entire environment to a different Region for redundancy or disaster recovery purposes. Your RTO is an hour, and RPO is two hours. Which AWS services would you suggest using to achieve this?

Automatically run an AWS Step Functions state machine to take an hourly snapshot, and then run a Lambda function that copies the snapshot to your disaster recovery Region. Use AWS CloudFormation StackSets to build your infrastructure in your production Region, parameterizing the stack set to use the snapshot ID to deploy RDS resources from snapshot, and use user data to deploy to your web servers from S3 on launch. In the case of an EC2 or RDS outage, manually update your public DNS to point to a static maintenance web page hosted in S3. Then, use the CloudFormation stack set to deploy to your disaster recovery Region using the most recently copied snapshot. Once the load balancer health checks pass, update Route 53 to point to your new load balancer. AWS CloudFormation StackSets will suit the requirements, using the same S3 bucket for source code, and hourly RDS snapshots allow an RPO of one hour, within the two-hour requirement.

Which support plan levels provide only the 7 core Trusted Advisor checks?

Basic and Developer The Basic and Developer plans provide only the 7 core checks

You are the only developer in your new team, and you are developing a PHP application with a Postgres database. You would like to just focus on your code and spend a minimal amount of time in the AWS Management Console. You would also like any new developer who joins your team to be able to deploy to the environment in a simple way, as they are traditional developers and not cloud engineers. It also needs to support rolling deployments as you grow, giving you the ability to deploy your software to existing servers without taking them all offline, as well as rolling back if required. Which AWS service do you choose to meet these needs, and how do you roll back if required?

Build and deploy your application using the AWS Elastic Beanstalk service. Deploy to your servers with rolling deployments, and initiate a rollback if required using manual redeploy. AWS Elastic Beanstalk is the best candidate for this requirement. You can quickly deploy code without having to learn about the infrastructure that runs those applications. Rolling deployments are possible, and a manual redeploy of the old version of your code will be required to roll back.

You have an application built on Docker. You would like to not only launch your Docker instances at scale but load balance them as well. Your director has also inquired as to whether it is possible to build and store the containers programmatically using AWS API calls in the future or if additional products are required. Which AWS services will enable you to store and run the containerized application, and what do you tell the director?

Build containers and store them using Amazon ECR (Elastic Container Registry), and run containers using Amazon ECS. Automate the container builds with AWS CodeBuild, pushing to ECR and using ECS APIs. Amazon ECS and ECR are the Amazon Elastic Container Service and Registry, which will meet all of the requirements specified. It also has an API, which can be used to implement your requirements.

Which CodeBuild component represents a combination of operating system, programming language runtime, and tools that CodeBuild uses to run a build?

Build environment A build environment contains a Docker image that provides a combination of operating system, programming language runtime, and tools that CodeBuild uses to run a build

Your CEO has come up with a brilliant idea for when your company migrates its first application to the cloud. He is a big fan of Infrastructure as Code and automation, and he thinks your deployments will never fail again if you spin up completely new servers, deploy your software to them, redirect your DNS to the new servers, and then shut down the old servers. Unfortunately for him, you know this has existed for a long time and had already planned to implement this in the coming weeks. Which build and deployment strategy do you present to him?

Build the data tier using Amazon RDS via CloudFormation. Deploy your web tier behind a load balancer, hosted on EC2 instances. During deployments, utilize a blue/green deployment strategy by deploying an entirely new load balancer and group of EC2 instances that run your new code base. Once the build and deployment of the fresh environment is complete, testing can be performed by accessing the site from the new load balancer URL. Once your testing team completes their checks, update your Route 53 routing policy to direct people from the old load balancer to the new one. Blue/green deployments are certainly what your CEO thinks he has invented; simply updating the Route 53 alias record moves traffic to the new load balancer.

Your company runs a popular website for selling cars, and its userbase is growing quickly. It's currently sitting on on-premises hardware (e.g., IIS web servers and a SQL Server backend). Your managers would like to make the final push into the Cloud. AWS has been chosen, and you need to make use of services that will scale well into the future. Your site tracks all clicks on the ads your customers purchase to sell their cars. The ad impressions must then be consumed by the internal billing system and pushed to an Amazon Redshift data warehouse for analysis. Which AWS services will help you get your website up and running in the Cloud, and will assist with the consumption and aggregation of data once you go live?

Build the website to run on stateless EC2 instances which autoscale with traffic, and migrate your databases into Amazon RDS. Push ad/referrer data using Amazon Kinesis Data Firehose to S3, where it can be consumed by the internal billing system to determine referral fees. Additionally, create another Kinesis delivery stream to push the data to the Amazon Redshift warehouse for analysis. Amazon Kinesis Data Firehose is used to reliably load streaming data into data lakes, data stores, and analytics tools like Amazon Redshift. Process the incoming data from Firehose with Kinesis Data Analytics to provide real-time dashboarding of website activity. Resource: Streaming Data Firehose - Amazon Kinesis - AWS Resource: Real-Time Web Analytics with Kinesis | AWS Solutions

Due to some recent performance issues, you have been asked to move your existing Product Information Management system to Amazon Aurora. The database uses the InnoDB storage engine, with new products being added regularly throughout the day. As the database is read heavy, the decision has also been made to add a read replica during the migration process. The changeover completes successfully, but after a few hours you notice that lag starts to appear between the read/write master and the read replica. What actions could you carry out to reduce this lag?

Change the read replica instance class so it has the same storage size as the source instance. One of the most common causes of replication lag between two Aurora databases is mismatched settings and capacity, so making the storage size comparable between the source DB and the read replica is a good start to resolving the issue. Reference: Troubleshooting for Aurora. Set query_cache_type to 0 in the DB parameter group to disable the query cache. Turning off the query cache helps for tables that are modified often, because the cache is otherwise locked and refreshed frequently, which causes lag. Reference: Troubleshooting for Aurora

After many years of running co-located servers, your company has decided to move their services into AWS. The prime reason for doing this is to scale elastically and to define their new infrastructure as code using CloudFormation. Your colleagues have been defining infrastructure in a test account, but now they are ready to deploy into production and they are identifying some problems. They have attempted to deploy the main CloudFormation template, but they are seeing the following errors: "Invalid Value or Unsupported Resource Property" and "Resource Failed to Stabilize." When these errors are encountered, the stack fails to roll back cleanly. What is the best way to troubleshoot these problems?

Check that resources exist when specified as parameters, that there is no maintenance being undertaken on any of the defined AWS services, and that the user deploying the CloudFormation stack has enough permissions to modify resources. Most of these answers are valid troubleshooting solutions for various CloudFormation issues, but this is the only answer that addresses the problems listed in the question. "Invalid Value or Unsupported Resource Property" errors appear only if there is a parameter naming mistake or the property names are unsupported. "Resource Failed to Stabilize" errors appear because a timeout is exceeded, or the AWS service isn't available or is interrupted. Finally, any update that fails to roll back could be because of a number of reasons — the most popular, however, is due to the deployment account having permissions to create stacks, but not to modify or delete stacks.

For the purposes of an audit, you need a report on API calls made in your AWS environment as well as who initiated the API call, when it was initiated, and from where. What service can you use to get this information?

CloudTrail A key phrase to associate with CloudTrail is "monitor" (or report on) API calls.

You have a large amount of infrastructure, and monitoring has been neglected since it was provisioned. You want to monitor your AWS resources, on-premises resources, applications, and services. You would like to be able to retain your system and application logs, graph metrics, and be alerted to critical events. Which AWS services and features will most easily assist you in meeting this requirement?

CloudWatch metrics for graphing, CloudWatch Logs for logging, and CloudWatch alarms for alerts CloudWatch metrics are suitable for graphing, CloudWatch Logs will allow you to log both AWS and on-premises resources, and CloudWatch alarms will be suitable for alerts and notifications.

You are working as a developer and have cloned the main repository to your local machine. You are working on a particular Java file, made and tested changes, and are now ready to submit the Java file back to the main repository. What steps do you need to take?

Commit the code to your local repository, and then push the code back to the main repository. This is the proper sequence of steps. A commit is an operation on the local repository before pushing back the update to the main repository.

Your AWS account usage is getting out of hand. All of your developers are using their own accounts that have been set up to bill directly to the company credit card. It's difficult to track how everything is working together and to govern access to services, resources, and Regions across the accounts. How could you configure your account environment to work more efficiently, consolidate billing, implement access control, and take advantage of some pricing benefits from aggregated usage?

Configure AWS Organizations with consolidated billing, invite all the developer accounts, and implement organization-wide service control policies. Using AWS Organizations, you can invite all accounts and implement service control policies to govern access to services, resources, and Regions. Reference: Creating and Configuring an Organization

Your company's suite of web applications has just been overhauled to prevent some security issues and memory leaks that were slowing them all down significantly. Everything is working a lot more efficiently now, though your developers are still on the lookout for any network security issues the applications might be leaving exposed. A weekly scan of ports reachable from outside the VPC would be beneficial. Which AWS service will you use to implement this with minimal additional overhead to the existing CI/CD process, and how will you configure it?

Configure Amazon Inspector Network Assessments. Amazon Inspector is an automated vulnerability assessment service that helps you improve the security and compliance of your applications. A network configuration analysis checks for any ports reachable from outside the VPC, and the Inspector agent is not required for this. Resource: Amazon Inspector - Amazon Web Services (AWS)

Your security-conscious CEO has a strange request: She would like an email every time someone signs in to your organization's AWS Management Console. What is the simplest way to implement this?

Configure a CloudWatch Events rule for the AWS Management Console sign-in service, and set the target to an SNS topic the CEO is subscribed to. The AWS Management Console sign-in is a valid event source in CloudWatch Events and EventBridge, making this the simplest way to implement the requirement.
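
A boto3 sketch of the rule and target (the topic ARN is illustrative; console sign-in events arrive via CloudTrail as global service events, so the rule lives in us-east-1):

```python
import boto3

events = boto3.client("events", region_name="us-east-1")

# Match every console sign-in event in the account.
events.put_rule(
    Name="console-sign-in",
    EventPattern='{"detail-type": ["AWS Console Sign In via CloudTrail"]}',
)

# Deliver matches to an existing SNS topic the CEO subscribes to by email.
events.put_targets(
    Rule="console-sign-in",
    Targets=[{
        "Id": "notify-ceo",
        "Arn": "arn:aws:sns:us-east-1:123456789012:console-sign-ins",
    }],
)
```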

Your company runs a fleet of Spot instances, which can be terminated at any time, to process large amounts of medical data. The processing results are extremely important, and you find that sometimes the servers shut down too quickly for the logs to be copied over to S3 when a batch has completed. It's okay if a batch doesn't complete processing, because it will be re-run on another Spot instance, but the occasional case where a batch completes and the logs for that completion are lost is causing headaches. Which is the best solution to implement to fix this issue?

Configure the CloudWatch Logs agent on the AMI for the Spot instance, and stream your logs to CloudWatch Logs directly. The CloudWatch Logs agent will stream your logs straight into CloudWatch Logs, so nothing will be missed if the server is terminated.

Your organization is currently using CodeDeploy to deploy your application to 20 EC2 servers that sit behind a load balancer. It's making use of the CodeDeployDefault.OneAtATime deployment configuration. Your manager has decided to speed up deployment by deploying to as many servers as possible at once, as long as at least five of them remain in service at any one time. How do you achieve this, ensuring that it will scale?

Create a custom deployment configuration, specifying the minimum healthy hosts as a number and setting it to 5. A minimum healthy hosts value expressed as the number 5 will work as desired; a percentage of 25% happens to equal 5 hosts for the current 20 servers, but the number of hosts it guarantees would change as servers are added or removed.
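
In boto3, such a configuration is a short sketch (the configuration name is illustrative); it is then referenced by name on the deployment group:

```python
import boto3

codedeploy = boto3.client("codedeploy")

# Guarantee at least 5 healthy hosts at any moment, regardless of how
# large the fleet grows or shrinks.
codedeploy.create_deployment_config(
    deploymentConfigName="AtLeastFiveHealthyHosts",
    minimumHealthyHosts={"type": "HOST_COUNT", "value": 5},
)
```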

Your organization has been using CodePipeline to deploy software for a few months now, and it has been running smoothly for the majority of releases. However, when something breaks during the build process, it requires lots of work-hours to determine what the problem is, roll back the deployment, and fix it. This is frustrating for both management and your customers. Your supervisor would like to assign one developer to test that the build works successfully before your CodePipeline proceeds to the deploy stage so you don't encounter this issue again. How would you implement this?

Create a test deploy stage as well as a manual approval stage in CodePipeline. Once the assigned developer checks the test deployment worked, they can authorize the pipeline to continue and deploy into production. CodePipeline allows for manual approval steps to be implemented for exactly this reason.

Your team has developed a web application in the Go programming language. You are now in charge of deploying this application to AWS, and the application is hosted in a Git repository. Which two are the best options to accomplish this task?

Create an AWS CloudFormation template that creates an instance with the AWS::EC2::Instance resource type and an AMI with Docker pre-installed. With UserData, install Git to download the Go application and then set it up. An AMI with Docker pre-installed is a great choice and, as an added bonus, will speed up the installation. Write a Dockerfile that installs the Go image and grabs the application using Git. Use the Dockerfile to perform the deployment on a new AWS Elastic Beanstalk application. This solution uses Elastic Beanstalk instead of CloudFormation. Knowing the options available and what best fits your team's skill set is crucial.

You are building ACatGuru, which your organization describes as Facebook for cats. As part of the sign-up process, your users need to upload a full-size profile image. While you already have these photos being stored in S3, you would like to also create thumbnails of the same image that will be used throughout the site. How will you automate this process using AWS resources?

Create an S3 Event Notification to execute a Lambda function when an object is created. The function will create the thumbnail from the source image and store it in a different S3 bucket. S3 Event Notification triggering Lambda to create thumbnails is a perfect example of how to automate this process.
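
A sketch of the wiring with boto3 (bucket and function names are hypothetical, and the function must already grant s3.amazonaws.com permission to invoke it; writing thumbnails to a different bucket avoids the function re-triggering itself):

```python
import boto3

s3 = boto3.client("s3")

# Fire the thumbnail Lambda whenever a new profile image lands in the
# uploads bucket.
s3.put_bucket_notification_configuration(
    Bucket="acatguru-profile-uploads",
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [{
            "LambdaFunctionArn": (
                "arn:aws:lambda:us-east-1:123456789012:function:make-thumbnail"
            ),
            "Events": ["s3:ObjectCreated:*"],
        }]
    },
)
```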

You currently run an auto scaled application which is database read-heavy. Due to this, you are making use of a read replica for all application reads. It's currently running at around 60% load with your user base, but you expect your company growth to double the amount of users every 6 months. You need to forward plan for this and determine methods to ensure the database doesn't become a bottleneck while still maintaining some redundancy. What is the best way to approach this issue?

Create more read replicas. Use a Route 53 weighted routing policy to ensure the load is spread across your read replicas evenly. More read replicas will ease the load on your current ones, and load balancing them with a weighted routing policy will ensure they're not a single point of failure for your application. Reference: How Can I Distribute Read Requests across Multiple Amazon RDS Read Replicas?

Your CEO loves serverless. He won't stop talking about how your entire company is built on serverless architecture. Now, it's up to you to actually build the web application he's been talking about for six months. Which AWS services do you look at using to both create the application and orchestrate your components?

Create your application using AWS Lambda for compute functions within the application. Data storage can be provided using Amazon DynamoDB for a NoSQL database. File storage can be provided using Amazon S3. AWS Step Functions can orchestrate workflows, and Amazon S3 Glacier will allow you to archive old files cheaply. AWS Lambda is the correct choice for compute in a serverless environment, as is DynamoDB for NoSQL databases and S3 for storage. AWS Step Functions is used to orchestrate your components, such as your Lambda functions, and S3 Glacier is great for file archiving.

You have been asked to deploy a high performance computing (HPC) application on a small number of EC2 instances. The application must be placed within a single Availability Zone, as well as have high network throughput and low-latency network performance for tightly coupled node-to-node communication. Which of the following options would provide this solution?

Deploy the EC2 servers in a cluster placement group. Cluster placement groups are recommended for applications that benefit from low network latency, high network throughput, or both. They are also recommended when the majority of the network traffic is between the instances in the group.

Before your DevOps engineer left the organization, they configured CodeDeploy to use blue/green deployment in a pipeline in CodePipeline. This has worked brilliantly in the past, but tighter budgets have unfortunately caused management to direct your department to only run one set of servers and to update everything at the same time. What is the simplest way to revert to this deployment method?

Edit your CodeDeploy application deployment group, and change the deployment type from blue/green to in-place. You can edit the application deployment group in CodeDeploy, and change the deployment type from blue/green to in-place.

Your CFO wants to know when your estimated AWS charges go over $10,000. Immediately. Without delay. How do you configure this?

Enable billing alerts. Use the default CloudWatch EstimatedCharges metric, and create an alarm when it exceeds $10,000. Set the alarm to notify an SNS topic your CFO is subscribed to. After enabling billing alerts, you will be able to use the CloudWatch EstimatedCharges metric to track your estimated AWS charges, and the alarm will notify your CFO via SNS.
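
A boto3 sketch of the alarm (the topic ARN is illustrative; billing metrics are only published in us-east-1, and billing alerts must already be enabled on the account):

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="estimated-charges-over-10000",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,                      # the metric is published a few times a day
    EvaluationPeriods=1,
    Threshold=10000.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:cfo-billing-alerts"],
)
```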

You have an upcoming audit on your AWS account and need to be able to prove your CloudTrail logs have not been tampered with. How can you do this?

Enable log file integrity validation. To enable log file integrity validation with the CloudTrail console, select the Enable log file validation option when you create or update a trail. By default, this feature is enabled for new trails
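
For an existing trail, the same setting can be flipped with boto3 (the trail name is illustrative); CloudTrail then delivers hourly digest files that can be verified later, for example with the aws cloudtrail validate-logs CLI command:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Turn on integrity validation so digest files are produced from now on.
cloudtrail.update_trail(
    Name="management-events-trail",
    EnableLogFileValidation=True,
)
```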

You manage a team of developers who currently push all of their code into AWS CodeCommit, and then CodePipeline automatically builds and deploys the application. You think it would be useful if everyone received an email when a pull request is created or updated. How would you achieve this in the simplest way?

Enable notifications in the CodeCommit console. Select Pull request update events as the event type, and choose or create a new SNS topic for the notifications. Subscribe everyone to the SNS topic. Notifications in the CodeCommit console are the simplest way to implement this requirement; repository triggers, by contrast, only fire when someone pushes to a repository, not when a pull request is created or updated.

One of your colleagues has been asked to investigate increasing the performance of the main corporate website for customers in the Asia Pacific Region using a CDN. They have decided to use CloudFront to perform this function, but have encountered problems when configuring it. You can see that CloudFront returns an InvalidViewerCertificate error in the console whenever they attempt to add an alternate domain name. What can you suggest to your colleague to assist them in solving the issue?

Ensure that there is a trusted and valid certificate attached to your distribution. The certificate you wish to associate with your alternate domain name must be from a trusted CA, have a valid date, and be formatted correctly. Reference: Troubleshooting Distribution Issues. It is also possible there was a temporary internal issue with CloudFront that prevented it from validating certificates; an internal CloudFront HTTP 500 at the time of configuration should be transient and will resolve if you try again. Reference: Troubleshooting Distribution Issues

EventBridge rules match events to targets. What can be used to help perform this match?

Event patterns Rules use event patterns to select events and send them to targets

You want to begin using the serverless event bus in AWS called EventBridge. What key elements of EventBridge must you understand and work with?

Event, rule, target An event indicates a change in an environment. A rule matches incoming events and sends them to targets for processing

Your production website with over 10,000 viewers a day has been ridiculed because your SSL certificate expired, prompting a warning to be shown to every visitor over a period of 8 hours until you woke up and renewed the certificate. Your competitors jumped on this opportunity to claim your company doesn't care about its customers, and you've seen a 20% drop in new signups since then. Obviously, management is furious. What can you do to ensure this never happens again?

Generate new certificates to use with AWS Certificate Manager, and let it handle auto-renewal. ACM automatically renews the certificates it issues, so an expired certificate can never take the site down again. Reference: AWS Certificate Manager Features

An application team has been using Elastic Beanstalk and deploying their application to an Auto Scaling group using rolling deployments. The team has determined that issues caused when a rolling deployment fails, such as a partially deployed application, are unacceptable. They've also decided it is all right to double the capacity during the deployment, including an entirely new Auto Scaling group. There needs to be only one version running (no mixed deployments). What type of deployment should they use?

Immutable Immutable deployments can prevent issues caused by partially completed rolling deployments. If the new instances don't pass health checks, Elastic Beanstalk terminates them, leaving the original instances untouched

With your company moving more internal services into AWS, your colleagues have started to complain about using different credentials to access different applications. Your team has started to plan to implement AWS SSO, connected to the corporate Active Directory system, but are struggling to implement a working solution. Which of the following are NOT valid troubleshooting steps to confirm that SSO is enabled and working?

Implement AWS Organizations and deploy AWS Managed Microsoft AD in two separate accounts; it does not matter which Regions they are deployed in. This is not valid: AWS Organizations and the AWS Managed Microsoft AD must be in the same account and the same Region. AWS SSO with Active Directory only allows authentication using the DOMAIN\UserName format. This is not valid either: you can use the DOMAIN\UserName or the User Principal Name (UPN) format to authenticate with AD, but you can't use the UPN format if you have two-step verification and Context-aware verification enabled. To allow authentication using a User Principal Name, enable two-step verification in Context-aware verification mode. Also not valid, for the same reason: the UPN format can't be used when two-step verification with Context-aware verification is enabled. Reference: AWS SSO Prerequisites. Reference: Troubleshooting AWS SSO Issues

Your fleet of Windows EC2 instances is running well. The instances are automatically built from an AMI and generally don't require much interaction, except for when they need to be patched with the latest Windows updates or have new software installed or updated. Ideally, you'd like to automate the process to install or update applications and apply Windows updates and patches during the quiet hours when your customers are sleeping, which means around 3 a.m. How would you best automate this process?

Implement AWS Systems Manager Patch Manager and AWS Systems Manager Maintenance Windows. Patch Manager combined with Maintenance Windows in AWS Systems Manager is the recommended way to automate this requirement. Reference: Patching Schedules Using Maintenance Windows

The worldwide cat news powerhouse, Meow Jones, has hired you as a DevOps database consultant. They're currently using legacy in-house PostgreSQL databases. It costs a considerable amount to maintain the server fleet, as well as operational costs for staff, and further hardware costs for scaling as the industry grows. You are tasked with finding an AWS solution that will meet their requirements: high throughput, push-button compute scaling, storage auto scaling, and low-latency read replicas. Any kind of automatic monitoring and repair of database instances will also be appreciated. Which AWS service(s) would you suggest?

Implement Amazon Aurora, using AWS Database Migration Service to migrate the databases. Amazon Aurora will fit the needs perfectly and the AWS Database Migration Service can assist with the migration. Reference: Amazon Aurora Features: PostgreSQL-Compatible Edition

You work for a medical imaging company, dealing with X-rays, MRIs, CT scans, and so on. The images and other related patient reports and documents are stored in various S3 buckets in the US-West Region. Your organization is very security conscious and wants to ensure that while the S3 buckets are locked down, there's no other way the documents are being shared internally or externally, other than the approved methods already in place. Audits are also important, so whatever methods of data protection are in place, they must work together as well as provide actionable alerts if there are any observed issues. How do you best achieve this?

Implement Amazon Macie across your S3 buckets. Amazon Macie is a security service that uses machine learning to discover personally identifiable information in your S3 buckets. It also provides you with dashboards and alerts that show how your private data is being accessed. Reference: What Is Amazon Macie?

You run a cat video website, and have an EC2 instance running a webserver that serves out the video files to your visitors who watch them. On the same EC2 instance, you also perform some transcoding to make the files smaller or to change their resolution based on what the website visitor is requesting. You have found that as your visitors grow, this is getting harder to scale. You would like to add more web servers and some autoscaling, as well as moving the transcoding to its own autoscaling group, so it autoscales up when you have a lot of videos to convert. You are also running out of disk space fairly frequently and would like to move to a storage system that will also scale with the least amount of effort and change in the way your entire system works. What do you do?

Implement EFS (Elastic File System). Mount your volume to each server for serving content and transcoding. This is a great case for using EFS. With minimal effort you can move your cat video website to an automatically scaling storage solution which can be used by all of your EC2 instances.

Your manager has asked you to investigate different deployment types for your application. You currently use Elastic Beanstalk, so you are restricted to what that service offers. Your manager thinks it would be best to only deploy to a few machines at a time — so if there are any failures, only the few that have been updated will be affected, making a rollback easier. To start with, your manager would like the roll-outs to occur on 2 machines at a time but would like the option to increase that to 3 at a time in the future. Deployments are done during quiet periods for your application, so reduced capacity is not an issue. Which deployment type should you implement, and how can you get the most reliable indication that deployments have not introduced errors to the servers?

Implement Elastic Beanstalk's rolling deployment policy, set to roll based on health with a batch size of 2. Use Elastic Beanstalk's enhanced health reporting to analyze web logs and operating system metrics on the target servers. The rolling deployment policy based on health with a batch size of 2 is the best solution, and enhanced health reporting gives the best guarantee that a deployment was successful.

You have multiple teams of developers, and at the moment they all have the ability to start and stop any EC2 instance they can see in the EC2 console, which is all of them. You would really like to implement some security measures so they can only start and stop the instances based on their cost center. What AWS features would you use to achieve this?

Implement tags, and restrict access with a policy attached to your developer role that compares ec2:ResourceTag/CostCenter with ${aws:PrincipalTag/CostCenter} and allows the action only if they match. You can simplify user permissions to resources by using tags and policies attached to roles. aws:PrincipalTag is a tag that exists on the user or role making the call, and ec2:ResourceTag is a tag that exists on an EC2 resource. In this case, we want the CostCenter tag on the resource to match the CostCenter tag assigned to the developer.
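
A sketch of such a policy created with boto3 (the policy name is illustrative; the condition is the documented tag-matching pattern):

```python
import json
import boto3

iam = boto3.client("iam")

# A developer may start or stop an instance only when the instance's
# CostCenter tag equals the CostCenter tag on the calling principal.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["ec2:StartInstances", "ec2:StopInstances"],
        "Resource": "arn:aws:ec2:*:*:instance/*",
        "Condition": {
            "StringEquals": {
                "ec2:ResourceTag/CostCenter": "${aws:PrincipalTag/CostCenter}"
            }
        },
    }],
}

iam.create_policy(
    PolicyName="StartStopByCostCenter",
    PolicyDocument=json.dumps(policy),
)
```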

The marketing team of Mega-Widgets Inc. has recently released an advertising campaign in the APAC region, in addition to their home market of Europe. The CTO has told you that they have started receiving complaints from Australian customers that the site located in eu-west-1 is slow, and they think by utilizing the existing disaster recovery infrastructure in ap-southeast-2, they can speed up response time for these customers. As time is of the essence, they have asked you to assist. You have already confirmed that data is synchronizing between both sites. What is the quickest way to reduce the latency for these customers?

In Route 53, create two records for www.mega-widgets.com: a latency record pointing to the European IP and a latency record pointing to the Asia Pacific IP. As you are not told otherwise, you can assume that the European site is fast for people within Europe and therefore you can assume a high latency is the cause of the problem. Route 53 latency-based routing sounds like the perfect candidate when utilizing the disaster recovery site as the main site for Asia Pacific. CloudFront could offer some help if you only had one site and the Edge locations could cache some content, but in this case the CTO wanted to use the disaster recovery infrastructure already in place. Reference: Amazon Route 53 FAQs Reference: Amazon Route 53 Tutorials

Your application is being built with CodeBuild. You would like your artifacts to be stored with the region they were built in (and you would like to add this information to the filename) so you can easily locate them later. The CodeBuild documentation refers to an environment variable called $(AWS_REGION) that you could use. How and where will you implement this?

In the artifacts section of the buildspec.yml file, specify name: build-$(AWS_REGION). The artifact name is evaluated at build time, so the Region is appended to the filename automatically.

When an Auto Scaling group is scaling out, and a lifecycle hook is applied, what final state will an instance ultimately reach?

InService During scale out and application of a lifecycle hook, the instance will proceed through Pending:Wait, Pending:Proceed, and finally InService.

Your company develops an online shopping platform and would like to implement a way to recommend products to customers based on previous products they have looked at. To do this, you want to record their click-stream, or the sequence of links they've clicked on as they navigate your website. At any one time, you can have thousands of users using your shopping platform. Which architecture is the best option to meet your requirements?

Ingest data with Kinesis Data Streams. Group user requests with Kinesis Data Analytics. Process and store the data with Lambda, Kinesis Firehose, and S3. Ingesting is done with Kinesis Data Streams; grouping user requests into sessions is done with Kinesis Data Analytics; data processing and storage are done with Lambda, Firehose, and S3. See the reference link and its high-level solution overview for more information. Reference: Create real-time click-stream sessions and run analytics with Amazon Kinesis Data Analytics, AWS Glue, and Amazon Athena

We have EC2 instances in our CloudFormation stack, and they are causing stack creation to fail. What can we do to get detailed logging information for the EC2 instances?

Install the CloudWatch Logs Agent on the EC2 instances. With the CloudWatch Logs agent installed, logging data can be captured and streamed to CloudWatch for review

Your chief security officer would like some assistance in producing a graph of failed logins to your Linux servers, located in your own data centers and EC2. The graph would then be used to trigger alert emails to investigate the failed attempts once it crosses a certain threshold. What would be your suggested method of producing this graph in the easiest way?

Install the CloudWatch Logs agent on all Linux servers, stream your logs to CloudWatch Logs, and create a CloudWatch Logs metric filter. Create the graph on your dashboard using the metric. The easiest way to produce these graphs would be to ensure you are streaming your logs to CloudWatch Logs, create a metric filter, and then visualize the graph on your dashboard.
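
A sketch of the metric filter in boto3 (the log group name and the "Failed password" pattern are illustrative of a typical Linux auth log):

```python
import boto3

logs = boto3.client("logs")

# Count failed SSH logins in the streamed syslog; the resulting custom
# metric can be graphed on a dashboard and drive an alarm that emails
# the security team via SNS.
logs.put_metric_filter(
    logGroupName="/linux/var/log/secure",
    filterName="failed-ssh-logins",
    filterPattern='"Failed password"',
    metricTransformations=[{
        "metricName": "FailedSSHLogins",
        "metricNamespace": "Security",
        "metricValue": "1",
    }],
)
```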

You have a new website design you would like to test with a small subset of your users. If the test is successful, you would like to increase the number of users accessing the new site to half your users. If that's successful and your infrastructure is able to scale up correctly, you would like to completely roll over to the new design and then decommission the servers hosting the old design. Which of these methods should you choose?

Install the new website design in a new Auto Scaling group. Use a weighted routing policy in Route 53 and use it to choose the percentage of users you would like during different testing phases. Start with 5%, then 50%, and end with 100% of traffic going to the new Auto Scaling group if tests are successful. Decommission the old EC2 servers. A weighted routing policy combined with an Auto Scaling group will meet your requirements and will continue to scale if your tests are successful and you completely roll over to the new design. Reference: Weighted Routing

If you need to process streaming data in real time, what is the most common AWS service for this use case?

Kinesis When the requirement is to process streaming data in real time, Kinesis must be strongly considered. "Real time" usually points to Kinesis

Which capability of Amazon Kinesis would you use to load data streams into Amazon Redshift for further analysis?

Kinesis Data Firehose Kinesis Data Firehose can load real-time streams into data lakes, warehouses, and analytics services, such as Amazon Redshift and S3

A security company would like to stream video data in real time for machine learning analysis and facial recognition. What capability can be used to stream the video?

Kinesis Video Streams Kinesis Video Streams can stream video from connected devices to AWS for analytics, machine learning (ML), playback, and other processing

Your organization currently runs all of its applications on in-house virtual machines. Your CEO likes the direction AWS is taking the cloud industry and has suggested you look into what it would take to migrate your servers to the cloud. All of your servers are built and configured automatically using Chef. Which AWS services will you transition your first batch of servers to AWS in the FASTEST possible way while maintaining a centralized configuration management?

Leverage the AWS Server Migration Service to migrate your instances across as AMIs into AWS. Use a custom configuration script in the Server Migration Service console to register the instances to Chef. While AWS OpsWorks allows you to use your own custom Chef cookbooks with your AWS resources, it is not possible to simply import your existing servers into OpsWorks. The fastest solution here is to use the AWS Server Migration Service to bring your existing Chef-managed machines into EC2 and to manage them with Chef the same way you have been on your in-house system.

You've been given some security action items from an audit. Which security red flag would come from Trusted Advisor?

MFA not set on the root account This is one of the 7 core checks provided in all support plans

An application team is using Elastic Beanstalk with an RDS database. The dev environments and the databases are short term and can quickly be re-created if necessary. But when the application is moved to production, it will be long term. What step can they take with the RDS database to ensure its durability?

Make sure the RDS database is created outside of the Elastic Beanstalk environment. This will decouple the database from Elastic Beanstalk. If the Elastic Beanstalk environment is deleted, the RDS database will remain

Your developers are currently storing their code in a private GitHub repository; however, your organization has recently introduced rules that everything must live within your AWS environment, so you need to find an alternative. What do you suggest that will both continue to work with your existing developer environments and also support a possible future transition into a CI/CD environment?

Move your repositories to CodeCommit. CodeCommit is the best solution because it's compatible with CI/CD supporting services, such as CodeBuild and CodePipeline.

Your organization has been using AWS for 12 months. Currently, you store all of your custom metrics in CloudWatch. Per company policy, you must retain your metrics for three years before it is okay to discard or delete them. Is CloudWatch suitable?

No, CloudWatch only retains metrics for 15 months, after which they expire. If you need to keep metrics for longer, you must pull them out of CloudWatch using the API and store them somewhere else, such as a database.

Your accounting team is a bit concerned about your AWS bill. It has been growing steadily over the past 18 months, and they would like to know if there's any way you could look into lowering it, either by optimizing the services you are using or removing things that aren't required anymore. You already have a Business Support plan with AWS, so you could make use of that to see where you could save some money. What would be the best immediate course of action?

Open your Trusted Advisor dashboard, and look at the cost optimization suggestions. AWS accounts with a Business Support plan have full access to all Trusted Advisor recommendations, including cost optimization. This would be the best thing to look at first. Reference: Trusted Advisor

Your organization would like to implement auto scaling of your servers, so you can spin up new servers during times of high demand and remove servers during the quiet times. Your application load mainly comes from memory usage, so you have chosen that as your scaling metric. What do you have to do next?

Publish a custom memory utilization metric to CloudWatch, as there isn't one by default. Memory utilization is not a default CloudWatch metric, so you will have to publish a custom metric first.
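
A minimal sketch of publishing such a metric from an instance with the CLI; in practice the CloudWatch agent can collect memory metrics for you, and the namespace here is made up:

    # Cron-friendly script: push current memory utilization as a custom metric
    # (IMDSv1 lookup shown for brevity; requires cloudwatch:PutMetricData)
    USED=$(free | awk '/Mem:/ {printf "%.1f", $3/$2*100}')
    INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
    aws cloudwatch put-metric-data \
        --namespace Custom/EC2 \
        --metric-name MemoryUtilization \
        --dimensions InstanceId=$INSTANCE_ID \
        --unit Percent \
        --value "$USED"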

Your customers have recently reported that your Java web application stops working sometimes. Your developers have researched the issue and noticed there appears to be a memory leak that causes the software to eventually crash. They have fixed the issue, but your CEO wants to ensure it never happens again. Which method could help you detect future leaks so you're able to fix the issue before the application stops?

Push your memory usage to a custom CloudWatch metric, and set it to alert your developers if it crosses a certain threshold. Pushing the custom CloudWatch metric is a good idea that will ensure developers see the alert.
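
A sketch of the alarm, assuming the custom metric above is being published; the alarm name and SNS topic are hypothetical:

    # Alert developers when average memory use stays above 80% for 15 minutes
    aws cloudwatch put-metric-alarm \
        --alarm-name java-app-memory-leak \
        --namespace Custom/EC2 \
        --metric-name MemoryUtilization \
        --statistic Average \
        --period 300 \
        --evaluation-periods 3 \
        --threshold 80 \
        --comparison-operator GreaterThanOrEqualToThreshold \
        --alarm-actions arn:aws:sns:us-east-1:123456789012:dev-alerts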

You have been tasked with implementing resource tagging throughout the company AWS account. What are the common tagging strategies to help identify and manage AWS resources?

Resource organization, cost allocation, automation, and access control.

You are working with CloudFormation, and the rollback of a stack has failed. What should you check to diagnose the problem?

Run drift detection. A stack that has drifted can encounter issues during rollback.
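
Drift detection can be run from the CLI; the stack name is illustrative:

    # Start drift detection, then check its status and inspect per-resource drift
    aws cloudformation detect-stack-drift --stack-name my-stack
    aws cloudformation describe-stack-drift-detection-status \
        --stack-drift-detection-id <id-returned-above>
    aws cloudformation describe-stack-resource-drifts --stack-name my-stack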

Your attempt to delete a CloudFormation stack has failed because of one resource. What step can you take to delete the stack?

Specify the offending resource in the RetainResources parameter of the delete-stack call, and then delete the stack. This will bypass the problem resource and will allow the other resources, and ultimately the stack, to be deleted.
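
For example (the logical ID is hypothetical, and RetainResources is only accepted for a stack already in the DELETE_FAILED state):

    # Retry the delete, keeping the resource that blocked the first attempt
    aws cloudformation delete-stack \
        --stack-name my-stack \
        --retain-resources ProblemBucket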

You are part of a development team that has decided to compile release notes directly out of a CodeCommit repository, the version control system in use. This step is to be automated as much as possible. Standard GitFlow is used as the branching model with a fortnightly production deploy at the end of a sprint and occasional hotfixes. Select the best approach.

Set up a CloudWatch Events or EventBridge rule to match CodeCommit repository events of type CodeCommit Repository State Change. Look for referenceCreated events with a referenceType of tag, which are emitted when a production release is tagged after a merge into master. In a Lambda function, use the CodeCommit API to retrieve that release commit message and store it in an S3 bucket with static website hosting enabled. Following GitFlow's standard release procedures, a release branch is merged into master, and that commit on master must be tagged for easy future reference to this historical version. Both release and hotfix branches are temporary branches and would require ongoing updates of the CodeCommit trigger. Feature branches are used to develop new features for the upcoming or a distant future release and might be discarded (e.g., in case of a disappointing experiment). CodeCommit does not provide a generate-release-notes feature.
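
A sketch of the rule wiring, with the rule and function names invented for illustration:

    # Match only tag-creation events from the repository, then invoke the function
    aws events put-rule --name release-notes \
        --event-pattern '{"source":["aws.codecommit"],"detail-type":["CodeCommit Repository State Change"],"detail":{"event":["referenceCreated"],"referenceType":["tag"]}}'
    aws events put-targets --rule release-notes \
        --targets 'Id=1,Arn=arn:aws:lambda:us-east-1:123456789012:function:PublishReleaseNotes'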

You are working on a new version of a Lambda function and would like to test this version on a small subset of users before introducing the new version to all users. What steps can you take?

Set up a canary deployment for AWS Lambda. Publish the new version, create a Lambda alias, and configure a routing weight so the new version receives 5% of traffic. Canary deployments can be used to test an application with a small group of users and then promote the application version to production after testing completes.
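
A minimal sketch with the CLI, assuming version 2 is the new version under test and the function and alias names are hypothetical:

    # Publish the new version, then route 5% of the alias traffic to it
    aws lambda publish-version --function-name my-function
    aws lambda create-alias \
        --function-name my-function \
        --name live \
        --function-version 1 \
        --routing-config '{"AdditionalVersionWeights":{"2":0.05}}'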

You need to develop a CloudFormation template that allows the user to specify if the environment created is a dev or prod environment. Based on that decision, the template needs to create appropriately sized resources (larger servers for prod). How can you implement this in CloudFormation?

Set up a parameter in the template that provides a dropdown for the user to select dev or prod. Use a condition function based on the environment selection to create appropriately sized servers. Based on the parameter selection, the condition function will direct the creation of appropriately sized resources for either a dev or a prod environment.
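
A trimmed-down template illustrating the pattern; the instance types and AMI ID are placeholders:

    # template.yml (excerpt)
    Parameters:
      EnvType:
        Type: String
        AllowedValues: [dev, prod]   # rendered as a dropdown in the console
    Conditions:
      IsProd: !Equals [!Ref EnvType, prod]
    Resources:
      WebServer:
        Type: AWS::EC2::Instance
        Properties:
          InstanceType: !If [IsProd, m5.large, t3.micro]
          ImageId: ami-0123456789abcdef0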

Your manager wants to implement a CI/CD pipeline for your new cloud-native project using AWS services and would like you to ensure it is performing the best automated tests that it can. She would like fast and cheap testing, where bugs can be fixed quickly. She suggests starting with individual units of your software and wants you to test each one, ensuring they perform how they are designed to perform. What kind of tests do you suggest implementing, and what part of your CI/CD pipeline will you implement them with?

Start by creating a code repository in CodeCommit for your software team to perform source control. Build some unit tests for the existing code base, and ensure your developers produce unit tests as early as possible for software as it is built. Implement the execution of unit testing using CodeBuild. Unit tests are built to test individual units of your software and quickly identify bugs. These can be implemented with AWS CodeBuild.

You need to create a process to run multiple Lambda functions, and some of those functions need to run in parallel. You also have a few points where manual approval is required. What service can be used for this use case?

Step Functions. Step Functions can manage running Lambda functions in parallel as well as stringing them together in series. Additionally, Step Functions can integrate business processes that require manual intervention.
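
A pared-down state machine illustrating both patterns; the Lambda ARNs and names are placeholders, and the manual step uses the .waitForTaskToken callback pattern:

    {
      "Comment": "Run two Lambda functions in parallel, then pause for manual approval",
      "StartAt": "InParallel",
      "States": {
        "InParallel": {
          "Type": "Parallel",
          "Branches": [
            {"StartAt": "TaskA", "States": {"TaskA": {"Type": "Task",
              "Resource": "arn:aws:lambda:us-east-1:123456789012:function:TaskA", "End": true}}},
            {"StartAt": "TaskB", "States": {"TaskB": {"Type": "Task",
              "Resource": "arn:aws:lambda:us-east-1:123456789012:function:TaskB", "End": true}}}
          ],
          "Next": "ManualApproval"
        },
        "ManualApproval": {
          "Type": "Task",
          "Resource": "arn:aws:states:::lambda:invoke.waitForTaskToken",
          "Parameters": {
            "FunctionName": "NotifyApprover",
            "Payload": {"taskToken.$": "$$.Task.Token"}
          },
          "End": true
        }
      }
    }

The execution pauses at ManualApproval until the approver's action calls SendTaskSuccess with that token.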

You have a web application in development that accesses a MySQL database in Amazon RDS. Your organization wants to securely store the credentials to the database so only the developers can retrieve them. They would also like the ability to rotate the passwords on a schedule. How would you best store the credentials to ensure security and make sure no one else in your organization has access to them?

Store the passwords in AWS Secrets Manager, and use an AWS IAM policy to control access so only members of the developer group can retrieve the credentials. Ensure your web application is also granted permission to retrieve the secret. AWS Secrets Manager can use IAM policies to restrict access and can automatically rotate passwords on a schedule. Reference: AWS Secrets Manager
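
A sketch of the setup; the secret name, credentials, and rotation function are illustrative:

    # Store the credentials, then enable 30-day automatic rotation
    aws secretsmanager create-secret \
        --name prod/mysql/app \
        --secret-string '{"username":"appuser","password":"CHANGE_ME"}'
    aws secretsmanager rotate-secret \
        --secret-id prod/mysql/app \
        --rotation-lambda-arn arn:aws:lambda:us-east-1:123456789012:function:MySQLRotation \
        --rotation-rules AutomaticallyAfterDays=30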

You have determined that many of the EC2 instances in your organization need patching. You want to do that patching overnight at the least disruptive time. What AWS capability can you use?

Systems Manager Maintenance Windows. Systems Manager Maintenance Windows can be used to schedule patches at the most appropriate times.
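
For example (the window name and schedule are illustrative):

    # Create a 3-hour window that opens at 02:00 UTC every night
    aws ssm create-maintenance-window \
        --name nightly-patching \
        --schedule "cron(0 2 ? * * *)" \
        --duration 3 \
        --cutoff 1 \
        --allow-unassociated-targets
    # Then register targets and an AWS-RunPatchBaseline task against the window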

You want to remove sensitive information, such as database passwords, from code. What tool can you use to do this at no cost?

Systems Manager Parameter Store. You can use Parameter Store, at no additional cost, to store secrets.
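
For example (the parameter name and value are placeholders; SecureString encrypts the value with KMS):

    aws ssm put-parameter \
        --name /myapp/db/password \
        --type SecureString \
        --value 'example-only'
    # Applications read it back with decryption at run time
    aws ssm get-parameter \
        --name /myapp/db/password \
        --with-decryption \
        --query Parameter.Value --output text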

You are managing large fleets of EC2 instances, and you would like to be able to automate common configuration tasks on EC2 instances. What AWS tool can you use to do this?

Systems Manager Run Command. Systems Manager Run Command enables you to automate common administrative tasks and perform ad hoc configuration changes at scale.
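
A sketch of an ad hoc command targeted by tag; the tag and command are made up:

    # Run a shell command on every instance tagged Role=web
    aws ssm send-command \
        --document-name "AWS-RunShellScript" \
        --targets "Key=tag:Role,Values=web" \
        --parameters 'commands=["sudo yum -y update"]'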

Due to the design of your application, your EC2 servers aren't treated as cattle, as advised in the cloud world, but as pets. As such, you need DNS entries for each of them. Managing each DNS entry is taking a long time, especially when you have lots of servers — some of which may last a day, a week, or a month. You don't want your Route 53 records to be messy, and you would prefer some kind of automation to add and remove them. Which method would you choose to solve this in the best way?

Tag your instance with the DNS record required. Deploy a Lambda function which can add or remove DNS records in Route 53 based on the DNS tag. Use a CloudWatch Events rule to monitor when an instance is started or stopped and trigger the Lambda function. Tagging your instance with the required DNS record is a great way to help you automate the creation of Route 53 records. A Lambda function can be triggered from a CloudWatch Events EC2 start/stop event and can add and remove the Route 53 records on your behalf. This will meet your requirements and automate the creation and cleanup of DNS records. Reference: Building a Dynamic DNS for Route 53 Using CloudWatch Events and Lambda
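
A sketch of the event wiring; the rule and function names are invented:

    # Fire the DNS-sync function whenever an instance starts or stops
    aws events put-rule --name ec2-dns-sync \
        --event-pattern '{"source":["aws.ec2"],"detail-type":["EC2 Instance State-change Notification"],"detail":{"state":["running","stopped","terminated"]}}'
    aws events put-targets --rule ec2-dns-sync \
        --targets 'Id=1,Arn=arn:aws:lambda:us-east-1:123456789012:function:Route53DnsSync'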

Before instances are terminated in an Auto Scaling group, you want to retrieve log files from the instance. What lifecycle hook state will pause termination and allow time to retrieve the log files?

Terminating:Wait. A lifecycle hook can put an instance in a Terminating:Wait state, which will allow time for log file retrieval.
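
For example (the hook name, group name, and timeout are illustrative):

    # Pause terminating instances for up to 15 minutes so logs can be copied off
    aws autoscaling put-lifecycle-hook \
        --lifecycle-hook-name retrieve-logs \
        --auto-scaling-group-name my-asg \
        --lifecycle-transition autoscaling:EC2_INSTANCE_TERMINATING \
        --heartbeat-timeout 900
    # After the logs are copied, release the instance with:
    # aws autoscaling complete-lifecycle-action --lifecycle-action-result CONTINUE ...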

You have decided to install the AWS Systems Manager agent on both your on-premises servers and your EC2 servers. This means you will be able to conveniently centralize your auditing, access control, and provide a consistent and secure way to remotely manage your hybrid workloads. This also results in all your servers appearing in your EC2 console, not just the servers hosted on EC2. How are you able to tell them apart in the console?

The IDs of hybrid instances are prefixed with 'mi-', while the IDs of EC2 instances are prefixed with 'i-'. Hybrid instances with the Systems Manager agent installed and registered to your AWS account will appear with the 'mi-' prefix in the EC2 console.
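
You can also list both kinds from the CLI:

    # Lists every managed instance; hybrid ones show IDs like mi-0123456789abcdef0
    aws ssm describe-instance-information \
        --query "InstanceInformationList[].[InstanceId,PingStatus,PlatformName]" \
        --output table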

Your team has been working with CloudFormation for a while and has become quite proficient at deploying and updating your stacks and their associated resources. One morning, however, you notice that your RDS MySQL database is completely empty. You track this down to a simple port change request that came through, asking if the default MySQL port could be changed for security reasons. What do you suspect happened?

The Port attribute for AWS::RDS::DBInstance has the update requirement of Replacement. The database was therefore replaced when the port changed, and the data will have to be restored from backups.

You need to create an AppSpec file for a new deployment. You have several code editors and would like to use the one best suited for this task. What is the format for AppSpec files?

YAML or JSON. The application specification file (AppSpec file) is a YAML- or JSON-formatted file used by CodeDeploy to manage a deployment.

Your development team has grown from 2 developers to 7 developers and is expected to double in size in the next 6 months. Developers are already running into conflicts when multiple people need to work on the same file at the same time. What technique can you use in CodeCommit to allow developers to work on an isolated copy of the repository locally for specific tasks (such as feature development and bug fixes) and then submit files back to the main repository while avoiding conflicts?

The developers can create a branch for feature development (or bug fixes), and then merge back into the main branch when the work is complete, avoiding conflicts on the main line of development.
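
The day-to-day Git workflow looks like this; the branch name is an example:

    git checkout -b feature/search-page   # isolated working branch
    # ...edit and commit as usual...
    git push origin feature/search-page
    # open a pull request in the CodeCommit console, review, then merge into main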

You are preparing to build your Java source code using CodeBuild. You have decided to package the buildspec.yml file with your source code. What requirements must you meet to package the buildspec with the source code?

By default, when you include a buildspec as part of the source code, the file must be named buildspec.yml and placed in the root of your source directory.
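
A minimal example of what that file might contain for a Java build; the phases and commands are illustrative:

    # buildspec.yml, in the root of the source directory
    version: 0.2
    phases:
      build:
        commands:
          - mvn -q package
    artifacts:
      files:
        - target/*.jar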

You have configured AWS Organizations for a large company with multiple AWS accounts. Who pays for usage incurred by users under an AWS member account in the organization?

The management account owner. The owner of the management account is responsible for paying for all usage, data, and resources used by the accounts in the organization.

You have set up a new CodeCommit repository for your company's development team. All but one member can connect to it from an SSH command-line terminal on their local machines, and you are now troubleshooting the problem for that user. What are some of the possible reasons for this not working, and how can you resolve the issue?

The user might have previously configured their local computer to use the credential helper for CodeCommit. In this case, edit the .gitconfig file to remove the credential helper information before using Git credentials. On macOS, you might need to clear cached credentials from Keychain Access. The simplest connection method to use with CodeCommit is HTTPS with Git credentials, which is supported by most IDEs and development tools; however, the SSH protocol is also supported, and if you use SSH connections you can connect to CodeCommit without installing the AWS CLI.
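
For example, on the affected user's machine:

    # Remove any previously configured credential helper entries
    git config --global --unset-all credential.helper
    # On macOS, also delete stale "git-codecommit" items in Keychain Access,
    # then test the SSH connection (region endpoint shown is an example):
    ssh git-codecommit.us-east-1.amazonaws.com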

You are managing OpsWorks stacks and need to be sure that your instances can scale based on demand. What automatic instance scaling options are available with OpsWorks Stacks?

Time-based scaling and load-based scaling. With automatic load-based scaling, you can set thresholds for CPU, memory, or load to define when additional instances will be started. With automatic time-based scaling, you can define at what time of the day instances will be started and stopped.

Your organization has multiple weather stations installed around your state. You would like to move the data from your old relational database to a newer and hopefully faster NoSQL database. Which AWS solution would you choose for storing and migrating the data, and which keys should you use for your weather station ID and the timestamp of the reading?

Use AWS Database Migration Service to migrate data to an Amazon DynamoDB table. Select your RDS instance as the source and DynamoDB as the destination. Create a task with an object-mapping rule to copy the required relational database tables to DynamoDB. Within your object-mapping, set your weather station ID as the partition key of your table and timestamp of the reading as the sort key. DynamoDB is the AWS NoSQL database. The weather station ID will be your partition key and the timestamp will be your sort key. Reference: Choosing the Right DynamoDB Partition Key

You need to quickly test a proof of concept that your boss has given you. She's given you a ZIP file containing a PHP web application. You want to get it running in an AWS environment as fast as possible; however, there's also a dependency on a library that must be installed as well. The library is available from a Yum/APT repository. Which service do you choose, and how do you ensure dependencies are installed?

Use AWS Elastic Beanstalk for deployment, and install dependencies with .ebextensions. AWS Elastic Beanstalk is a quick way to test the proof of concept without the need to configure the web server. Required libraries can be installed quickly and automatically using .ebextensions.
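
A sketch of such a config file; the package name is a stand-in for the actual library:

    # .ebextensions/01-packages.config, committed inside the application ZIP
    packages:
      yum:
        some-required-library: []   # replace with the real Yum package name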

ADogGuru is developing a new Node.js application that will require some servers, a MySQL database, and a load balancer. You would like to deploy and maintain this in the easiest way possible, as well as have the ability to configure it with basic configuration management code, which is a new concept for ADogGuru. You also need to allow non-technical people to deploy software to servers and for managers to administer access. A key component of this setup will also be the ability to roll back deployments, should you need to. Which service(s) and configuration management system will you choose to achieve this with the LEAST operational overhead?

Use AWS OpsWorks to create an application using pre-built layer templates to create your servers, MySQL RDS instances, and load balancer. Use recipes running in Chef Solo for configuration management. Grant your non-technical staff the Deploy permission level and the administrator the Manage permission level. OpsWorks can achieve all of the stated requirements, as deployments are possible with a few clicks in the UI and can be rolled back. It also supports Chef Solo as a built-in configuration management system, which is the recommended solution for those not already using Puppet or Chef. Running a separate Chef Automate instance is unnecessary overhead and cost when Chef Solo will suffice.

Your organization has dozens of AWS accounts, each owned and run by a different team that pays for its own usage directly in its account. In a recent cost review, it was noticed that your teams are all using on-demand instances. Your CTO wants to take advantage of any pricing benefits available to the business from AWS. Another issue that keeps arising involves authentication: it's difficult for your developers to use and maintain their logins across all of their accounts, and it's also difficult for you to control their access. What's the simplest solution that will solve both issues?

Use AWS Organizations to keep your accounts linked and billing consolidated. Create a Billing account for the Organization and invite all other team accounts into the Organization in order to use Consolidated Billing. You can then obtain volume discounts for your aggregated EC2 and RDS usage. Use AWS Single Sign-On to allow developers to sign in to AWS accounts with their existing corporate credentials and access all of their assigned AWS accounts and applications from one place. AWS Single Sign-On allows you to centrally manage all of your AWS accounts managed through AWS Organizations, and it will also allow you to control access permissions based on common job functions and security requirements. Resource: AWS Single Sign-On.

Your CEO has heard how an ELK stack (Elasticsearch, Logstash, and Kibana) would improve your monitoring, troubleshooting, and ability to secure your AWS environment. Without your consultation, they demand you get one up and running as soon as possible using whatever AWS services you need to use. How do you go about it?

Use the Amazon managed Elasticsearch service to create your ELK stack. Managing an ELK stack yourself is hard. Used together, the components of the ELK stack let you aggregate logs from all your systems, analyze them for problems, monitor system use, and find opportunities for improvement. The data analysis and visualization ELK provides are hard to beat. Using a managed service for ELK saves you from spending time keeping the system up and running, so you can focus on delivering great products to customers. Resources: The benefits of the ELK stack without the operational overhead.

Your organization runs a large amount of workloads in AWS and has automated many aspects of its operation, including logging. As the in-house DevOps engineer, you've received a ticket asking you to log every time an EC2 instance state changes. Normally you would use CloudWatch Events or EventBridge for something like this, but CloudWatch Logs aren't a valid target in CloudWatch Events. How will you solve this?

Use CloudWatch Events or EventBridge, but use a Lambda function target. Write a Lambda function that will perform the logging for you. CloudWatch Events or EventBridge can use a Lambda function as a target, which will solve this issue.

You are developing a completely serverless application and store your code in a Git repository. Your CEO has instructed you that under no circumstances are you allowed to spin up an EC2 instance. In fact, he's blocked access to ec2:* company-wide with an IAM policy. He does, however, still want you to completely automate your development process — you just can't use servers to do so. Which AWS services will you make use of to meet these requirements?

Use Lambda for your compute functions, and use CodeDeploy to deploy the functions for you. Store your code in CodeCommit, and use CodePipeline to automatically deploy the functions when you commit your code to the repository.

You currently have a lot of IoT weather data being stored in a DynamoDB database. It stores temperature, humidity, wind speed, rainfall, dew point and air pressure. You would like to be able to take immediate action on some of that data. In this case, you want to trigger a new high or low temperature alert and then send a notification to an interested party. How can you achieve this in the most efficient way?

Use a DynamoDB stream and a Lambda function that triggers only on new temperature readings. Send an SNS notification if a record is broken. Using a DynamoDB stream is the most efficient way to implement this. It allows you to trigger the Lambda function only when a temperature reading is created, thus saving Lambda from triggering when other readings are created, such as humidity and wind speed. Reference: DynamoDB Streams Use Cases and Design Patterns
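
The wiring might look like this; the table and function names are illustrative, and the function itself inspects each new item and publishes to SNS when a temperature record is broken:

    # Enable the stream on the table, then subscribe the Lambda function to it
    aws dynamodb update-table \
        --table-name WeatherReadings \
        --stream-specification StreamEnabled=true,StreamViewType=NEW_IMAGE
    aws lambda create-event-source-mapping \
        --function-name TemperatureAlert \
        --event-source-arn <stream-arn-from-previous-command> \
        --starting-position LATEST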

You are contracting for APetGuru, an online pet food store. Their website works fine, but you really want to update the look and feel to be more modern. They have given you strict guidelines that their website cannot be offline at all or they will lose sales. You are thinking of using a rolling deployment so some of their servers are always online during the rollout. Just before you trigger the rollout, you receive a phone call asking if you can only install your updates on a few servers first for testing purposes. It is suggested that a few customers can be redirected to the updated site, and everyone else can still use the old site until you confirm the new one is working properly. Once you're happy with the operation of the updated website, you can complete the rollout to all servers. What do you do?

Use a canary deployment. This allows deployment to a few servers, where you can observe how the website is running while they still receive a small number of customers. If there's an error, it can be rolled back; otherwise, the rollout continues to the remaining servers. A canary deployment will allow you to complete the rollout based on the business requirements.

You've spent weeks building a production Linux AMI that your AutoScaling group is using. You realize there's one configuration change you need to perform on all newly created servers due to scaling. You don't have time to build a new AMI until next month due to other work requirements. What's the fastest way to implement the change?

Use a lifecycle hook to make the change during the creation of the new instance. A lifecycle hook will be the fastest way to implement the change until you are able to create a new AMI. Resource: Amazon EC2 Auto Scaling Lifecycle Hooks

You are working in a creative industry, where artists will upload various works of art to your servers before they are packaged up into a single archive each month. They are then sold to customers on a subscription basis where they receive the monthly collection from the artist(s) they support. Your servers are autoscaled, so it's difficult to know which one the artist will use to upload at any one time. You have also decided to serve the collections straight out of S3 instead of storing them locally. What's the most convenient way to manage the uploading and packaging of the art?

Use an EFS (Elastic File System) volume across all upload servers. Package the art into a single zip or tar file with a cron job at the end of the month and upload it to S3. An EFS volume will ensure all files are included across all of your upload servers. Resource: Amazon Elastic File System
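
A sketch of the monthly job, assuming the EFS file system is mounted on each upload server; the file system ID and bucket are placeholders:

    # Mount once per server (requires the amazon-efs-utils package)
    sudo mount -t efs fs-12345678:/ /mnt/uploads
    # Monthly cron job: package the month's uploads and ship them to S3
    tar -czf /tmp/collection-$(date +%Y-%m).tar.gz -C /mnt/uploads .
    aws s3 cp /tmp/collection-$(date +%Y-%m).tar.gz s3://my-art-collections/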

Your CloudFormation template is becoming quite large, so you have been tasked with reducing its size and coming up with a solution to structure it in a way that works efficiently. You have quite a few resources defined that are almost duplicates of each other, such as multiple load balancers using the same configuration. What CloudFormation features could you use to help clean up your template?

Use nested stacks. Nested stacks let you reuse common template patterns by defining a dedicated template for each service that is used multiple times and referencing it wherever it is needed.
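
For instance, a shared load balancer template can be instantiated several times from the parent; the template URL and parameter are placeholders:

    # Excerpt from the parent template: each near-duplicate load balancer
    # becomes a small reference to one shared template
    LoadBalancerA:
      Type: AWS::CloudFormation::Stack
      Properties:
        TemplateURL: https://s3.amazonaws.com/my-templates/alb.yml
        Parameters:
          Subnets: !Ref PublicSubnets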

Your manager wants you to look into including and provisioning a new authentication service in your CloudFormation templates. Generally this would be simple, but the authentication service is not an AWS service at all, which could make things difficult. The authentication service has an API you can use, which will hopefully ease the pain of solving this problem. How do you solve it?

Use the API to provision the service with an AWS Lambda function. Use a CloudFormation custom resource to trigger the Lambda function. This is an excellent use case for a CloudFormation custom resource.
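
In the template, the custom resource simply points at the Lambda function through ServiceToken; the type name, function, and properties are invented:

    # Template excerpt: CloudFormation invokes the function on create/update/delete
    AuthService:
      Type: Custom::AuthProvider
      Properties:
        ServiceToken: arn:aws:lambda:us-east-1:123456789012:function:ProvisionAuthService
        AdminEmail: admin@example.com   # arbitrary properties are passed to the function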

How can you use machine learning to predict capacity requirements based on historical data from CloudWatch to scale your Auto Scaling group?

Use the EC2 Auto Scaling predictive scaling capability. Predictive scaling uses machine learning to predict capacity requirements based on historical data from CloudWatch.
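
A sketch of such a policy; the group name and target value are illustrative, and ForecastOnly mode lets you validate the forecasts before letting them drive scaling:

    aws autoscaling put-scaling-policy \
        --auto-scaling-group-name my-asg \
        --policy-name cpu-predictive \
        --policy-type PredictiveScaling \
        --predictive-scaling-configuration '{
          "MetricSpecifications": [{
            "TargetValue": 50,
            "PredefinedMetricPairSpecification": {"PredefinedMetricType": "ASGCPUUtilization"}
          }],
          "Mode": "ForecastOnly"
        }'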

You have an antiquated piece of software running across your Linux virtual machines that you are attempting to move away from. Unfortunately, the process of extracting the data from this software is difficult and requires a lot of commands to be run on the command line. The final step of moving away from this software is executing 90 commands on each database server. Which AWS service will make this as painless as possible and be the easiest to set up?

Use the Systems Manager Run Command feature to run a shell script containing the 90 commands on each database server. This is the best way to achieve this: you can target just the database servers rather than running the commands manually on each one. Reference: Running Commands from the Console

You have built a serverless Node.js application that uses Lambda, S3, and a DynamoDB database. You'd like to log some simple metrics so you can possibly graph them at a later date or analyze the logs for faults or errors. However, you aren't able to install the CloudWatch Logs agent into a Lambda function. What is the easiest solution?

Use console.log in your Lambda functions to output logs straight to CloudWatch Logs. console.log works perfectly in Lambda and is the easiest way to log directly to CloudWatch Logs.

Which items are components of AWS Service Catalog?

Users, portfolios, and products. The key components of Service Catalog are portfolios, products, and users: a portfolio is a collection of products, a product is a service or application for end users, and users are regular account users in AWS.

Your organization is running a hybrid cloud environment with servers in AWS and in a local data center. You currently use a cronjob and some Bash scripts to compile your application and push it out to all of your servers via SSH. It's difficult to log, maintain, and extend to new servers when they are provisioned. You've been considering leveraging AWS Developer Tools to host code, build, test, and deploy your applications quickly and effectively. Will it suit your requirements?

Yes, CodeDeploy can deploy to any servers that can run the CodeDeploy agent. You can use CodeDeploy to deploy to both Amazon EC2 instances and on-premises instances. An on-premises instance is any physical device that is not an Amazon EC2 instance, can run the CodeDeploy agent, and can connect to public AWS service endpoints. You can use CodeDeploy to simultaneously deploy an application to Amazon EC2 instances in the cloud and to desktop PCs in your office or servers in your own data center.
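
Registering an on-premises server is a short sequence once the agent is installed; the names below are placeholders:

    # Register the data center machine and tag it so deployment groups can target it
    aws deploy register-on-premises-instance \
        --instance-name datacenter-web-01 \
        --iam-user-arn arn:aws:iam::123456789012:user/codedeploy-onprem
    aws deploy add-tags-to-on-premises-instances \
        --instance-names datacenter-web-01 \
        --tags Key=Environment,Value=datacenter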

You want to use Jenkins as the build provider in your CI/CD pipeline. Is this possible, and if so how would you implement it?

Yes, it's possible. CodePipeline will let you select Jenkins as a build provider when you are creating your pipeline. Before you connect your Jenkins instance, you need to set up the AWS CodePipeline plugin for Jenkins and configure it for your project; you can then configure a build stage that connects to your Jenkins instance.

Is there a way you can instantly, in real-time, be notified by Trusted Advisor if you have a public RDS Snapshot?

Yes, you can create a Lambda function, triggered by the Trusted Advisor Amazon RDS Public Snapshots check, to notify a Slack channel. Using Lambda with a Slack notification would guarantee real-time notification.

Which statement about launch templates and launch configurations is correct?

You can version launch templates. Launch configurations are immutable, but launch templates can be updated by creating new versions. With launch template versioning, you can create a version containing a subset of the full set of parameters and then reuse it as the basis for other versions of the same launch template.
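
For example, updating a launch template means publishing a new version; the names and values are illustrative:

    # Create version 2 based on version 1, changing only the instance type
    aws ec2 create-launch-template-version \
        --launch-template-name my-template \
        --source-version 1 \
        --launch-template-data '{"InstanceType":"m5.large"}'
    # Optionally make it the default version for new launches
    aws ec2 modify-launch-template \
        --launch-template-name my-template \
        --default-version 2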

Your current application uses an Aurora database; however, the speeds aren't as fast as you would like for your bleeding edge website. You are too deep into developing your application to be able to change the database you are using or to implement faster or larger read replicas. Your application is read-heavy, and the team has identified there are a number of common queries which take a long time to be returned from Aurora. What recommendations would you make to the development team in order to increase your read performance and optimize the application to use Aurora?

You should tell your team to optimize their application by ensuring that, where possible, they engineer the application to make a large number of concurrent queries and transactions, as this is one area that Aurora is optimized for. In addition, they should implement ElastiCache between your application and the database. ElastiCache will cache common queries by holding the results in memory instead of on disk and will speed up your application considerably.

You have just created a CodeCommit repository from the command line and want to view details on the repository. What command can you use?

aws codecommit get-repository. The get-repository command (prefaced by aws and codecommit to set the proper context for the command) will return details about a single repository. An example of this command would be:

    aws codecommit get-repository --repository-name MyDemoRepo

