AWS CSA Exam 1
A company has an application for sharing static content, such as photos. The popularity of the application has grown, and the company is now sharing content worldwide. This worldwide service has caused some issues with latency. What AWS services can be used to host a static website, serve content to globally dispersed users, and address latency issues while keeping costs under control? Choose two. 1. AWS CloudFormation 2. AWS Global Accelerator 3. EC2 placement group 4. S3 5. CloudFront
1. Amazon S3 is an object storage service built to store and retrieve any amount of data from anywhere on the Internet. It is a simple storage service that offers an extremely durable, highly available, and infinitely scalable data storage infrastructure at very low costs. 2. Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds, all within a developer-friendly environment. CloudFront is integrated with AWS - both physical locations that are directly connected to the AWS global infrastructure and other AWS services. CloudFront works seamlessly with services including AWS Shield for DDoS mitigation, Amazon S3, Elastic Load Balancing, and Amazon EC2 as origins for your applications, and Lambda@Edge to run custom code closer to customers' users and to customize the user experience. Lastly, if you use AWS origins such as Amazon S3, Amazon EC2, or Elastic Load Balancing, you do not pay for any data transferred between these services and CloudFront. (Incorrect) AWS Global Accelerator AWS Global Accelerator is a service that improves the availability and performance of your applications for local or global users, but it does not cache content at edge locations, so it is not the most cost-effective way to serve static content to globally dispersed users.
You manage 12 EC2 instances, and you need to have a central file repository that these EC2 instances can access. What would be the best possible solution for this? 1. Create a custom Lambda function behind API Gateway. Point your EC2 instances to the Lambda function when they need to access the centralized storage system. 2. Create an EFS volume and mount this to the EC2 instances. 3. Create a Route53 EBS storage record and create a network mount on your EC2 instances pointing at the Route53 alias record. 4. Attach a volume to multiple instances with Amazon EBS Multi-Attach.
1. Create an EFS volume and mount this to the EC2 instances. EFS provides centralized, shared file storage that many EC2 instances can mount concurrently. 2. Attach a volume to multiple instances with Amazon EBS Multi-Attach. Amazon EBS Multi-Attach lets you attach a single Provisioned IOPS (io1 or io2) volume to multiple instances in the same Availability Zone. (Incorrect) Create a custom Lambda function behind API Gateway. Point your EC2 instances to the Lambda function when they need to access the centralized storage system. Lambda is a serverless compute service and cannot be used for storage.
You work for a company that sequences genetics, and they run a high-performance computing (HPC) application that does things such as batch processing, ad serving, scientific modeling, and CPU-based machine learning inference. They are migrating to AWS and would like to create a fleet of EC2 instances to meet this requirement. What EC2 instance type should you recommend? 1. Amazon EC2 T4g instances 2. Amazon EC2 R6g 3. Amazon EC2 M6g 4. Amazon EC2 C7g
Amazon EC2 C7g instances are compute optimized and designed for compute-intensive workloads such as HPC, batch processing, ad serving, scientific modeling, and CPU-based machine learning inference. (Incorrect) Amazon EC2 T4g instances are powered by Arm-based, custom-built AWS Graviton2 processors and are used for a broad set of burstable general-purpose workloads. They are not suitable for sustained high CPU usage.
You work for an online education company that offers a 7-day unlimited access free trial for all new users. You discover that someone has been taking advantage of this and has created a script to register a new user every time the 7-day trial ends. They also use this script to download large amounts of video files, which they then put up on popular pirate websites. You need to find a way to automate the detection of fraud like this using machine learning and artificial intelligence. Which AWS service would best suit this? 1. Amazon Fraud Detector 2. Amazon Detective 3. Amazon Rekognition 4. Amazon Inspector
Amazon Fraud Detector is an AWS AI service that is built to detect fraud in your data. (Incorrect) Amazon Detective Using Amazon Detective, you can analyze, investigate, and quickly identify the root cause of potential security issues or suspicious activities. It is not used for fraud detection.
An application team has decided to leverage AWS for their application infrastructure. The application performs proprietary, internal processes that other business applications utilize for daily workloads. It is built with Apache Kafka to handle real-time streaming, from which virtual machines running the application in Docker containers consume the data. The team wants to leverage services that provide less overhead but cause the least disruption to coding and deployments. Which combination of AWS services would best meet the requirements? 1. Amazon SNS 2. Amazon MSK 3. Amazon Kinesis Data Streams 4. Amazon MQ 5. AWS Lambda 6. Amazon ECS Fargate
Amazon Managed Streaming for Apache Kafka (Amazon MSK) This service is meant for applications that currently use or will use Apache Kafka for messaging. It allows for the management of control plane operations in AWS. Reference: https://docs.aws.amazon.com/msk/latest/developerguide/what-is-msk.html Amazon ECS Fargate Fargate containers offer the least disruptive changes while also minimizing the operational overhead of managing the compute services. Reference: https://docs.aws.amazon.com/AmazonECS/latest/userguide/what-is-fargate.html
A database outage has been very costly to your organization. You have been tasked with configuring a more highly available architecture that needs to meet an aggressive RTO in case of disaster. You have decided to use an Amazon RDS for MySQL Multi-AZ instance deployment. How is the replication handled for Amazon RDS for MySQL with a Multi-AZ instance configuration? 1. You can configure an Amazon RDS for MySQL standby replica in a different Availability Zone and send traffic synchronously or asynchronously depending on your cost considerations 2. Amazon RDS for MySQL automatically provisions and maintains a synchronous standby replica in a different Region 3. Amazon RDS for MySQL automatically provisions and maintains a synchronous standby replica in a different Availability Zone 4. Amazon RDS for MySQL automatically provisions and maintains an asynchronous standby replica in a different Availability Zone
Amazon RDS for MySQL automatically provisions and maintains a synchronous standby replica in a different Availability Zone In a Multi-AZ DB instance deployment, Amazon RDS for MySQL automatically provisions and maintains a synchronous standby replica in a different Availability Zone. The primary DB instance is synchronously replicated across Availability Zones to a standby replica to provide data redundancy and minimize latency spikes during system backups. Running a DB instance with high availability can enhance availability during planned system maintenance. It can also help protect your databases against DB instance failure and Availability Zone disruption. Reference: Multi-AZ DB instance deployments. (Incorrect) Amazon RDS for MySQL automatically provisions and maintains a synchronous standby replica in a different Region The Amazon RDS for MySQL standby replica is established in a separate Availability Zone, not in a different Region.
Your company has gone through an audit with a focus on data storage. You are currently storing historical data in Amazon S3 Glacier Flexible Retrieval (formerly S3 Glacier). One of the results of the audit is that a portion of the infrequently accessed historical data must be rapidly retrieved upon request. Where can you cost-effectively store this data to meet this requirement? 1. S3 Standard 2. S3 Standard-IA 3. Store the data in EBS 4. Amazon S3 Glacier Instant Retrieval
Amazon S3 Glacier Instant Retrieval Amazon S3 Glacier Instant Retrieval delivers the lowest-cost storage for long-lived data that is rarely accessed and enables retrieval in milliseconds. (Incorrect) S3 Standard-IA While S3 Standard-IA is for data that is accessed less frequently and allows rapid access when needed, it is not the most cost-effective solution. S3 Standard-IA offers high durability, rapid access, high throughput, low latency, and a low per-GB storage and retrieval fee, and is good for less frequently accessed files. S3 storage classes can be configured at the object level, and a single bucket can contain objects stored across S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA, and S3 One Zone-IA. You can also use S3 Lifecycle policies to automatically transition objects between storage classes without any application changes.
You want to migrate an on-premises Couchbase NoSQL database to AWS. You need this database to be as resilient as possible, and you want to minimize any management of servers. Preferably, you would like to go serverless. Which database should you choose? 1. DynamoDB 2. Aurora DB 3. RDS 4. Elasticache
DynamoDB is a NoSQL database with a serverless deployment model, so there are no servers to manage. (Incorrect) Aurora DB is a relational (SQL-based) database and is not a suitable target for a Couchbase NoSQL workload without changes.
A small company hires a consultant to configure an AWS environment. The consultant begins working with the VPC and launching EC2 instances within the VPC. The initial instances will be placed in a public subnet. The consultant begins to create security groups. What is true of security groups? 1. Security groups act at the instance level, not the subnet level. 2. Security groups act at the VPC level, not the instance level. 3. Security groups are stateless. 4. Security groups are stateful. 5. Security groups act at the subnet level, not the instance level.
Security groups act at the instance level, not the subnet level. Security groups are stateful. The following are the basic characteristics of security groups for your VPC: There are quotas on the number of security groups you can create per VPC, the number of rules you can add to each security group, and the number of security groups you can associate with a network interface. You can specify allow rules but not deny rules. You can specify separate rules for inbound and outbound traffic. When you create a security group, it has no inbound rules; therefore, no inbound traffic originating from another host to your instance is allowed until you add inbound rules to the security group. By default, a security group includes an outbound rule that allows all outbound traffic; you can remove the rule and add outbound rules that allow specific outbound traffic only. If your security group has no outbound rules, no outbound traffic originating from your instance is allowed. Security groups are stateful: if you send a request from your instance, the response traffic for that request is allowed to flow in regardless of inbound security group rules, and responses to allowed inbound traffic are allowed to flow out regardless of outbound rules. https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html (Incorrect) Security groups act at the VPC level, not the instance level. Security groups are associated with network interfaces and therefore with instances; although a default security group is created when a VPC is provisioned, rules are still evaluated at the instance level. (Incorrect) Security groups are stateless. Security groups are stateful, not stateless.
You are designing an architecture that will house an Auto Scaling group of EC2 instances. The application hosted on the instances is expected to be extremely popular. Forecasts for traffic to this site predict very high traffic, and you will need a load balancer to handle tens of millions of requests per second while maintaining high throughput at ultra-low latency. To meet this high traffic requirement, you must select the correct load balancer type to front your Auto Scaling group. Which load balancer should you select? 1. You will need an Application Load Balancer to meet this requirement. 2. All the AWS load balancers meet and perform the same requirements. 3. You will need a Classic Load Balancer to meet this requirement. 4. You will need a Network Load Balancer to meet this requirement.
You will need a Network Load Balancer to meet this requirement. If extreme performance is needed for your application, AWS recommends that you use a Network Load Balancer. A Network Load Balancer operates at the connection level (Layer 4), routing connections to targets (Amazon EC2 instances, microservices, and containers) within Amazon VPC based on IP protocol data. It is ideal for load-balancing TCP and UDP traffic and can handle millions of requests per second while maintaining ultra-low latencies. It is optimized to handle sudden and volatile traffic patterns while using a single static IP address per Availability Zone. It is integrated with other popular AWS services such as Auto Scaling, Amazon EC2 Container Service (ECS), Amazon CloudFormation, and AWS Certificate Manager (ACM). https://aws.amazon.com/elasticloadbalancing/network-load-balancer/ (Incorrect) You will need an Application Load Balancer to meet this requirement. An Application Load Balancer operates at the application layer (Layer 7) and is not designed for this level of performance; a Network Load Balancer operates at the transport layer (Layer 4) and can provide the high performance necessary in this scenario.
You lead a small development team for an LLC. The team is beginning to plan the development of a full-stack web application leveraging Next.js with SSR on the frontend, and they want to leverage the AWS cloud for it. Unfortunately, nobody on the team knows AWS services and best practices, so they want to avoid complex infrastructure designs. Which AWS service could the development team use to simplify the development and deployment of their full-stack application? 1. AWS Lambda with Amazon RDS 2. Amazon EKS with DynamoDB 3. AWS Amplify 4. AWS AppSync
AWS Amplify AWS Amplify offers developers a set of tools for easily deploying full stack applications to AWS. It offers Git-based workflows with automated deployments for web applications, and it is especially suited for developers not as familiar with AWS who need to also leverage server-side rendering. Reference: https://docs.aws.amazon.com/amplify/latest/userguide/welcome.html (Incorrect) AWS AppSync AWS AppSync is a service that offers a managed GraphQL interface for real-time data calls.
You have joined a newly formed software company as a Solutions Architect. It is a small company, and you are the only employee with AWS experience. The owner has asked for your recommendations to ensure that the AWS resources are deployed in a way that proactively remains within budget. Which AWS service can you use to help ensure you do not have cost overruns for your AWS resources? 1. Cost Explorer 2. AWS Budgets 3. Inspector 4. Billing and Cost Management
AWS Budgets gives you the ability to set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount. You can also use AWS Budgets to set reservation utilization or coverage targets and receive alerts when your utilization drops below the threshold you define. And remember the keyword proactively. With AWS Budgets, we can be proactive about attending to cost overruns before they become a major budget issue at the end of the month or quarter. Budgets can be tracked monthly, quarterly, or yearly, and you can customize the start and end dates. You can further refine your budget to track costs associated with multiple dimensions, such as AWS service, linked account, tag, and others. Budget alerts can be sent via email and/or Amazon Simple Notification Service (SNS) topic. You can also use AWS Budgets to set a custom reservation utilization target and receive alerts when your utilization drops below the threshold you define. RI utilization alerts support Amazon EC2, Amazon RDS, Amazon Redshift, and Amazon ElastiCache reservations. Budgets can be created and tracked from the AWS Budgets dashboard or via the Budgets API. https://aws.amazon.com/aws-cost-management/aws-budgets/ Cost Explorer (Incorrect) - AWS Cost Explorer has an easy-to-use interface that lets you visualize, understand, and manage your AWS costs and usage over time. While this is a useful tool, it is not the best option among the available choices.
A data analytics company is running its software in the AWS cloud. The current workflow process leverages Amazon EC2 instances, which process different datasets, normalize them, and then output them to Amazon S3. The datasets average around 100 GB in size. The company CTO has asked that the application team start looking into leveraging Amazon EMR to generate reports and enable further analysis of the datasets and then store the newly generated data in a separate Amazon S3 bucket. Which AWS service could be used to make this process efficient, more cost-effective, and automated? 1. Amazon EventBridge 2. AWS Data Pipeline 3. Amazon EMR Spot Capacity 4. AWS Lambda
AWS Data Pipeline is a managed service that lets you implement data-driven workflows to move data between the listed resources within AWS automatically. It executes and provides methods of tracking data ETL processes. Reference: https://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/what-is-datapipeline.html (Incorrect) Amazon EventBridge is not meant for data ingestion or data migrations. Amazon EventBridge is a service that provides real-time access to changes in data in AWS services, your own applications, and software as a service (SaaS) applications without writing code. Reference: https://aws.amazon.com/eventbridge/faqs/
Your company develops a specialized web application recently converted to a mobile application on iOS and Android. Currently, the QA team is testing on their personal devices. You want to offload the device testing from the team's personal devices and instead leverage AWS for automated testing. Which AWS service can provide real, physical tablets and phones for automated testing of mobile applications? 1. AWS AppSync 2. AWS Device Farm 3. AWS Device 4. AWS Lambda Mobile
AWS Device Farm AWS Device Farm offers real mobile devices hosted in and by AWS to be used for testing mobile applications and web applications. You can perform automated tests via scripts on the devices or even perform remote testing that allows you to use gestures and swipes on the mobile device. Reference: What is AWS Device Farm? (Incorrect) AWS AppSync AWS AppSync is a service that offers a managed GraphQL interface for real-time data calls.
Which of the following statements regarding AWS Free Tier are true? 1. Among the always free services provided by AWS Free Tier is 1 million free AWS Lambda requests per month. 2. 12-month AWS Free Tier offers are only available to new AWS customers. 3. AWS Free Tier includes free trials, specific amounts of specified services for 12 months, and many always free services. 4. 5 GB of standard Amazon S3 storage is provided with the AWS Free Tier's always free offer.
AWS Free Tier includes 1 million free AWS Lambda requests per month. AWS Lambda is a serverless, event-driven compute service managed by AWS, and its 1 million free requests per month are part of the always free offer. 12-month AWS Free Tier offers are only available to new AWS customers. AWS Free Tier 12-month offers are only available to new AWS customers and are available for 12 months from the sign-up date. AWS Free Tier includes free trials, specific amounts of specified services for 12 months, and many always free services. With AWS Free Tier, various services are provided that include free trials, 12 months free of selected services (with limits), and many always free services. (Incorrect) 5 GB of standard Amazon S3 storage is provided with the AWS Free Tier's always free offer. The 5 GB of standard Amazon S3 storage is part of the 12-month free offer, not the always free offer. Amazon Simple Storage Service (Amazon S3) is Amazon's object storage service and integrates with many AWS services.
You are working with a renowned government organization responsible for one of the best gardens in the nation. The organization is demonstrating a new type of greenhouse: a smart greenhouse. The greenhouse has hundreds of sensors to monitor humidity, temperature, sunlight conditions, etc. They need to collate all this information in the cloud so the application makes smart decisions, such as when to water individual plants, open a window in the greenhouse to lower the temperature, etc. What AWS service would be best to collate the data from these hundreds of sensors? 1. Application Load Balancer 2. Neptune 3. CloudFormation 4. AWS IoT Core
AWS IoT Core AWS IoT Core lets you connect IoT devices and sensors to the AWS cloud and collect, process, and route their data, making it the best answer for collating data from hundreds of sensors. (Incorrect) Neptune is used to build and run graph applications with highly connected datasets. It is not used for data collation.
Your company has recently been charged overuse fees for software licenses for your Oracle products. Currently, there is a hybrid environment with Amazon EC2 instances running databases and on-premises virtual machines running Oracle databases. It has been determined that the Oracle software must continue to be used, and there will not be a shift to other database products. You are assigned to find a better way to manage license usage to avoid charges in the future. Which AWS service can help you accomplish this? 1. AWS Control Tower 2. AWS Service Catalog 3. AWS Budgets 4. AWS License Manager
AWS License Manager You can use AWS License Manager to manage cloud-based and on-premises software licenses to prevent abuse and overages. It works with several well-known vendors, including Oracle. (Incorrect) AWS Service Catalog End users can leverage this catalog to deploy preapproved IT services via CloudFormation templates.
You work in healthcare for an IVF clinic. You host an application on AWS that allows patients to track their medication during IVF cycles. The application also allows them to view test results containing sensitive medical data. You have a regulatory requirement that the application is secure. You must use a firewall managed by AWS that enables control and visibility over VPC-to-VPC traffic and prevents the VPCs hosting your sensitive application resources from accessing domains using unauthorized protocols. What AWS service would support this? 1. AWS Firewall Manager 2. AWS PrivateLink 3. AWS WAF 4. AWS Network Firewall
AWS Network Firewall AWS manages the AWS Network Firewall infrastructure, so you do not have to worry about building and maintaining your own network security infrastructure. AWS Network Firewall's stateful firewall can incorporate context from traffic flows, like tracking connections and protocol identification, to enforce policies such as preventing your VPCs from accessing domains using an unauthorized protocol. AWS Network Firewall gives you control and visibility of VPC-to-VPC traffic to logically separate networks hosting sensitive applications or line-of-business resources.
You have joined a small startup, and they are trying to figure out what cloud platform you will use to host your application. Your boss asks you to identify the best way to anticipate what the AWS costs will be. What do you think you should recommend? 1. AWS Trusted Advisor 2. AWS Pricing Calculator 3. AWS Billing Alert 4. AWS Cost Explorer
AWS Pricing Calculator This would be the best option; the AWS Pricing Calculator lets you estimate the cost of a planned AWS architecture before you build it. (Incorrect) AWS Cost Explorer This explores previous costs and usage and is not used as a calculator for anticipating future costs.
You work for a real estate company with many different batch processes they need to automate, such as patch management and data synchronization. You need to automate this, integrate it with AWS services, and use a serverless approach if possible. What AWS service should you use? 1. AWS Step Functions 2. AWS X-Ray 3. AWS Batch Manager 4. AWS Batch Assist
AWS Step Functions AWS Step Functions is a low-code, visual workflow service that developers use to build distributed applications, automate IT and business processes, and build data and machine learning pipelines using AWS services. (Incorrect) AWS Batch Manager There is no such AWS service as AWS Batch Manager. There is AWS Batch, which enables developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing jobs on AWS. However, this is not suitable for things like patch management.
You host a serverless website on AWS using API Gateway, Lambda, and DynamoDB. It is highly resilient. However, recently, database updates seem to be getting lost between the client and the end DynamoDB table. You need to trace what is happening. What AWS service should you use to troubleshoot this issue? 1. CloudWatch 2. AWS X-Ray 3. CloudTrail 4. AWS Track and Trace
AWS X-Ray AWS X-Ray helps developers analyze and debug production distributed applications, such as those built using a microservices architecture, by tracing requests as they travel through the application. (Incorrect) CloudTrail CloudTrail records API calls made in an AWS account for auditing; it is not used to trace requests through a distributed application.
An online media company has created an application that provides analytical data to its clients. The application is hosted on EC2 instances in an Auto Scaling Group. You have been brought on as a consultant and added an Application Load Balancer to front the Auto Scaling Group and distribute the load between the instances. The VPC that houses this architecture is running IPv4 and IPv6. The last thing you need to do to complete the configuration is point the domain name to the Application Load Balancer. Using Route 53, which record types at the zone apex will you use to point your domain name at the DNS name of the Application Load Balancer? Choose two. 1. Alias with an AAAA-type record set 2. Alias with an A-type record set 3. Alias with a CNAME record set 4. Alias with an MX-type record set
Alias with an AAAA-type record set An Alias with a type "A" record set and an Alias with a type "AAAA" record set are correct. To route domain traffic to an ELB, use Amazon Route 53 to create an alias record that points to your load balancer. Reference: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/ResourceRecordTypes.html Alias with an A-type record set An Alias with a type "AAAA" record set and an Alias with a type "A" record set are correct. To route domain traffic to an ELB load balancer, use Amazon Route 53 to create an alias record that points to your load balancer. An alias record is a Route 53 extension to DNS. (Incorrect) Alias with a CNAME record set Alias with a type "CNAME" record set is incorrect because you cannot create a CNAME record at the zone apex. Reference: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-choosing-alias-non-alias.html
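For illustration only (not part of the answer above), here is a minimal boto3 sketch of creating alias A and AAAA records at the zone apex that point to an ALB. The load balancer name, hosted zone ID, and domain are hypothetical placeholders.

```python
import boto3

route53 = boto3.client("route53")
elbv2 = boto3.client("elbv2")

# Look up the ALB's DNS name and its canonical hosted zone ID (hypothetical ALB name).
alb = elbv2.describe_load_balancers(Names=["my-alb"])["LoadBalancers"][0]

changes = []
for record_type in ("A", "AAAA"):  # A for IPv4, AAAA for IPv6 at the zone apex
    changes.append({
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "example.com",  # zone apex (placeholder domain)
            "Type": record_type,
            "AliasTarget": {
                "HostedZoneId": alb["CanonicalHostedZoneId"],  # the ALB's zone, not yours
                "DNSName": alb["DNSName"],
                "EvaluateTargetHealth": False,
            },
        },
    })

route53.change_resource_record_sets(
    HostedZoneId="Z1234567890ABC",  # your domain's hosted zone (placeholder)
    ChangeBatch={"Changes": changes},
)
```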
Your company uses IoT devices installed in businesses to provide those businesses with real-time data for analysis. You have decided to use Amazon Kinesis Data Firehose to stream the data to multiple backend storing services for analytics. Which service listed is not a viable destination to stream the real-time data to? 1. Redshift 2. ElasticSearch 3. Athena 4. S3
Amazon Athena is correct because Amazon Kinesis Data Firehose cannot load streaming data to Athena. Amazon Kinesis Data Firehose is the easiest way to load streaming data into data stores and analytics tools. It can capture, transform, and load streaming data into Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, and Splunk, enabling near real-time analytics with existing business intelligence tools and dashboards you already use today. It can also batch, compress, and encrypt the data before loading it, minimizing the amount of storage used at the destination and increasing security. https://aws.amazon.com/kinesis/data-firehose/ (Incorrect) ElasticSearch Amazon Elasticsearch Service is a supported Kinesis Data Firehose destination and is well suited for searching the streamed data, so it is a viable solution.
You work for a political party that is about to run a popular governor for re-election. The marketing arm is ingesting a large amount of content from social media accounts, and they need to run sentiment analysis on this data to figure out who they should target their advertisements at. Which AWS service would be best suited for this? 1. Amazon Comprehend 2. Amazon Polly 3. Amazon Textract 4. Amazon Kendra
Amazon Comprehend Amazon Comprehend uses natural language processing (NLP) to help you understand the meaning and sentiment in your text. (Incorrect) Amazon Textract Amazon Textract uses machine learning to extract text, handwriting, and data from scanned documents automatically. It is not used for sentiment analysis.
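For illustration only, a minimal boto3 sketch of the kind of sentiment call the marketing team could run on each ingested post; the sample text is invented.

```python
import boto3

comprehend = boto3.client("comprehend")

# Hypothetical social media post pulled from the ingestion pipeline.
post = "The governor's new policy announcement was fantastic news for our town!"

result = comprehend.detect_sentiment(Text=post, LanguageCode="en")
print(result["Sentiment"])       # e.g. POSITIVE, NEGATIVE, NEUTRAL, or MIXED
print(result["SentimentScore"])  # confidence scores for each sentiment class
```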
You work for an online betting company that recently had a major security breach. The CSO needs you to urgently review the root cause of this breach using artificial intelligence and machine learning. What AWS service can you use that will fulfill this requirement? 1. Amazon Detective 2. AWS Firewall Manager 3. Amazon Inspector 4. AWS Audit Manager
Amazon Detective simplifies the investigative process and helps security teams conduct faster and more effective investigations. With the Amazon Detective prebuilt data aggregations, summaries, and context, you can quickly analyze and determine the nature and extent of possible security issues. Reference: https://aws.amazon.com/detective/ (Incorrect) Amazon Inspector is an automated vulnerability management service that continually scans AWS workloads for software vulnerabilities and unintended network exposure. It is not used for root cause analysis of breaches using AI and ML. Reference: https://aws.amazon.com/inspector/
You are a database administrator working for a small start-up that has just secured Venture Capital (VC) funding. As part of the new investment, the VCs have asked you to ensure that your application has minimum downtime. Currently, your backend is hosted on a dedicated cluster running MongoDB. You spend a lot of time managing the cluster, configuring backups, and trying to ensure there is no downtime. You would like to migrate your MongoDB database to the AWS cloud. What service should you use for your backend database, assuming you do not want to make any changes to your database and application? 1. DynamoDB 2. Aurora Serverless 3. Amazon DocumentDB 4. AWS RDS
Amazon DocumentDB best suits the scenario. Amazon DocumentDB (with MongoDB compatibility) is a fast, scalable, highly available, and fully managed enterprise document database service that supports native JSON workloads. What does "MongoDB-compatible" mean? "MongoDB compatible" means that Amazon DocumentDB interacts with the Apache 2.0 open-source MongoDB 3.6, 4.0, and 5.0 APIs. Reference: https://aws.amazon.com/documentdb/faqs/ (Incorrect) Aurora Serverless is a relational database service and does not support MongoDB.
You work for a large chip manufacturer in Taiwan with a large dedicated cluster running MongoDB. Unfortunately, they have had an extensive period of downtime and would now like to migrate their MongoDB instance to the AWS cloud. They want to keep their application architecture the same. What AWS service would you recommend using for MongoDB? 1. Amazon DocumentDB 2. Amazon QLDB 3. Amazon Neptune 4. Aurora Serverless
Amazon DocumentDB supports MongoDB and would be suitable in this scenario. (Incorrect) Amazon QLDB is a ledger database service and does not support MongoDB.
You work for a large investment bank that is migrating its applications to the cloud. The bank is developing a custom fraud detection system using Python in Jupyter Notebook. They then build and train their models and put them into production. They want to migrate to the AWS Cloud and are looking for a service that would meet these requirements. Which AWS service would you recommend they use? 1. Amazon Forecast 2. Amazon Comprehend 3. Amazon SageMaker 4. Amazon Fraud Detector
Amazon SageMaker is a fully managed machine learning service. Amazon SageMaker allows you to build and train machine learning models and then directly deploy them into a production-ready hosted environment. (Incorrect) Amazon Fraud Detector is an AI service that detects fraud in your data. You would not use it to create and train models before deploying them into production.
Your application team stores critical data within a third-party SaaS cloud vendor. The data comes from an internal application that runs on Amazon ECS Fargate, which then stores the data within Amazon S3 in a proprietary format. AWS Lambda functions are invoked via Amazon S3 event notifications to transfer the data to a SaaS application. Due to resource and time limits, you are exploring other means of completing this workflow of transferring data from AWS to the SaaS solution. Which AWS service offers the most efficiency and has the least operational overhead? 1. Amazon EKS 2. Amazon AppFlow 3. Amazon EventBridge 4. AWS Step Function Fargate Capacity
Amazon AppFlow Amazon AppFlow is a fully managed service for easily automating bidirectional data exchange between AWS services like Amazon S3 and SaaS vendors. This helps avoid the resource and time constraints of the Lambda-based approach. https://docs.aws.amazon.com/appflow/latest/userguide/what-is-appflow.html https://docs.aws.amazon.com/appflow/latest/userguide/app-specific.html (Incorrect) AWS Step Function Fargate Capacity is not a real AWS service.
You are building an application on EC2 that will require access to S3 and DynamoDB. You need to provide your developers access to these services in the most secure way possible. What should you use to achieve this? 1. Create a master IAM username and password with Admin Access and share this username and password with your developers. 2. Assign a role to the EC2 instance, allowing access to S3 and DynamoDB. 3. Create a master IAM username and password with power user access and share this username and password with your developers. 4. Use AWS KMS to provide access to these resources.
Assign a role to the EC2 instance, allowing access to S3 and DynamoDB. Attaching an IAM role to the EC2 instance is the most secure solution: temporary credentials are delivered to the instance and rotated automatically, and no long-term access keys need to be created or shared. (Incorrect) Use AWS KMS to provide access to these resources. AWS KMS is a key management service for encryption keys; it cannot grant access to S3 or DynamoDB, so this option is technically incorrect.
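For illustration only, a minimal boto3 sketch of attaching an instance profile (whose role is assumed to already allow S3 and DynamoDB access) to a running instance; the profile name and instance ID are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Attach a pre-created instance profile (hypothetical names) so the instance
# receives temporary credentials for S3 and DynamoDB via its role.
ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": "app-s3-dynamodb-profile"},
    InstanceId="i-0123456789abcdef0",
)

# On the instance itself, boto3 picks up the role's temporary credentials
# automatically -- no access keys are stored or shared with developers.
s3 = boto3.client("s3")
dynamodb = boto3.resource("dynamodb")
```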
You have recently terminated an employee from the company due to gross misconduct. Unfortunately, you discover they have been using a backdoor account to access your internal web application. They have been doing this from their home, which has a static IP address. You need to block access from this IP address immediately and then fix the backdoor. What is the fastest approach to achieve this? 1. Block the IP address using an Internet Gateway. 2. Use Amazon Shield to block the IP address. 3. Block the IP address using a security group. 4. Block the IP address on the public NACL.
Block the IP address on the public NACL. This is the best solution: network ACLs support explicit deny rules, so you can immediately block traffic from the static IP address at the subnet level. (Incorrect) Block the IP address using a security group. You cannot block IP addresses using security groups; security groups only support allow rules.
A software company has created an application to capture service requests from users and also enhancement requests. The application is deployed on an Auto Scaling Group of EC2 instances fronted by an Application Load Balancer. The Auto Scaling Group has scaled to maximum capacity, but there are still requests being lost. The company has decided to use SQS with the Auto Scaling Group to ensure all messages are saved and processed. What is an appropriate metric for Auto Scaling with SQS? 1. Backlog per hour 2. Backlog per instance 3. Backlog per user 4. CPU utilization
Backlog per instance The issue with using a CloudWatch Amazon SQS metric like ApproximateNumberOfMessagesVisible for target tracking is that the number of messages in the queue might not change proportionally to the size of the Auto Scaling Group that processes messages from the queue. That is because the number of messages in your SQS queue does not solely define the number of instances needed. The number of instances in your Auto Scaling Group can be driven by multiple factors, including how long it takes to process a message and the acceptable amount of latency (queue delay). The solution is to use a backlog per instance metric with the target value being the acceptable backlog per instance to maintain. You can calculate these numbers as follows: Backlog per instance: To calculate your backlog per instance, start with the ApproximateNumberOfMessages queue attribute to determine the length of the SQS queue (number of messages available for retrieval from the queue). Divide that number by the fleet's running capacity, which for an Auto Scaling Group is the number of instances in the InService state, to get the backlog per instance. Reference: https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-using-sqs-queue.html (Incorrect) Backlog per user The primary concern is the number of messages in the SQS queue and the instances processing those messages. In general AWS practices, while user-based metrics might be relevant for certain applications, especially those with user-specific workloads, it's not a standard metric for scaling with SQS. The emphasis is on ensuring that messages in the queue are processed efficiently, regardless of the user origin.
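For illustration only, a minimal boto3 sketch of computing backlog per instance as described above and publishing it as a custom CloudWatch metric; the queue URL, Auto Scaling group name, and namespace are placeholders.

```python
import boto3

sqs = boto3.client("sqs")
autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/requests-queue"  # placeholder
ASG_NAME = "request-processing-asg"                                            # placeholder

# Queue depth: messages available for retrieval.
attrs = sqs.get_queue_attributes(
    QueueUrl=QUEUE_URL, AttributeNames=["ApproximateNumberOfMessages"]
)
messages = int(attrs["Attributes"]["ApproximateNumberOfMessages"])

# Running capacity: InService instances in the Auto Scaling group.
asg = autoscaling.describe_auto_scaling_groups(AutoScalingGroupNames=[ASG_NAME])
in_service = sum(
    1
    for instance in asg["AutoScalingGroups"][0]["Instances"]
    if instance["LifecycleState"] == "InService"
)

backlog_per_instance = messages / max(in_service, 1)

# Publish as a custom metric so a target tracking policy can scale on it.
cloudwatch.put_metric_data(
    Namespace="Custom/SQSScaling",
    MetricData=[{"MetricName": "BacklogPerInstance", "Value": backlog_per_instance}],
)
```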
You have a dynamic e-commerce website hosted on an EC2 instance that (as traffic increases) becomes slower and slower. What scenario below would help increase the speed of the website? 1. Change the EBS volume from gp2 to Provisioned IOPS. 2. Turn on CloudWatch website acceleration to reduce the load on your EC2 instance. 3. Migrate the website to S3 and turn on S3 dynamic content. 4. Replace the EC2 instance with a single Lambda function.
Change the EBS volume from gp2 to Provisioned IOPS. By using Provisioned IOPS, you will increase the speed of the disk and enhance the site. (Incorrect) Migrate the website to S3 and turn on S3 dynamic content. This is technically incorrect, as S3 hosts static content only and has no dynamic content feature.
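For illustration only, a minimal boto3 sketch of switching a gp2 volume to Provisioned IOPS (io1) in place using Elastic Volumes; the volume ID and IOPS value are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Change the volume type in place (Elastic Volumes); the instance can stay running.
ec2.modify_volume(
    VolumeId="vol-0123456789abcdef0",  # placeholder volume ID
    VolumeType="io1",
    Iops=10000,                        # placeholder provisioned IOPS value
)
```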
You have a solution that is hosted in US-West-1 consisting of a custom VPC, 20 EC2 instances, RDS instances, an Application Load Balancer, and an Auto Scaling group. Unfortunately, the entire region goes down for a few hours and your business suffers a large loss of revenue. Your manager decides he wants scripted infrastructure so that, if a region goes down, you will be able to run a script in another region and create a copy of your environment, including all the custom VPC configurations. What AWS service should you use? 1. AWS Elastic Beanstalk 2. Lambda 3. Amazon Cloud Backup Provisioning Service (ACBPS) 4. CloudFormation
CloudFormation AWS CloudFormation lets you model, provision, and manage AWS and third-party resources by treating infrastructure as code. (Incorrect) AWS Elastic Beanstalk Although Elastic Beanstalk could provision the environment, it would not set up custom VPC configurations.
You have a web application hosted on a series of EC2 instances with an Application Load Balancer in front of them. You have created a new CloudFront distribution. You then set up its origin to point to your ALB. You need to provide access to hundreds of private files that you serve through your CloudFront distribution. What should you use? 1. CloudFront Origin Access Identity 2. CloudFront Signed URLs 3. CloudFront signed cookies 4. CloudFront HTTPS encryption
CloudFront signed cookies are useful when you want to provide access to multiple restricted files, such as the hundreds of private files in this scenario. (Incorrect) CloudFront Origin Access Identity is used internally to restrict an S3 origin so it can only be reached through CloudFront; it would not give your users secure access to hundreds of files.
You have an internal data warehouse running custom database software on an on-premises server at your work. You need to migrate this application to AWS. Unfortunately, your application is incompatible with Redshift, so you must run this on a custom EC2 instance with the appropriate EBS volumes. You must use an EBS volume to handle large, sequential I/O operations for this infrequently accessed data. What is the most cost-effective EBS volume to meet this requirement? 1. Throughput Optimized HDD (st1) 2. Cold HDD (sc1) 3. Provisioned IOPS SSD (io1) 4. EBS General Purpose SSD (gp2)
Cold HDD (sc1) is the best low-cost choice for infrequently accessed data with sequential I/O operations. (Incorrect) Throughput Optimized HDD (st1) This is a more expensive option.
You work at a small startup with a stringent budget. Recently, they experienced a surge of demand on their web frontend servers, which triggered an auto-scaling event. The auto-scaling was not configured correctly, and it did not scale down after the surge, resulting in a massive bill at the end of the month. You need to find a way to alert the founders when your AWS bill reaches a certain threshold, and to do this as cost-effectively and efficiently as possible. What is the best way to do this? 1. Use Lambda to monitor your AWS bill and then trigger an SNS notification when your bill hits a certain level. 2. Repetitively hit F5 on the AWS billing page. 3. Use AWS Trusted Advisor to create an automatic alert. 4. Create a billing alarm in CloudWatch for the specified amount.
Create a billing alarm in CloudWatch for the specified amount. This would be the most cost-effective and efficient way of achieving this. Reference: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/monitor_estimated_charges_with_cloudwatch.html (Incorrect) Use Lambda to monitor your AWS bill and then trigger an SNS notification when your bill hits a certain level. Using Lambda would not be the most efficient and cost-effective way of achieving this.
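For illustration only, a minimal boto3 sketch of the billing alarm described above; the threshold, alarm name, and SNS topic ARN are placeholders, and billing alerts must already be enabled in the account's billing preferences.

```python
import boto3

# Billing metrics are published to CloudWatch in us-east-1 only.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="monthly-bill-over-500-usd",       # placeholder name and threshold
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,                                # estimated charges update roughly every 6 hours
    EvaluationPeriods=1,
    Threshold=500.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],  # placeholder topic
)
```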
You plan to migrate a complex big data application to AWS using EC2. The application requires complex software to be installed, which typically takes a couple of hours. You need this application to be behind an Auto Scaling group so that it can react in a scaling event. How do you recommend speeding up the installation process during scale-out events? 1. Create a golden AMI with the software pre-installed. 2. Create a bootstrap script to install the software automatically. 3. Create an EBS volume with PIOPS for faster installation performance. 4. Pre-deploy the software on an Application Load Balancer to automatically install it on the EC2 instance when there is a scaling event.
Create a golden AMI with the software pre-installed. This golden AMI would have the software pre-installed and would be ready to use in a scaling event. (Incorrect) Create a bootstrap script to install the software automatically. This would not speed up the installation process as the software would still need several hours to install.
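For illustration only, a minimal boto3 sketch of baking a golden AMI from an instance that already has the software installed; the instance ID and AMI name are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Bake the golden AMI from an instance that already has the software installed.
image = ec2.create_image(
    InstanceId="i-0123456789abcdef0",            # placeholder instance
    Name="bigdata-app-golden-ami-v1",            # placeholder AMI name
    Description="Pre-installed big data application for Auto Scaling launches",
    NoReboot=False,                               # reboot for a consistent file system snapshot
)
print(image["ImageId"])  # reference this AMI in the launch template used by the ASG
```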
Your company is currently building out a second AWS region. Following best practices, they have been using CloudFormation to make the migration easier. They have run into a problem with the template, however. Whenever the template is created in the new region, it still references the AMI in the old region. What is the best solution to automatically select the correct AMI when the template is deployed in the new region? Update the AMI in the old region, as AMIs are universal. Create a mapping in the template. Define the unique AMI value per region. Create a Parameter section in the template. Whenever the template is run, fill in the correct AMI ID. Create a condition in the template to automatically select the correct AMI ID.
Create a mapping in the template. Define the unique AMI value per region. This is exactly what mappings are built for. By using mappings, you can easily automate this issue. Make sure to copy your AMI to the region before you try to run the template, though, as AMIs are region-specific. https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/mappings-section-structure.html (Incorrect) Create a condition in the template to automatically select the correct AMI ID. While this would technically work, there are better options. Conditions are designed to differentiate between different environments, types of resources, etc. You could create a condition to handle this, but it would require extra work. The best idea would be to use a mapping to automatically fill in the correct ID based on the region in which you are deploying the template. https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/mappings-section-structure.html
You have a large number of files in S3, and you have been asked to build an index of these files. In order to do this, you need to read the first 250 bytes of each object in S3. This data contains some metadata about the content of the file itself. Unfortunately, there are over 10,000,000 files in your S3 bucket, and this is about 100 TB of data. The data will then need to be stored in an Aurora Database. How can you build this index in the fastest way possible? 1. Use AWS Athena to query the S3 bucket for the first 250 bytes of data. Take the result of the query and build an Aurora Database. 2. Create a program to use a byte range fetch for the first 250 Bytes of data and then store this in the Aurora Database. 3. Use the index bucket function in AWS Macie to query the S3 bucket and then load this data into the Aurora Database.
Create a program to use a byte range fetch for the first 250 Bytes of data and then store this in the Aurora Database. This would be the fastest and easiest way to achieve your aims. What is byte range fetch? Using the Range HTTP header in a GET Object request, you can fetch a byte-range from an object, transferring only the specified portion. (Incorrect) Use the index bucket function in AWS Macie to query the S3 bucket and then load this data into the Aurora Database. Macie is a service used to identify PII and is not used to handle byte-range requests.
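For illustration only, a minimal boto3 sketch of a byte-range fetch that reads just the first 250 bytes of one object; the bucket and key names are placeholders, and in practice you would run many such requests in parallel across the 10,000,000 objects.

```python
import boto3

s3 = boto3.client("s3")

# Fetch only the first 250 bytes of an object using an HTTP Range header.
response = s3.get_object(
    Bucket="my-index-source-bucket",       # placeholder bucket
    Key="files/document-0000001.dat",      # placeholder key
    Range="bytes=0-249",
)
header_bytes = response["Body"].read()     # 250 bytes of metadata, not the full object
```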
The AWS team in a large company spends a lot of time monitoring EC2 instances and maintenance when the instances report health check failures. How can you most efficiently automate this monitoring and repair? 1. Create an Amazon CloudWatch alarm that monitors an Amazon EC2 instance and automatically reboots the instance if a health check fails. 2. Create a cron job that monitors the instances periodically and starts a new instance if a health check has failed. 3. Create a Lambda function triggered by a failed instance health check. Have the Lambda function deploy a CloudFormation template, which can perform the creation of a new instance. 4. Create a Lambda function triggered by a failed instance health check. Have the Lambda function destroy the instance and spin up a new instance.
Create an Amazon CloudWatch alarm that monitors an Amazon EC2 instance and automatically reboots the instance if a health check fails. You can create an Amazon CloudWatch alarm that monitors an Amazon EC2 instance and automatically reboots the instance. The reboot alarm action is recommended for Instance Health Check failures (as opposed to the recover alarm action, which is suited for System Health Check failures). An instance reboot is equivalent to an operating system reboot. In most cases, it takes only a few minutes to reboot your instance. When you reboot an instance, it remains on the same physical host, so your instance keeps its public DNS name, private IP address, and any data on its instance store volumes. Rebooting an instance does not start a new instance billing hour, unlike stopping and restarting your instance. https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/UsingAlarmActions.html
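For illustration only, a minimal boto3 sketch of the reboot alarm described above; the region and instance ID are placeholders.

```python
import boto3

region = "us-east-1"                      # placeholder region
instance_id = "i-0123456789abcdef0"       # placeholder instance

cloudwatch = boto3.client("cloudwatch", region_name=region)

cloudwatch.put_metric_alarm(
    AlarmName=f"reboot-on-instance-check-failure-{instance_id}",
    Namespace="AWS/EC2",
    MetricName="StatusCheckFailed_Instance",
    Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=3,                  # three consecutive failed checks
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    # Built-in CloudWatch action that reboots the instance -- no Lambda needed.
    AlarmActions=[f"arn:aws:automate:{region}:ec2:reboot"],
)
```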
You have a web application running on an Amazon EC2 instance in a Virtual Private Cloud (VPC). You want to allow external users to access your web server over HTTP (port 80) while keeping your server secure. Which of the following actions should you take regarding the associated security group? 1. Create an inbound rule in the security group that allows incoming traffic on port 80 from your office IP address. 2. Create an outbound rule in the security group that allows outgoing traffic on port 80 to 0.0.0.0/0 (any IP address) and configure an inbound rule for responses. 3. Create an inbound rule in the security group that allows incoming traffic on port 80 from 0.0.0.0/0 (any IP address). 4. You do not need to add rules; by default, security groups allow all inbound traffic.
Create an inbound rule in the security group that allows incoming traffic on port 80 from 0.0.0.0/0 (any IP address). As this web server will serve internet traffic, you would need to open port 80 from any IP address for this to work. (Incorrect) Create an outbound rule in the security group that allows outgoing traffic on port 80 to 0.0.0.0/0 (any IP address) and configure an inbound rule for responses. By default, a security group is configured to block all inbound traffic and allow all outbound traffic. This configuration, therefore, would not work.
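For illustration only, a minimal boto3 sketch of adding the inbound port 80 rule; the security group ID is a placeholder.

```python
import boto3

ec2 = boto3.client("ec2")

# Allow HTTP from anywhere; the security group ID is a placeholder.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 80,
            "ToPort": 80,
            "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "Public HTTP access"}],
        }
    ],
)
# No outbound rule is needed for responses: security groups are stateful,
# so return traffic for allowed inbound requests is permitted automatically.
```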
Which AWS solution resolves the issue of when AWS accounts do not follow the required compliance rules set forth by a company and also allows for sending alerts on configuration changes and preventing specific actions from occurring? 1. Create individual AWS Config rules in each AWS account. Set up AWS Lambda functions in each AWS account to remediate any suspected drift. 2. Create new AWS accounts using AWS Control Tower. Leverage the preventative and detective guardrails that come with it to prevent governance drift, as well as send alerts on suspicious activities. 3. Create a set of Global AWS Config rules that can cover all Regions in the management account that apply to the member accounts. Set up an AWS Lambda function in the management AWS account to alert an administrator when drift is detected.
Create new AWS accounts using AWS Control Tower. Leverage the preventative and detective guardrails that come with it to prevent governance drift and send alerts on suspicious activities. AWS Control Tower allows you to implement account governance and compliance enforcement for an AWS organization. It leverages SCPs for preventative guardrails and AWS Config for detective guardrails. Reference: https://docs.aws.amazon.com/controltower/latest/userguide/guardrails.html (Incorrect) Create a set of Global AWS Config rules that can cover all Regions in the management account that apply to the member accounts. Set up an AWS Lambda function in the management AWS account to alert an administrator when drift is detected. There is no such thing as Global AWS Config rules.
You are working for a small startup that wants to design a content management system (CMS). The company wants to architect the CMS so that the company only incurs a charge when someone tries to access its content. Which three services could be used to help create a CMS while prioritizing pay-per-use pricing services so as to allow for minimal cost? 1. Application Load Balancer 2. DynamoDB 3. S3 4. EC2 5. API Gateway
DynamoDB is a NoSQL database service that provides fast and predictable performance with seamless scalability. It operates on a pay-per-read and pay-per-write basis, ideal for a CMS that desires to minimize costs when the system is not in use. By storing metadata or content within DynamoDB, the startup only incurs charges when read/writes occur. Amazon S3 is an optimal choice for storing and retrieving any amount of data, making it a suitable backbone for a CMS. The pay-as-you-go pricing model of S3 aligns well with the requirement of incurring charges only when content is accessed. API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. It operates on a pay-per-use model, charging for the number of API calls made, making it a cost-effective solution for a CMS architecture where costs are incurred only when content is accessed. API Gateway's integration with other AWS services like Lambda can help build a serverless CMS, further aligning with the pay-per-use pricing model. (Incorrect) Amazon EC2 operates on a more traditional, instance-based pricing model, where you pay for compute capacity by the hour or second, regardless of usage. This aligns differently from the startup's objective of incurring charges only when someone tries to access their content, as costs are incurred for as long as the EC2 instances are running, whether or not they are being accessed.
What EBS Volume type gives you the highest performance in terms of IOPS? 1. EBS Provisioned IOPS SSD (io1) 2. EBS Provisioned IOPS SSD (io2 Block Express) 3. EBS General Purpose SSD (gp3) 4. EBS Provisioned IOPS SSD (io2)
EBS Provisioned IOPS SSD (io2 Block Express) is the highest-performance SSD volume designed for business-critical latency-sensitive transactional workloads. (Incorrect) Although EBS Provisioned IOPS SSD (io2) is designed for high performance, EBS Provisioned IOPS SSD (io2 Block Express) is the highest-performance SSD volume designed for business-critical latency-sensitive transactional workloads.
The company you work for has reshuffled teams a bit, and you have been moved from the AWS IAM team to the AWS Network team. One of your first assignments is reviewing the main VPC's subnets. What are two key concepts regarding subnets? 1. Each subnet is associated with one security group. 2. Private subnets can only hold databases. 3. Every subnet you create is associated with the main route table for the VPC. 4. Each subnet maps to a single Availability Zone. 5. A subnet spans all the Availability Zones in a Region.
Every subnet you create is associated with the main route table for the VPC. Each subnet must be associated with a route table specifying the allowed routes for outbound traffic leaving the subnet. Every subnet that you create is automatically associated with the main route table for the VPC. You can change the association, and you can change the contents of the main route table. Reference: https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html#SubnetRouting Each subnet maps to a single Availability Zone. When you create a subnet, you specify the CIDR block for the subnet, which is a subset of the VPC CIDR block. Each subnet must reside within one Availability Zone and cannot span zones. Reference: https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html#vpc-subnet-basics (Incorrect) Each subnet is associated with one security group. The one-to-one relationship that subnets have is with Availability Zones.
Your development team leverages Amazon ECS Fargate to run their containerized application in AWS. At this time, they are leveraging an internal image registry on Amazon EC2 instances for hosting image repositories due to requiring image scanning for software vulnerabilities. You have suggested they explore other options to avoid the operational overhead and unnecessary costs associated with the solution. What is the best solution for the scenario? 1. Use Amazon S3 for image storage and Amazon Athena for querying for vulnerabilities. 2. Use Amazon S3 for image storage and Amazon Macie to find compromised images. 3. Shift to the public Docker Hub. Image scanning is automatic. 4. Host images in Amazon ECR repositories with scan on push enabled.
Host images in Amazon ECR repositories with scan on push enabled. Amazon ECR offers the ability to enable scan on push, which enables software vulnerability scanning of all images pushed to your repositories. (Incorrect) Use Amazon S3 for image storage and Amazon Athena for querying for vulnerabilities. Amazon Athena does not do vulnerability scanning.
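For illustration only, a minimal boto3 sketch of enabling scan on push for an Amazon ECR repository; the repository name is a placeholder.

```python
import boto3

ecr = boto3.client("ecr")

# New repository with basic scanning on push (repository name is a placeholder).
ecr.create_repository(
    repositoryName="web-app",
    imageScanningConfiguration={"scanOnPush": True},
)

# Or enable scan on push for an existing repository.
ecr.put_image_scanning_configuration(
    repositoryName="web-app",
    imageScanningConfiguration={"scanOnPush": True},
)
```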
You work for an online store that has a large number of EC2 instances. The company had a list of public keys and public IP addresses of the individual EC2 instances stored in an S3 bucket. However, this was accidentally deleted by an intern. You need to rebuild this list using an automated script. You create a script running a curl command to get the data on each EC2 instance and to write this to an S3 bucket. Which of the following should you query? 1. Amazon EBS 2. Amazon Machine Image 3. Instance Metadata 4. Instance Userdata
Instance Metadata Instance metadata is data about your running instance, such as its public IP address and public keys, and you can query it from within the instance (for example, with curl) via the instance metadata service. (Incorrect) Instance Userdata Instance user data is the bootstrap script that was supplied when the instance was launched; it does not provide the instance's public IP address or public keys.
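For illustration only, a minimal Python sketch of querying the instance metadata service (IMDSv2) for the values the script needs; it assumes the third-party requests library is available on the instance.

```python
import requests  # assumed to be installed on the instance

IMDS = "http://169.254.169.254/latest"

# IMDSv2: request a session token first, then use it for metadata calls.
token = requests.put(
    f"{IMDS}/api/token",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
).text
headers = {"X-aws-ec2-metadata-token": token}

public_ip = requests.get(f"{IMDS}/meta-data/public-ipv4", headers=headers).text
public_key = requests.get(
    f"{IMDS}/meta-data/public-keys/0/openssh-key", headers=headers
).text

print(public_ip, public_key)  # write these values to the S3 bucket to rebuild the list
```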
You need to host a static website on S3. Your boss asks you to register a domain name for the website with Route 53. What is a prerequisite to ensure that you can achieve this? 1. You must enable CORS in your S3 bucket in order to enable a static website. 2. You must have a bucket name that is the same as the domain name. 3. You must create an A Record in Route 53 to point to your bucket. 4. You must configure a CNAME in Route 53 to point to your DNS address of your bucket.
You must have a bucket name that is the same as the domain name. The bucket name must always be the same as the domain name. (Incorrect) You must create an A Record in Route 53 to point to your bucket. Creating an A record in Route 53 to point to your bucket is not the prerequisite; it would not resolve the issue on its own.
Your company has decided to begin migration efforts from on-premises data centers to the AWS cloud. Currently, the data centers host several virtual machines, including vSphere VMs and Hyper-V VMs. You have been asked to find the easiest and most efficient method of migrating all the VMs to AWS as Amazon EC2 AMIs while minimizing the potential downtime. Which AWS service is the best fit for this? 1. Leverage the AWS Application Migration Service (AWS MGN) to incrementally perform migrations of all VMs in the data center to AWS as AMIs for Amazon EC2. 2. Use AWS Refactor Service (AWS RFS) to perform incremental migrations of all VMs in the data center. 3. Start the process via AWS Migration Hub to incrementally perform migrations of all VMs in the data center to AWS as AMIs for Amazon EC2. 4. Enable AWS DMS to perform migrations of all VMs in the data center incrementally.
Leverage the AWS Application Migration Service (AWS MGN) to incrementally perform migrations of all VMs in the data center to AWS as AMIs for Amazon EC2. AWS MGN helps minimize downtime due to the incremental nature of the migrations. You can easily migrate existing virtual machines from vSphere (vCenter) and Hyper-V environments to Amazon EC2 instances. Reference: https://aws.amazon.com/application-migration-service/when-to-choose-aws-mgn/ (Incorrect) Start the process via AWS Migration Hub to incrementally perform migrations of all VMs in the data center to AWS as AMIs for Amazon EC2. AWS Migration Hub does not perform migrations, as it is meant to provide a single place to view and track existing and new migration efforts between other AWS services.
A large financial institution is gradually moving its infrastructure and applications to AWS. The company has data needs that will utilize RDS, DynamoDB, Redshift, and ElastiCache. Which description best describes Amazon Redshift? 1. Can be used to improve latency and throughput for many read-heavy application workloads significantly. 2. Cloud-based relational database. 3. Near real-time complex queries on massive data sets. 4. Key-value and document database that delivers single-digit millisecond performance at any scale.
Near real-time complex querying on massive data sets. Amazon Redshift is a fast, fully-managed cloud data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and your existing Business Intelligence (BI) tools. It allows you to run complex analytic queries against terabytes to petabytes of structured data, using sophisticated query optimization, columnar storage on high-performance storage, and massively parallel query execution. Most results come back in seconds. https://aws.amazon.com/redshift/faqs/ (Incorrect) Cloud-based relational database. This describes RDS.
A company has an Auto Scaling group of EC2 instances hosting their retail sales application. Any significant downtime for this application can result in considerable losses of profit. Therefore, the architecture includes an Application Load Balancer and an RDS database in a Multi-AZ deployment. The company has a very aggressive Recovery Time Objective (RTO) in case of disaster. How long will a failover of an RDS database typically be completed? One to two minutes. Almost instantly Within an hour Under 10 minutes
One to two minutes. Failover is automatically handled by Amazon RDS so that you can resume database operations as quickly as possible without administrative intervention. When failing over, Amazon RDS flips your DB instance's canonical name record (CNAME) to point at the standby, which is promoted to become the new primary. We encourage you to follow best practices and implement database connection retry at the application layer. Failovers, as defined by the interval between the detection of the failure on the primary and the resumption of transactions on the standby, typically complete within one to two minutes. Failover time can also be affected by whether large uncommitted transactions must be recovered; using adequately large instance types is recommended with Multi-AZ for best results. AWS recommends using Provisioned IOPS with Multi-AZ instances for fast, predictable, consistent throughput performance. https://aws.amazon.com/rds/faqs/ (Incorrect) Almost instantly The failover will take a minute or two.
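The FAQ's advice to implement connection retry at the application layer might look roughly like this (a minimal sketch assuming a MySQL-compatible RDS instance, the PyMySQL driver, and placeholder connection details):

```python
import time
import pymysql

def connect_with_retry(retries=6, delay=10):
    """Retry until the RDS CNAME resolves to the newly promoted standby."""
    for attempt in range(retries):
        try:
            return pymysql.connect(
                host="appdb.abc123xyz.us-east-1.rds.amazonaws.com",  # placeholder endpoint
                user="app_user",
                password="********",
                database="appdb",
                connect_timeout=5,
            )
        except pymysql.err.OperationalError:
            time.sleep(delay)  # wait out the one-to-two-minute failover window
    raise RuntimeError("Database still unavailable after the failover window")
```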
You run a popular retro gaming merchandise retail platform on AWS. Over the past year and a half, you have noticed that your traffic has distinct daily and weekly patterns—for example, traffic surges during weekday business hours and a massive drop during the weekends. To add to the issues, your application takes considerable time to initialize, causing a noticeable latency impact during scale-out events. You need to ensure that your infrastructure scales in anticipation of these patterns. Which AWS Auto Scaling feature would best address this scenario? 1. Dynamic scaling 2. Scheduled scaling 3. Predictive scaling 4. Manual scaling
Predictive scaling uses machine learning to forecast traffic and capacity needs. Predictive scaling is more flexible and can adapt to changes in traffic patterns, which is why it is the best choice for the given scenario. Predictive scaling can automatically scale your resources in anticipation of these patterns if your traffic spikes during weekdays and drops during weekends. This will help ensure you have enough resources to handle peak traffic without causing latency issues. (Incorrect) Scheduled scaling allows you to increase or decrease the number of instances in your Auto Scaling group based on a specific schedule. While it can be set up to scale based on known traffic patterns, it does not automatically adjust to changes in those patterns. In the given scenario, while scheduled scaling can handle the predictable surge in traffic during weekdays and drop during weekends, it would not be as adaptive as predictive scaling.
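A minimal boto3 sketch of attaching a predictive scaling policy (the Auto Scaling group name and target value are hypothetical; SchedulingBufferTime launches instances ahead of the forecast to absorb the slow application initialization):

```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="retail-web-asg",          # hypothetical ASG name
    PolicyName="weekday-traffic-forecast",
    PolicyType="PredictiveScaling",
    PredictiveScalingConfiguration={
        "MetricSpecifications": [{
            "TargetValue": 50.0,                    # target average CPU utilization
            "PredefinedMetricPairSpecification": {
                "PredefinedMetricType": "ASGCPUUtilization"
            },
        }],
        "Mode": "ForecastAndScale",
        "SchedulingBufferTime": 600,                # start instances 10 minutes early
    },
)
```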
A new startup company decided to use AWS to host their web application. They configure a VPC as well as two subnets within the VPC. They also attach an internet gateway to the VPC. In the first subnet, they create an EC2 instance to host a web application. There is a network ACL and a security group, which both have the proper ingress and egress to and from the internet. There is a route in the route table to the internet gateway. The EC2 instances added to the subnet must have a globally unique IP address to ensure internet access. Which is not a globally unique IP address? 1. Elastic IP address 2. IPv6 address 3. Private IP address 4. Public IP address
Private IP address is correct. Public IPv4 address, elastic IP address, and IPv6 address are globally unique addresses. The IPv4 addresses known for not being unique are private IPs. These are found in the following ranges: from 10.0.0.0 to 10.255.255.255, from 172.16.0.0 to 172.31.255.255, and from 192.168.0.0 to 192.168.255.255. Reference: http://www.faqs.org/rfcs/rfc1918.html (Incorrect) Elastic IP address This is a globally unique address.
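You can sanity-check the RFC 1918 ranges with Python's standard ipaddress module, for example:

```python
import ipaddress

# is_private covers the RFC 1918 ranges (plus other reserved ranges such as loopback).
for ip in ("10.0.1.5", "172.16.8.1", "192.168.0.20", "54.23.11.9"):
    kind = "private (not globally unique)" if ipaddress.ip_address(ip).is_private else "globally unique"
    print(f"{ip}: {kind}")
```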
You are a solutions architect working for a biotech company that has a large private cloud deployment using VMware. You have been tasked with setting up their disaster recovery solution on AWS. What is the simplest way to achieve this? 1. Purchase VMware Cloud on AWS, leveraging VMware disaster recovery technologies and the speed of AWS cloud to protect your virtual machines 2. Deploy an EC2 instance into a private subnet and install vCenter on it 3. Deploy an EC2 instance into a public subnet and install vCenter on it 4. Use the VMware landing page on AWS to provision an EC2 instance with VMware vCenter installed on it
Purchase VMware Cloud on AWS, leveraging VMware disaster recovery technologies and the speed of AWS Cloud to protect your virtual machines. Customers can buy VMware Cloud on AWS directly through AWS and AWS Partner Network (APN) Partners in the AWS Solution Provider Program. This gives customers the flexibility to purchase VMware Cloud on AWS either through AWS or VMware, or through the AWS Solution Provider or VMware Solution Provider of their choice. VMware Cloud on AWS offers a Disaster Recovery feature that uses familiar VMware vSphere and Site Recovery Manager technologies while leveraging cloud economics. AWS Documentation: https://aws.amazon.com/vmware/faqs/ (Incorrect) Use the VMware landing page on AWS to provision an EC2 instance with VMware vCenter installed on it There is no VMware landing page. There is a more straightforward way to achieve this.
Your company has a multi-account AWS environment with over 100 accounts. Each account belongs to a specific application team within the company, and they all fall within the same consolidated billing family. The company has just received funding for the next two years but is unsure about anything beyond that. With this in mind, they plan to aggressively deploy AWS applications during the two years. Recently, there was a massive spike in unplanned Amazon EC2 and AWS Lambda costs, causing significant financial stress. What can an organization administrator do to maximize savings for the entire organization for this first year? 1. Purchase a one-year All Upfront EC2 Instance Savings Plan. 2. Purchase a three-year All Upfront Compute Savings Plan. 3. Purchase a one-year All Upfront Compute Savings Plan. 4. Purchase a three-year All Upfront EC2 Instance Savings Plan.
Purchase a one-year All Upfront Compute Savings Plan. This Savings Plan covers Amazon EC2 and AWS Lambda function compute costs. It is the most flexible type offered. They can purchase a one-year All Upfront offering to maximize savings for the first year. Reference: https://docs.aws.amazon.com/savingsplans/latest/userguide/what-is-savings-plans.html Reference: https://docs.aws.amazon.com/savingsplans/latest/userguide/sp-purchase.html
Your application team has been approved to create a new machine-learning application over the next two years. You plan to leverage numerous Amazon SageMaker instances and components to back your application. Your manager is worried about the cost potential of the services involved. How could you maximize your savings opportunities for the Amazon SageMaker service? 1. Purchase a three-year All Upfront SageMaker Savings Plan. This applies to all SageMaker instances and components within any AWS Region. 2. Purchase a one-year All Upfront Compute Savings Plan. This applies to all SageMaker instances and components within any AWS Region. 3. Purchase a one-year All Upfront SageMaker Savings Plan. This applies to all SageMaker instances and components within any AWS Region. 4. Purchase a three-year All Upfront Compute Savings Plan. This applies to all SageMaker instances and components within any AWS Region.
Purchase a one-year All Upfront SageMaker Savings Plan. This applies to all SageMaker instances and components within any AWS Region. SageMaker Savings Plans offer the maximum savings potential for all SageMaker components, and the one-year agreement type falls within the two-year period. Reference: https://docs.aws.amazon.com/savingsplans/latest/userguide/what-is-savings-plans.html Reference: https://docs.aws.amazon.com/savingsplans/latest/userguide/sp-services.html#sp-sagemaker (Incorrect) Purchase a one-year All Upfront Compute Savings Plan. This applies to all SageMaker instances and components within any AWS Region. These do not cover SageMaker components.
A leading financial institution with a low-risk appetite runs all its UK and Europe operations within AWS. They have been informed that there are concerns about the credibility of their current certificate provider. They have, therefore, decided to replace all SSL certificates obtained from the third-party provider with new certificates from the AWS Certificate Manager. What are the recommended steps for the company to transition between certificate providers? Request all SSL certificates from the existing provider, verify their authenticity, generate new SSL certificates in AWS Certificate Manager, and finally update the configurations across web applications with the new certificates. Generate new SSL certificates in AWS Certificate Manager and gradually replace the certificates on select web applications, ensuring proper functionality before transitioning all certificates to AWS.
Request all SSL certificates from the existing provider, verify their authenticity, generate new SSL certificates in AWS Certificate Manager, and finally update the configurations across web applications with the new certificates. This would be the best approach for the financial institution. It is important to verify the authenticity of the existing certificates to ensure that what is being used at present is secure, as the current provider's credibility is in question. Once the certificates have been verified, new certificates can be requested from AWS Certificate Manager, and the web application configurations can be updated to complete the transition. (Incorrect) Generate new SSL certificates in AWS Certificate Manager and gradually replace the certificates on select web applications, ensuring proper functionality before transitioning all certificates to AWS. As the financial institution is risk-averse and customer confidence is very important, verifying the current certificates as an initial step would be more beneficial before creating new SSL certificates in AWS Certificate Manager. A gradual approach for updating the certificates aligns with the institution being risk-averse. However, a balance needs to be found with how gradual this approach should be due to the credibility of the current SSL provider.
Due to strict compliance requirements, your company cannot leverage AWS cloud to host their Kubernetes clusters or manage the clusters. However, they want to try to follow the established best practices and processes implemented by the Amazon EKS service. How can your company achieve this while running entirely on-premises? 1. Run Amazon EKS. 2. This cannot be done. 3. Run the clusters on-premises using Amazon EKS Distro. 4. Run Amazon ECS anywhere.
Run the clusters on-premises using Amazon EKS Distro. Amazon EKS is based on the EKS Distro, which allows you to leverage the best practices and established processes on-premises that Amazon EKS uses in AWS. Reference: https://distro.eks.amazonaws.com/
You work for an insurance company that uses Redshift to store an extensive customer database and then generates custom reports based on this database. The reports need to be instantly accessible. However, they can be regenerated at any time, so redundancy and availability requirements are relaxed. What S3 storage class should you use to save these reports to keep costs minimal but maintain instant accessibility? S3 One Zone-IA S3 Standard IA S3 Standard S3 Glacier Deep Archive
S3 One Zone-IA This is the most cost-effective storage class that still provides instant access. One Zone-IA balances cost with redundancy, as only one AZ is used for storing the data while still allowing instant access to the data stored. Reference: https://aws.amazon.com/s3/storage-classes/#Performance_across_the_S3_Storage_Classes (Incorrect) S3 Standard IA S3 Standard IA is not the most cost-effective storage class, as data is stored in a minimum of three separate AZs.
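For illustration, uploading a report directly into One Zone-IA with boto3 might look like this (bucket, key, and file name are hypothetical):

```python
import boto3

s3 = boto3.client("s3")

with open("2024-q1-customer-report.csv", "rb") as report:
    s3.put_object(
        Bucket="example-redshift-reports",   # hypothetical bucket
        Key="reports/2024-q1-customer-report.csv",
        Body=report,
        StorageClass="ONEZONE_IA",           # single-AZ, instantly accessible, lower cost
    )
```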
You work for an online retailer where any downtime can cause a significant loss of revenue. You have architected your application to be deployed on an Auto Scaling Group of EC2 instances behind a load balancer. You have configured and deployed these resources using a CloudFormation template. The Auto Scaling Group is configured with default settings and a simple CPU utilization scaling policy. You have also set up multiple Availability Zones for high availability. The load balancer does health checks against an HTML file generated by a script. When you begin performing load testing on your application, you notice in CloudWatch that the load balancer is not sending traffic to one of your EC2 instances. What could be the problem? 1. The EC2 instance has failed the load balancer health check. 2. The instance has not been registered with CloudWatch. 3. The EC2 instance has failed EC2 status checks. 4. You are load testing
The EC2 instance has failed the load balancer health check. The load balancer will route the incoming requests only to the healthy instances. The EC2 instance may have passed status checks and be considered healthy to the Auto Scaling group, but the ELB may not use it if the ELB health check has not been met. The ELB health check has a default of 30 seconds between checks and a default of 3 checks before making a decision. Therefore, the instance could be visually available but unused for at least 90 seconds before the GUI would show it as failed. In CloudWatch, where the issue was noticed, it would appear to be a healthy EC2 instance but with no traffic, which is what was observed. https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-healthchecks.html https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-add-elb-healthcheck.html What is one of the clues we got from this question? The Auto Scaling group is configured with the default setting. The default health checks for an Auto Scaling group are EC2 status checks only. If an instance fails these status checks, the Auto Scaling group considers the instance unhealthy and replaces it.
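One way to move beyond the default EC2-only checks is to have the Auto Scaling group honor the ELB health check as well; a minimal boto3 sketch with a hypothetical group name and grace period:

```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="retail-web-asg",   # hypothetical ASG name
    HealthCheckType="ELB",                   # default is "EC2" (status checks only)
    HealthCheckGracePeriod=300,              # give the bootstrap script time to create the HTML file
)
```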
A data company has implemented a subscription service for storing video files. There are two levels of subscription: personal and professional use. The personal users can upload a total of 5 GB of data, and professional users can upload as much as 5 TB of data. The application can upload files of size up to 1 TB to an S3 Bucket. What is the best way to upload files of this size? 1. AWS SnowMobile 2. Single-part Upload 3. Multipart upload 4. AWS Snowball
The Multipart upload API enables you to upload large objects in parts. You can use this API to upload new large objects or make a copy of an existing object (see Operations on Objects). Multipart uploading is a three-step process: You initiate the upload, you upload the object parts, and after you have uploaded all the parts, you complete the multipart upload. Upon receiving the complete multipart upload request, Amazon S3 constructs the object from the uploaded parts, and you can then access the object just as you would any other object in your bucket. You can list all of your in-progress multipart uploads or get a list of the parts that you have uploaded for a specific multipart upload. Each of these operations is explained in this section. https://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html
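The three-step flow described above can be sketched with the low-level boto3 calls (bucket, key, and file name are hypothetical; in practice the high-level upload_file transfer manager handles this automatically):

```python
import boto3

s3 = boto3.client("s3")
bucket, key = "example-video-bucket", "uploads/raw-footage.mp4"   # hypothetical names
part_size = 100 * 1024 * 1024                                     # 100 MiB parts (minimum part size is 5 MiB)

upload = s3.create_multipart_upload(Bucket=bucket, Key=key)       # step 1: initiate
parts, part_number = [], 1
with open("raw-footage.mp4", "rb") as f:
    while True:
        chunk = f.read(part_size)
        if not chunk:
            break
        resp = s3.upload_part(Bucket=bucket, Key=key,              # step 2: upload each part
                              UploadId=upload["UploadId"],
                              PartNumber=part_number, Body=chunk)
        parts.append({"PartNumber": part_number, "ETag": resp["ETag"]})
        part_number += 1

s3.complete_multipart_upload(Bucket=bucket, Key=key,               # step 3: complete
                             UploadId=upload["UploadId"],
                             MultipartUpload={"Parts": parts})
```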
You use AWS Route53 as your DNS service, and you have updated your domain, hello.acloud.guru, to point to a new Elastic Load Balancer (ELB). However, when you check the update, users are still redirected to the old ELB. What could be the problem? 1. The A record needs to be changed to a CNAME. 2. Your Application Load Balancer needs to be a Network Load Balancer to interface with Route53. 3. The TTL needs to expire. After that, the record will be updated. 4. The CNAME needs to be changed to an A record.
The TTL needs to expire. After that, the record will be updated. You need to wait for the TTL to expire. Your computer has cached the previous DNS response, but once the TTL has expired, it will request and receive the new address. (Incorrect) The CNAME needs to be changed to an A record. Changing the CNAME to an A record will not achieve anything.
A new startup company decided to use AWS to host their web application. They configure a VPC as well as two subnets within the VPC. They also attach an internet gateway to the VPC. They create the EC2 instance in the first subnet to host their web application. They finish the configuration by making the application accessible from the Internet. The second subnet has an instance hosting a smaller, secondary application. However, this application is not currently accessible from the Internet. What could be potential problems? 1. The EC2 instance is not attached to an internet gateway. 2. The second subnet does not have a public IP address. 3. The second subnet does not have a route in the route table to the internet gateway. 4. The EC2 instance does not have a public IP address.
The second subnet does not have a route in the route table to the internet gateway. (Incorrect) The EC2 instance does not have a public IP address. (Incorrect) The EC2 instance is not attached to an internet gateway. This is not required; an internet gateway is attached to the VPC, and the subnet's route table needs a route to it. To enable access to or from the internet for the instances in a subnet in a VPC, you must do the following: attach an internet gateway to your VPC; add a route to your subnet's route table that directs internet-bound traffic to the internet gateway; ensure that instances in your subnet have a globally unique IP address (public IPv4 address, Elastic IP address, or IPv6 address); and ensure that your network access control lists and security group rules allow the relevant traffic to flow to and from your instance. If a subnet is associated with a route table with a route to an internet gateway, it is known as a public subnet. If a subnet is associated with a route table that does not have a route to an internet gateway, it is known as a private subnet. https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html
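Fixing the second subnet comes down to adding a default route to the internet gateway in its route table; a minimal boto3 sketch with placeholder IDs:

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder IDs for the second subnet's route table and the VPC's internet gateway.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",
    DestinationCidrBlock="0.0.0.0/0",        # send internet-bound traffic ...
    GatewayId="igw-0123456789abcdef0",       # ... to the internet gateway
)
```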
You currently host a file server on EC2 with the files being stored on EBS. After an outage lasting several hours, you need to design a fault-tolerant architecture so that if the EC2 instance goes down, your customers will still be able to access their files. What is the most fault-tolerant architecture below? 1. An EC2 instance behind a classic load balancer connected to an EBS volume in another region. 2. Two EC2 instances behind an application load balancer and autoscaling group connected to two EBS volumes in separate regions. 3. Lambda Function behind API Gateway with three EBS volumes mounted to the function. 4. Three EC2 instances behind an Application Load Balancer and Autoscaling group, connected to an EFS mount.
Three EC2 instances behind an Application Load Balancer and Autoscaling group connected to an EFS mount. This is the most fault-tolerant solution in the scenario. (Incorrect) Two EC2 instances behind an application load balancer and autoscaling group connected to two EBS volumes in separate regions. This is not technically possible. You cannot have EBS in separate regions.
A small financial technology company has decided to migrate all infrastructure to AWS. They do not have the budget to plan and refactor their infrastructure to fit AWS properly, so they have decided to lift and shift everything for now. The CEO and CTO have required that all on-premises applications and infrastructure be moved into AWS by the end of the year, and they will refactor it later. Which solution is the best fit for their migration needs? Use AWS DMS to migrate all infrastructure to AWS. Then, use the AWS SCT to refactor applications automatically to best use the AWS services. Use AWS DMS to migrate all infrastructure to AWS. Use AWS MGN to automate lifting and shifting all infrastructure to AWS. Use AWS MGN to automate lifting and shifting all infrastructure to AWS and enable the 'Refactor During Migration' option to automatically refactor applications to make the best use of the AWS service.
Use AWS Application Migration Service (MGN) to automate the process of lifting and shifting all infrastructure to AWS. AWS Application Migration Service (MGN) is a service meant to simplify and optimize the lift-and-shift process for migrating existing on-premises infrastructure to the AWS cloud. It will automatically convert and launch your servers into AWS so that you can take advantage of all of the AWS benefits. Reference: https://docs.aws.amazon.com/mgn/latest/ug/what-is-application-migration-service.html (Incorrect) Use AWS DMS to migrate all infrastructure to AWS. Then, use the AWS SCT to refactor applications automatically to best use the AWS services. AWS DMS is a database migration service that is meant for only migrating database instances. The AWS SCT is a schema conversion tool that you can leverage with AWS DMS to convert schemas of existing and future databases.
Your team owns production, staging, and development accounts. The CEO pushed to break down costs to the most detailed level and store daily CSV reports in S3 for ingestion into the company's internal analytics tooling. What would be the most efficient solution for this scenario? 1. Use AWS Cost and Usage Reports to generate reports and export CSV reports daily to a centralized Amazon S3 bucket. 2. Use AWS Budgets to alert and generate reports on current spending, and use AWS Fargate to pull data, generate CSV reports, and then push them to S3. 3. Use AWS Budgets to alert and generate reports, and use AWS Lambda to pull data, generate CSV reports, and then push them to S3. 4. Use AWS Cost and Usage Reports to generate reports with the required detail. Set up Amazon EventBridge (Amazon CloudWatch Events) to trigger a rule to create and then export CSV reports daily to a centralized S3 bucket.
Use AWS Cost and Usage Reports to generate reports and have it export CSV reports daily to a centralized Amazon S3 bucket. AWS Cost and Usage Reports offer the greatest amount of detail for spending reports. They can also be set up to store updated reports in Amazon S3 every 24 hours automatically.
You work for a fintech company that stores its backups on an in-house tape solution at its headquarters. They need to move the solution to the cloud and have a mandatory retention period of seven years due to regulations. They will only access the backups once a year for auditing purposes and can be flexible on how long the access will take. What is the most cost-effective solution? 1. Use AWS Storage Gateway to back the environment up to Amazon S3. Create a lifecycle rule to archive the data to S3 Glacier Deep Archive once every seven years. 2. Use AWS Storage Gateway to back the environment up to Amazon S3. After you eject the tapes from your backup application, your tapes can be archived to S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive. Configure the lifecycle rules to keep the backups for the required seven years.
Use AWS Storage Gateway to back the environment up to Amazon S3. After you eject the tapes from your backup application, your tapes can be archived to S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive. Configure the lifecycle rules to keep the backups for the required seven years. This is the most cost-effective solution. (Incorrect) Use AWS Storage Gateway to back the environment up to Amazon S3. Create a lifecycle rule to archive the data to S3 Glacier Deep Archive once every seven years. Although technically feasible, this is not the most cost-efficient answer.
You have been brought in as a consultant to help a medium-sized company leverage AWS for their application needs. The application currently runs as Docker containers on-premises. However, the technical leaders want to shift to the cloud and leverage AWS as much as possible for all compute management needs. They would rather not use the Kubernetes orchestration service at this time. Which AWS service would you recommend for simplifying the management of the containerized services, causing the least development disruption, while also optimizing for cost? Run on-premises containers using Amazon EKS Anywhere. Migrate cloud-based containers to Amazon EKS. Run Amazon ECS Anywhere for applications running on-premises. Use Amazon ECS Fargate for the cloud application.
Use Amazon ECS Fargate for the cloud application. Amazon ECS Fargate allows the easiest transition to running containers in AWS. It lets the team mimic what is currently running while minimizing the operational overhead required to manage and orchestrate containers. (Incorrect) Run Amazon ECS Anywhere for applications running on-premises. This would keep the containers on the same on-premises hardware rather than shifting the compute to AWS, which does not meet the requirement.
You have a social media website that uses a DynamoDB table on the backend. Your monitoring detects that the DynamoDB table begins to throttle requests during high peak loads, which causes the slow performance of the website. How can you remedy this? 1. Put the DynamoDB table behind an autoscaling group. 2. Create an Aurora read replica and spread the load between DynamoDB and Aurora. 3. Migrate the database to RDS MySQL and turn on Multi-AZ 4. Use DynamoDB Autoscaling.
Use DynamoDB Autoscaling. This is the best answer. Reference: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/AutoScaling.html (Incorrect) Create an Aurora read replica and spread the load between DynamoDB and Aurora. This is not technically viable.
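DynamoDB auto scaling is configured through Application Auto Scaling; a minimal sketch for write capacity (the table name and capacity limits are hypothetical):

```python
import boto3

aas = boto3.client("application-autoscaling")
resource_id = "table/social-media-posts"        # hypothetical table name

aas.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId=resource_id,
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    MinCapacity=5,
    MaxCapacity=500,
)

aas.put_scaling_policy(
    ServiceNamespace="dynamodb",
    ResourceId=resource_id,
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    PolicyName="keep-writes-at-70-percent",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBWriteCapacityUtilization"
        },
    },
)
```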
Your boss has tasked you with decoupling your existing web frontend from the backend. Both applications run on EC2 instances. After you investigate the existing architecture, you find that (on average) the backend resources are processing about 50,000 requests per second and will need something that supports their extreme level of message processing. It is also important that each request is processed only one time. What can you do to decouple these resources? 1. Use S3 to store the messages sent between the EC2 instances. 2. Use SQS Standard. Include a unique ordering ID in each message, and have the backend application use this to deduplicate messages. 3. Use SQS FIFO to decouple the applications. 4. Upsize your EC2 instances to reduce the message load on the backend servers.
Use SQS Standard. Include a unique ordering ID in each message, and have the backend application use this to deduplicate messages. This would be a great choice, as SQS Standard can handle this extreme performance level. If the application did not require this level of performance, then SQS FIFO would be the better and easier choice. https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/quotas-messages.html (Incorrect) Use SQS FIFO to decouple the applications. While this would seem like the correct answer at first glance, it's important to know that SQS FIFO supports at most 3,000 messages per second with batching and cannot handle the extreme level of performance that's required in this situation. https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/quotas-messages.html
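A rough sketch of the pattern (the queue URL is a placeholder, and the consumer-side "seen" set stands in for a shared deduplication store such as DynamoDB or ElastiCache):

```python
import uuid
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/backend-requests"  # placeholder

# Producer: stamp each message with a unique ID the backend can deduplicate on.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody='{"action": "process-order"}',
    MessageAttributes={"DedupId": {"DataType": "String", "StringValue": str(uuid.uuid4())}},
)

# Consumer: standard queues deliver at-least-once, so skip IDs already processed.
seen = set()   # stand-in for a shared deduplication store
messages = sqs.receive_message(QueueUrl=queue_url,
                               MessageAttributeNames=["DedupId"],
                               MaxNumberOfMessages=10).get("Messages", [])
for msg in messages:
    dedup_id = msg["MessageAttributes"]["DedupId"]["StringValue"]
    if dedup_id not in seen:
        seen.add(dedup_id)
        # ... process the request exactly once ...
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```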
You have an online booking system for vacations that uses EC2 instances on the front end to poll an SQS queue. You noticed that some bookings have been processed twice, meaning the customer paid for their vacation twice. This is causing customer service issues; please fix it as soon as possible. What can you do to stop this from happening again in the future? Replace SQS with Amazon Simple Workflow Service. Use an Amazon SQS FIFO queue instead. Change the message size in SQS. Alter the visibility timeout of SQS.
Use an Amazon SQS FIFO queue instead. FIFO queues provide exactly-once processing: message deduplication ensures that a booking submitted more than once is not delivered and processed twice, which directly addresses the double-charging problem. (Incorrect) Replace SQS with Amazon Simple Workflow Service. Amazon SWF is a service designed for orchestrating workflows. It can be used for more complex workflows, but it is not primarily focused on solving the problem of duplicate message processing, so replacing SQS with SWF is not the most straightforward or efficient solution. (Incorrect) Alter the visibility timeout of SQS. The visibility timeout determines how long a message remains invisible in the queue after a worker starts processing it. Changing the visibility timeout can help in some scenarios but will not completely prevent duplicate processing. It is more about giving your workers additional time to process messages without them becoming visible to other workers.
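For illustration, a FIFO queue with deduplication might be set up like this (the queue name and booking values are hypothetical):

```python
import boto3

sqs = boto3.client("sqs")

# FIFO queue names must end in ".fifo"; content-based deduplication hashes the body,
# so the same booking submitted twice within the dedup window is delivered only once.
queue = sqs.create_queue(
    QueueName="vacation-bookings.fifo",
    Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "true"},
)

sqs.send_message(
    QueueUrl=queue["QueueUrl"],
    MessageBody='{"bookingId": "B-1001", "amount": 1499.00}',
    MessageGroupId="customer-42",        # ordering scope
    MessageDeduplicationId="B-1001",     # explicit dedup ID (optional with content-based dedup)
)
```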
You have a website that uses MongoDB, a NoSQL database, on its backend. It requires high, sequential read and write access to very large data sets on local storage. You need to migrate this database to AWS and host it on EC2 with Amazon EBS. What EC2 instance type would be best suited to handle I/O-intensive database workloads and sequential writes? 1. Use storage-optimized instances with provisioned IOPS SSD volumes 2. Use storage-optimized instances with general-purpose SSD volumes 3. Use compute-optimized instances with general-purpose SSD volumes 4. Use memory-optimized instances with provisioned IOPS SSD volumes
Use storage-optimized instances with provisioned IOPS SSD volumes Storage-optimized instances are designed for workloads that require high, sequential read and write access to very large data sets on local storage. Provisioned IOPS SSD volumes are recommended for I/O-intensive database workloads that require sustained IOPS performance. AWS Documentation: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html (Incorrect) Use compute-optimized instances with general-purpose SSD volumes Compute-optimized instances are ideal for compute-bound applications that benefit from high-performance processors. A compute-optimized instance with general-purpose SSD volumes would not give you a suitable performance to handle I/O-intensive database workloads and sequential writes.
You have a secure web application hosted on AWS using Application Load Balancers, Auto Scaling, and a fleet of EC2 instances connected to an RDS database. You need to ensure that your RDS database can only be accessed using the profile credentials specific to your EC2 instances (via an authentication token). How can you achieve this? 1. Using IAM roles 2. Using Active Directory federation via Amazon Inspector 3. Using Amazon Cognito 4. Using IAM database authentication
Using IAM database authentication IAM database authentication allows an RDS database to be accessed only with a short-lived authentication token generated using the profile credentials specific to your EC2 instances. (Incorrect) Using Active Directory federation via Amazon Inspector This is not technically possible. You cannot perform Active Directory federation using Amazon Inspector.
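A minimal sketch of connecting with an authentication token (this assumes a MySQL-compatible RDS instance with IAM authentication enabled, the PyMySQL driver, and placeholder endpoint and user values):

```python
import boto3
import pymysql

host, port, user = "appdb.abc123xyz.us-east-1.rds.amazonaws.com", 3306, "app_user"  # placeholders

# The EC2 instance profile credentials sign this short-lived authentication token.
token = boto3.client("rds").generate_db_auth_token(
    DBHostname=host, Port=port, DBUsername=user, Region="us-east-1"
)

# Use the token instead of a password; TLS is required for IAM database authentication.
conn = pymysql.connect(host=host, port=port, user=user, password=token,
                       database="appdb", ssl={"ca": "/opt/rds-ca-bundle.pem"})
```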
You work for a large PR and advertising company in New York City. They have an internal file server that mounts its storage using an Internet Small Computer System Interface (iSCSI) compliant storage device. The file server's iSCSI attached storage device is beginning to run out of storage. They are considering replacing the file server's iSCSI-attached storage device with a cloud-based solution that will continue to support iSCSI. What solution should you recommend? File Gateway S3 Volume Gateway AWS Storage Gateway
Volume Gateway exposes cloud-backed storage as iSCSI block volumes, so it can directly replace the iSCSI-attached storage device and would be a good choice. (Incorrect) File Gateway File Gateway presents storage as NFS or SMB file shares rather than iSCSI block volumes, so this would not be correct.
When can you change security groups for instances running in AWS? 1. You can change the security groups for an instance when the instance is in the running or stopped state. 2. You cannot change the security groups for an instance when the instance is in the running or stopped state. 3. You cannot change security groups. Create a new instance and attach the desired security groups. 4. You can change the security groups for an instance when the instance is in the pending or stopped state.
You can change the security groups for an instance when the instance is in the running or stopped state. After you launch an instance into a VPC, you can change the security groups that are associated with the instance. You can change the security groups for an instance when the instance is in the running or stopped state. https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html
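For example, swapping the security groups on a running (or stopped) instance is a single API call; a boto3 sketch with placeholder IDs:

```python
import boto3

ec2 = boto3.client("ec2")

# The Groups list replaces the instance's current security group associations.
ec2.modify_instance_attribute(
    InstanceId="i-0123456789abcdef0",                          # placeholder instance ID
    Groups=["sg-0aaa111122223333a", "sg-0bbb444455556666b"],   # placeholder security group IDs
)
```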
A testing team uses a group of EC2 instances to run batch, automated tests on an application. The tests run overnight but do not take all night. The instances sit idle for long periods and accrue unnecessary charges. How can you stop these instances when they are idle for long periods? 1. Write a Python script that queries the instance status. Also, write a Lambda function that can be triggered upon a certain status and stop the instance. 2. Write a cron job that queries the instance status. Also, write a Lambda function that can be triggered upon a certain status and stop the instance. 3. You can create a CloudWatch alarm that is triggered when the average CPU utilization percentage has been lower than 10 percent for 4 hours and stops the instance.
You can create a CloudWatch alarm that is triggered when the average CPU utilization percentage has been lower than 10 percent for 4 hours and stops the instance. Adding Stop Actions to Amazon CloudWatch Alarms: You can create an alarm that stops an Amazon EC2 instance when a certain threshold has been met. For example, you may run development or test instances and occasionally forget to shut them off. You can create an alarm that is triggered when the average CPU utilization percentage has been lower than 10 percent for 24 hours, signaling that it is idle and no longer in use. You can adjust the threshold, duration, and period to suit your needs, plus you can add an SNS notification so that you will receive an email when the alarm is triggered. Amazon EC2 instances that use an Amazon Elastic Block Store volume as the root device can be stopped or terminated, whereas instances that use the instance store as the root device can only be terminated. https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/UsingAlarmActions.html (Incorrect) Write a cron job that queries the instance status. Also, write a Lambda function that can be triggered upon a certain status and stop the instance. This is creating functionality already provided by CloudWatch.
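A minimal boto3 sketch of the alarm described in the answer (the instance ID is a placeholder; the built-in stop action means no custom script or Lambda function is needed):

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="stop-idle-test-instance",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder ID
    Statistic="Average",
    Period=3600,                 # one-hour periods ...
    EvaluationPeriods=4,         # ... evaluated 4 times = below threshold for 4 hours
    Threshold=10.0,
    ComparisonOperator="LessThanThreshold",
    AlarmActions=["arn:aws:automate:us-east-1:ec2:stop"],   # built-in EC2 stop action
)
```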
You need to design a stateless web application tier. Which of the following would NOT help you achieve this? 1. Save your session data on an EBS volume shared by EC2 instances running across different Availability Zones. 2. Store the session data in Elasticache. 3. Store the session data in cookies saved to the users' browsers. 4. Save your session data in Amazon RDS.
Save your session data on an EBS volume shared by EC2 instances running across different Availability Zones. This option would NOT help, because it is not possible: Amazon EBS Multi-Attach enables you to attach a single Provisioned IOPS SSD (io1 or io2) volume to multiple instances only within the same Availability Zone. This means you cannot have EC2 instances running across different Availability Zones sharing the same EBS volume. AWS Documentation: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volumes-multi.html (Incorrect) Store the session data in cookies saved to the users' browsers. This would enable a stateless application, as the session data is saved to the users' browsers. What is a stateless web application tier? Your web tier must be stateless to take advantage of multiple web servers in an automatic scaling configuration. A stateless application needs no knowledge of previous interactions and stores no session information.
A small startup is beginning to configure IAM for their organization. The user logins have been created, and now the focus will shift to the permissions granted to those users. An admin starts creating identity-based policies. To which item can an identity-based policy not be attached? 1. resources 2. roles 3. users 4. groups
resources Resource-based policies are attached to a resource. For example, you can attach resource-based policies to Amazon S3 buckets, Amazon SQS queues, and AWS Key Management Service encryption keys. For a list of services that support resource-based policies, see AWS services that work with IAM. Reference: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_identity-vs-resource.html (Incorrect) roles Identity-based policies are attached to an IAM user, group, or role. These policies let you specify what that identity can do (its permissions). For example, you can attach the policy to the IAM user named John, stating that he can perform the Amazon EC2 RunInstances action. The policy could further state that John can get items from an Amazon DynamoDB table named MyCompany. You can also allow John to manage his own IAM security credentials. Identity-based policies can be managed or inline.
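The John/RunInstances example above, expressed as an inline identity-based policy attached to a user (the user and policy names are hypothetical):

```python
import json
import boto3

iam = boto3.client("iam")

# Identity-based policy: attached to a user (it could equally go on a group or role).
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "ec2:RunInstances",
        "Resource": "*",
    }],
}

iam.put_user_policy(
    UserName="john",                         # hypothetical user
    PolicyName="allow-run-instances",
    PolicyDocument=json.dumps(policy_document),
)
```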
You have taken over management of several instances in the company AWS environment. You would like to quickly review scripts to bootstrap the instances at runtime. A URL command can be used to do this. What can you append to the URL http://169.254.169.254/latest/ to retrieve this data? 1. user-data/ 2. meta-data/ 3. instance-demographic-data/ 4. instance-data/
user-data/ When you launch an instance in Amazon EC2, you have the option of passing user data to the instance that can be used to perform common automated configuration tasks and even run scripts after the instance starts. You can pass two types of user data to Amazon EC2: shell scripts and cloud-init directives. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-add-user-data.html https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ec2-instance-metadata.html https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html#user-data-shell-scripts (Incorrect) meta-data/ The meta-data/ path returns information about the instance itself (such as instance IDs and IP addresses), not the bootstrap scripts; you would use user-data/ to retrieve the scripts that configure EC2 instances.
You work for a security company that hosts a secure file storage service on S3. All files uploaded to the S3 buckets must have AES-256 encryption using server-side encryption (SSE-S3). Which of the following request headers must be used? 1. x-enable-server-side-encryption-s3 2. x-amz-server-side-encryption 3. x-amz-server-side-encryption-enable-s3 4. x-enable-server-side-encryption
x-amz-server-side-encryption x-amz-server-side-encryption is the correct header to use. Reference: Using Server-Side Encryption (Incorrect) x-amz-server-side-encryption-enable-s3 There is no such request header. Reference: https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingServerSideEncryption.html
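When using an SDK rather than raw REST calls, the header is set through the ServerSideEncryption parameter; a boto3 sketch with a hypothetical bucket and key:

```python
import boto3

s3 = boto3.client("s3")

# boto3 sends the x-amz-server-side-encryption: AES256 request header for you.
s3.put_object(
    Bucket="example-secure-files",        # hypothetical bucket
    Key="documents/contract.pdf",
    Body=b"example file contents",
    ServerSideEncryption="AES256",        # SSE-S3
)
```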
