Whizlabs Practice Test #2


A company has a set of Linux based instances on their On-premises infrastructure. They want to have an equivalent block storage device on AWS which can be used to store the same datasets as on the Linux based instances. As an architect, which of the following storage devices would you recommend? Please select : A. AWS EBS B. AWS S3 C. AWS EFS D. AWS DynamoDB

A Amazon Elastic Block Store (Amazon EBS) provides block level storage volumes for use with EC2 Instances. EBS Volumes are highly available and reliable storage volumes that can be attached to any running instance that is in the same Availability Zone. EBS Volumes that are attached to an EC2 Instance are exposed as storage volumes that persist independently from the life of the instance.

A company needs to provision test environments in a short duration. Also required is an ability to tear them down easily for cost optimization. How can this be achieved? Please select : A. Use CloudFormation templates to provision the resources accordingly. B. Use a custom script to create and tear down the resources. C. Use IAM Policies for provisioning the resources and tearing them down accordingly. D. Use Auto Scaling groups to provision the resources on demand.

A AWS CloudFormation provides templates that you can use to create AWS resources and provision them in an orderly and predictable fashion. This can be useful for creating short-lived environments, such as test environments. You can leverage the AWS APIs and AWS CloudFormation to automatically provision and decommission entire environments as you need them. This approach is well suited for development or test environments that run only in defined business hours or periods of time. Auto Scaling groups, by contrast, cannot provision and tear down a complete test environment; with CloudFormation you simply declare the list of resources the environment requires, which makes both setup and teardown straightforward.
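As an illustration, a minimal CloudFormation template for such a throwaway environment can be built and inspected in Python (the AMI ID, instance type, and logical names below are placeholders, not recommendations):

```python
import json

# Hypothetical minimal template describing a single test instance.
# Creating a stack from it provisions the instance; deleting the
# stack tears everything down again.
test_env_template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Short-lived test environment",
    "Resources": {
        "TestInstance": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-12345678",   # placeholder AMI ID
                "InstanceType": "t2.micro",  # placeholder instance type
            },
        }
    },
}

print(json.dumps(test_env_template, indent=2))
```

With a template like this, `aws cloudformation create-stack` stands the environment up and `aws cloudformation delete-stack` removes every resource in it, which is what makes CloudFormation a good fit for short-lived test environments.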

An application needs to have a messaging system in AWS. It is of the utmost importance that the order of messages is preserved and duplicate messages are not sent. Which of the following services can help fulfill this requirement? Please select : A. AWS SQS FIFO B. AWS SNS C. AWS Config D. AWS ELB

A One can use SQS FIFO queues for this purpose. Amazon SQS is a reliable and highly scalable managed message queue service for storing messages in transit between application components. FIFO queues complement the existing Amazon SQS standard queues, which offer high throughput, best-effort ordering, and at-least-once delivery. FIFO queues have essentially the same features as standard queues, but provide the added benefits of supporting ordering and exactly-once processing. FIFO queues provide additional features that help prevent unintentional duplicates from being sent by message producers or from being received by message consumers. Additionally, message groups allow multiple separate ordered message streams within the same queue. As per AWS, SQS FIFO queues ensure that each message is delivered only once and in sequential order (i.e., First In, First Out), whereas SNS cannot guarantee that a message is delivered only once. From the SNS FAQ: Q: How many times will a subscriber receive each message? Although most of the time each message will be delivered to your application exactly once, the distributed nature of Amazon SNS and transient network conditions could result in occasional, duplicate messages at the subscriber end. Developers should design their applications such that processing a message more than once does not create any errors or inconsistencies.
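As a sketch of how this looks in practice, the following builds the parameters an SQS FIFO `send_message` call would take (the queue URL and group name are hypothetical; the deduplication ID is derived from a content hash, mirroring SQS content-based deduplication):

```python
import hashlib

def fifo_message_params(queue_url, group_id, body):
    """Build kwargs for an SQS FIFO send_message call.

    MessageGroupId preserves ordering within a group, and
    MessageDeduplicationId (here a SHA-256 of the body) lets SQS
    drop duplicates sent within the 5-minute deduplication interval.
    """
    return {
        "QueueUrl": queue_url,
        "MessageBody": body,
        "MessageGroupId": group_id,
        "MessageDeduplicationId": hashlib.sha256(body.encode()).hexdigest(),
    }

# Two identical sends produce the same deduplication ID, so SQS
# would discard the second one as a duplicate.
p1 = fifo_message_params(
    "https://sqs.us-east-1.amazonaws.com/123456789012/orders.fifo",
    "orders", "order-42")
p2 = fifo_message_params(
    "https://sqs.us-east-1.amazonaws.com/123456789012/orders.fifo",
    "orders", "order-42")
print(p1["MessageDeduplicationId"] == p2["MessageDeduplicationId"])
```

Note that FIFO queue names must end in `.fifo`, and every send must carry a MessageGroupId.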

A company is building a Two-Tier web application to serve dynamic transaction-based content. The Data Tier uses an Online Transactional Processing (OLTP) database. What services should you leverage to enable an elastic and scalable Web Tier? Please select : A. Elastic Load Balancing, Amazon EC2, and Auto Scaling B. Elastic Load Balancing, Amazon RDS with Multi-AZ, and Amazon S3 C. Amazon RDS with Multi-AZ and Auto Scaling D. Amazon EC2, Amazon Dynamo DB, and Amazon S3

A The question asks for a scalable Web Tier, not a Database Tier. Options B, C, and D can therefore be eliminated, since each of them includes a database service (RDS or DynamoDB).

A company has a lot of data hosted on their On-premises infrastructure. Running out of storage space, the company wants a quick win solution using AWS. Which of the following would allow easy extension of their data infrastructure to AWS? Please select : A. The company could start using Gateway Cached Volumes. B. The company could start using Gateway Stored Volumes. C. The company could start using the Simple Storage Service. D. The company could start using Amazon Glacier.

A Gateway cached volumes can be used to start storing data in S3. You store your data in Amazon Simple Storage Service (Amazon S3) and retain a copy of frequently accessed data subsets locally. Cached volumes offer a substantial cost saving on primary storage and minimize the need to scale your storage on-premises. You also retain low-latency access to your frequently accessed data. Note: The question states that they are running out of storage space and need a solution to store data with AWS rather than a backup. Gateway cached volumes are appropriate for this purpose: they help the company avoid scaling their on-premises data center and let them store data on an AWS storage service while keeping the most recently used files available locally at low latency.

A company has a set of EC2 Instances hosted on the AWS Cloud. These instances form a web server farm which services a web application accessed by users on the Internet. Which of the following would help make this architecture more fault tolerant? Choose 2 answers from the options given below. Please select : A. Ensure the instances are placed in separate Availability Zones. B. Ensure the instances are placed in separate regions. C. Use an AWS Load Balancer to distribute the traffic. D. Use Auto Scaling to distribute the traffic.

A and C A load balancer distributes incoming application traffic across multiple EC2 Instances in multiple Availability Zones. This increases the fault tolerance of your applications. Elastic Load Balancing detects unhealthy instances and routes traffic only to healthy instances. You can automatically increase the size of your Auto Scaling group when demand goes up and decrease it when demand goes down. As the Auto Scaling group adds and removes EC2 instances, you must ensure that the traffic for your application is distributed across all of your EC2 instances. The Elastic Load Balancing service automatically routes incoming web traffic across such a dynamically changing number of EC2 instances. Your load balancer acts as a single point of contact for all incoming traffic to the instances in your Auto Scaling group. To use a load balancer with your Auto Scaling group, create the load balancer and then attach it to the group.

A company stores its log data in an S3 bucket. There is a current need to have search capabilities available for the data in S3. How can this be achieved in an efficient and ongoing manner? *Choose 2 answers* from the options below. Each answer forms a part of the solution. Please select : A. Use an AWS Lambda function which gets triggered whenever data is added to the S3 bucket. B. Create a Lifecycle Policy for the S3 bucket. C. Load the data into Amazon Elasticsearch. D. Load the data into Glacier.

A and C AWS Elasticsearch provides full search capabilities and can be used for log files stored in the S3 bucket. You can integrate your Amazon ES domain with Amazon S3 and AWS Lambda. Any new data sent to an S3 bucket triggers an event notification to Lambda, which then runs your custom Java or Node.js application code. After your application processes the data, it streams the data to your domain.

Your company currently has a web distribution hosted using the AWS CloudFront service. The IT Security department has confirmed that the application using this web distribution now falls under the scope of PCI compliance. Which of the following steps need to be carried out to ensure that the compliance objectives are met? Choose *two answers* from the choices below. Please select : A. Enable CloudFront access logs. B. Enable Cache in CloudFront. C. Capture requests that are sent to the CloudFront API. D. Enable VPC Flow Logs

A and C If you run PCI or HIPAA-compliant workloads based on the AWS Shared Responsibility Model, we recommend that you log your CloudFront usage data for the last 365 days for future auditing purposes. To log usage data, you can do the following: Enable CloudFront access logs. Capture requests that are sent to the CloudFront API. Option B helps to reduce latency. Option D - VPC flow logs capture information about the IP traffic going to and from network interfaces in a VPC but not for CloudFront.

You have an application hosted on AWS consisting of EC2 Instances launched via an Auto Scaling Group. You notice that the EC2 Instances are not scaling up on demand. What checks can be done to ensure that the scaling occurs as expected? Please select : A. Ensure that the right metrics are being used to trigger the scale out. B. Ensure that ELB health checks are being used. C. Ensure that the instances are placed across multiple Availability Zones. D. Ensure that the instances are placed across multiple regions.

A. If your scaling events are not based on the right metrics and do not have the right threshold defined, then the scaling will not occur as you want it to happen.

An EC2 Instance hosts a Java based application that accesses a DynamoDB table. This EC2 Instance is currently serving production users. Which of the following is a secure way for the EC2 Instance to access the DynamoDB table? Please select : A. Use IAM Roles with permissions to interact with DynamoDB and assign it to the EC2 Instance. B. Use KMS Keys with the right permissions to interact with DynamoDB and assign it to the EC2 Instance. C. Use IAM Access Keys with the right permissions to interact with DynamoDB and assign it to the EC2 Instance. D. Use IAM Access Groups with the right permissions to interact with DynamoDB and assign it to the EC2 Instance.

A. To ensure secure access to AWS resources from EC2 Instances, always assign a role to the EC2 Instance. An IAM role is similar to a user, in that it is an AWS identity with permission policies that determine what the identity can and cannot do in AWS. However, instead of being uniquely associated with one person, a role is intended to be assumable by anyone who needs it. Also, a role does not have standard long-term credentials (password or access keys) associated with it. Instead, if a user assumes a role, temporary security credentials are created dynamically and provided to the user. You can use roles to delegate access to users, applications, or services that don't normally have access to your AWS resources. Note: You can attach an IAM role to an existing EC2 instance. https://aws.amazon.com/about-aws/whats-new/2017/02/new-attach-an-iam-role-to-your-existing-amazon-ec2-instance/ For more information on IAM Roles, please refer to the below URL: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html
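To make the role concrete, these are the two policy documents such a role would typically carry, written here as Python dicts: a trust policy that lets EC2 assume the role, and a permissions policy scoped to the table. The table name, region, and account ID below are placeholders.

```python
import json

# Trust policy: allows the EC2 service to assume this role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# Permissions policy: grants only the DynamoDB actions the
# application needs, on one (placeholder) table.
permissions_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/AppTable",
    }],
}

print(json.dumps(trust_policy))
```

Because the instance obtains temporary credentials for this role automatically, no access keys ever need to be stored on the instance.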

You have an application hosted on AWS that writes images to an S3 bucket. The concurrent number of users on the application is expected to reach around 10,000 with approximately 500 reads and writes expected per second. How should the architect maximize Amazon S3 performance? Please select : A. Prefix each object name with a random string. B. Use the STANDARD_IA storage class. C. Prefix each object name with the current date. D. Enable versioning on the S3 bucket.

A. If the request rate is high, you can use hash keys or random strings to prefix the object name. In such a case, the partitions used to store the objects will be better distributed and hence allow for better read/write performance for your objects. The STANDARD_IA storage class is for infrequent data access and does not affect request performance. Option C is not a good solution because date-based prefixes cluster objects onto the same partition, creating a hot spot. Versioning does not make any difference to performance in this case.
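The random/hash prefix idea can be sketched as a small helper (the key layout is an assumption; note also that S3 has since raised its per-prefix request limits, so this technique matters less on current S3 than it did when this question was written):

```python
import hashlib

def prefixed_key(original_key, prefix_len=4):
    """Prepend a short, deterministic hash prefix to an S3 key so
    that keys spread across S3 index partitions instead of
    clustering under one date-like prefix."""
    digest = hashlib.md5(original_key.encode()).hexdigest()
    return f"{digest[:prefix_len]}/{original_key}"

print(prefixed_key("2018-03-01/photo-0001.jpg"))
```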

You have a set of on-premises virtual machines used to serve a web-based application. These are placed behind an on-premises load-balancing solution. You need to ensure that an unhealthy virtual machine is taken out of rotation. Which of the following would quickly help fulfill this requirement? Please select : A. Use Route 53 health checks to monitor the endpoints. B. Move the solution to AWS and use a Classic Load Balancer. C. Move the solution to AWS and use an Application Load Balancer. D. Move the solution to AWS and use a Network Load Balancer.

A. Route 53 health checks can be used for any endpoint that can be accessed via the Internet. Hence, this would be an ideal option for monitoring endpoints. AWS Documentation mentions the following: You can configure a health check that monitors an endpoint that you specify either by IP address or by domain name. At regular intervals that you specify, Route 53 submits automated requests over the Internet to your application, server, or other resource to verify that it's reachable, available, and functional. Once enabled, Route 53 automatically configures and manages health checks for individual ELB nodes. Route 53 also takes advantage of the EC2 instance health checking that ELB performs. By combining the results of health checks of your EC2 instances and your ELBs, Route 53 DNS Failover is able to evaluate the health of the load balancer and the health of the application running on the EC2 instances behind it. In other words, if any part of the stack goes down, Route 53 detects the failure and routes traffic away from the failed endpoint. The AWS documentation states that you can create a Route 53 resource record that points to an address outside AWS, set up health checks for parts of your application running outside AWS, and fail over to any endpoint that you choose, regardless of location. For example, you may have a legacy application running in a data center outside AWS and a backup instance of that application running within AWS. You can set up health checks of your legacy application running outside AWS, and if the application fails the health checks, you can fail over automatically to the backup instance in AWS.
As per AWS, Route 53 has health checkers in locations around the world. When you create a health check that monitors an endpoint, health checkers start to send requests to the endpoint that you specify to determine whether the endpoint is healthy. You can choose which locations you want Route 53 to use, and you can specify the interval between checks: every 10 seconds or every 30 seconds. Note that Route 53 health checkers in different data centers don't coordinate with one another, so you'll sometimes see several requests per second regardless of the interval you chose, followed by a few seconds with no health checks at all. Each health checker evaluates the health of the endpoint based on two values: the response time, and whether the endpoint responds to a number of consecutive health checks that you specify (the failure threshold). Route 53 aggregates the data from the health checkers and determines whether the endpoint is healthy: if more than 18% of health checkers report that an endpoint is healthy, Route 53 considers it healthy; if 18% of health checkers or fewer report that an endpoint is healthy, Route 53 considers it unhealthy. The response time that an individual health checker uses to determine whether an endpoint is healthy depends on the type of health check: HTTP and HTTPS health checks, TCP health checks, or HTTP and HTTPS health checks with string matching. Regarding the scenario where there are multiple servers for the website, the AWS docs state: when you have more than one resource performing the same function (for example, more than one HTTP server or mail server), you can configure Amazon Route 53 to check the health of your resources and respond to DNS queries using only the healthy resources. For example, suppose your website, example.com, is hosted on six servers, two each in three data centers around the world. You can configure Route 53 to check the health of those servers and to respond to DNS queries for example.com using only the servers that are currently healthy.
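The 18% aggregation rule quoted above can be sketched as:

```python
def endpoint_healthy(reports):
    """Aggregate per-checker reports the way Route 53 does: the
    endpoint counts as healthy only if MORE than 18% of the health
    checkers report it healthy.

    reports: list of booleans, one per health checker.
    """
    return sum(reports) / len(reports) > 0.18

# With 15 checkers: 3 healthy reports is 20% (healthy),
# 2 healthy reports is about 13% (unhealthy).
print(endpoint_healthy([True] * 3 + [False] * 12))
print(endpoint_healthy([True] * 2 + [False] * 13))
```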

A company currently hosts a Redshift cluster in AWS. For security reasons, it should be ensured that all traffic from and to the Redshift cluster does not go through the Internet. Which of the following features can be used to fulfill this requirement in an efficient manner? Please select : A. Enable Amazon Redshift Enhanced VPC Routing. B. Create a NAT Gateway to route the traffic. C. Create a NAT Instance to route the traffic. D. Create a VPN Connection to ensure traffic does not flow through the Internet.

A. When you use Amazon Redshift Enhanced VPC Routing, Amazon Redshift forces all COPY and UNLOAD traffic between your cluster and your data repositories through your Amazon VPC. If Enhanced VPC Routing is not enabled, Amazon Redshift routes traffic through the Internet, including traffic to other services within the AWS network.

A company is planning on building an application using the services available on AWS. This application will be stateless in nature, and the service must have the ability to scale according to the demand. Which of the following would be an ideal compute service to use in this scenario? Please select : A. AWS DynamoDB B. AWS Lambda C. AWS S3 D. AWS SQS

B A stateless application is an application that needs no knowledge of previous interactions and stores no session information. An example would be an application that, given the same input, provides the same response to any end user. A stateless application can scale horizontally, since any request can be serviced by any of the available compute resources (e.g., EC2 instances, AWS Lambda functions). See the AWS Cloud Best Practices whitepaper. Options A, C, and D are not compute services: DynamoDB is a database, S3 is storage, and SQS is a message queue.

A company has an entire infrastructure hosted on AWS. It wants to create code templates used to provision the same set of resources in another region in case of a disaster in the primary region. Which of the following services can help in this regard? Please select : A. AWS Beanstalk B. AWS CloudFormation C. AWS CodeBuild D. AWS CodeDeploy

B AWS CloudFormation provisions your resources in a safe, repeatable manner, allowing you to build and rebuild your infrastructure and applications, without having to perform manual actions or write custom scripts. CloudFormation takes care of determining the right operations to perform when managing your stack, and rolls back changes automatically if errors are detected.

A company currently uses Redshift in AWS. The Redshift cluster is required to be used in a cost-effective manner. As an architect, which of the following would you consider to ensure cost-effectiveness? Please select : A. Use Spot Instances for the underlying nodes in the cluster. B. Ensure that unnecessary manual snapshots of the cluster are deleted. C. Ensure VPC Enhanced Routing is enabled. D. Ensure that CloudWatch metrics are disabled.

B Amazon Redshift provides free storage for snapshots that is equal to the storage capacity of your cluster, until you delete the cluster. After you reach the free snapshot storage limit, you are charged for any additional storage at the normal rate. Because of this, you should evaluate how many days you need to keep automated snapshots, configure their retention period accordingly, and delete any manual snapshots that you no longer need. Redshift pricing is based on compute node hours, backup storage, data transfer, and data scanned. There is no data transfer charge for data transferred between Amazon Redshift and Amazon S3 within the same AWS Region; all other data transfers into and out of Amazon Redshift are billed at standard AWS data transfer rates. There is no additional charge for using Enhanced VPC Routing itself, although you might incur data transfer charges for certain operations, such as an UNLOAD to Amazon S3 in a different region or a COPY from Amazon EMR or over SSH with public IP addresses; such cross-region transfers cost the same with or without Enhanced VPC Routing. Backup storage, however, is directly controllable: increasing your retention period or taking additional snapshots increases the backup storage consumed by your data warehouse. There is no additional charge for backup storage up to 100% of your provisioned storage for an active data warehouse cluster, and any amount of storage exceeding this limit is charged.
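The snapshot-storage billing rule above can be sketched as follows (per-GB rates omitted):

```python
def billable_snapshot_gb(provisioned_gb, snapshot_gb):
    """Snapshot storage is free up to 100% of the provisioned cluster
    storage for an active cluster; only the excess is billed."""
    return max(0, snapshot_gb - provisioned_gb)

print(billable_snapshot_gb(provisioned_gb=1000, snapshot_gb=800))   # within the free allowance
print(billable_snapshot_gb(provisioned_gb=1000, snapshot_gb=1400))  # 400 GB billable
```

This is why deleting unneeded manual snapshots (Option B) is the lever that actually reduces cost.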

A company has a sales team and each member of this team uploads their sales figures daily. A Solutions Architect needs a durable storage solution for these documents and also a way to prevent users from accidentally deleting important documents. What among the following choices would deliver protection against unintended user actions? Please select : A. Store data in an EBS Volume and create snapshots once a week. B. Store data in an S3 bucket and enable versioning. C. Store data in two S3 buckets in different AWS regions. D. Store data on EC2 Instance storage.

B Amazon S3 supports versioning. Versioning is set at the bucket level and can be used to recover prior versions of an object, which protects against accidental overwrites and deletions.

An instance is launched into a VPC subnet with the network ACL configured to allow all inbound traffic and deny all outbound traffic. The instance's security group is configured to allow SSH from any IP address and deny all outbound traffic. What changes need to be made to allow SSH access to the instance? Please select : A. The Outbound Security Group needs to be modified to allow outbound traffic. B. The Outbound Network ACL needs to be modified to allow outbound traffic. C. Nothing, it can be accessed from any IP address using SSH. D. Both the Outbound Security Group and Outbound Network ACL need to be modified to allow outbound traffic.

B For an EC2 Instance to allow SSH, the Network ACL must allow the traffic in both directions, because network ACLs are stateless: responses to allowed inbound traffic are subject to the rules for outbound traffic (and vice versa). Security Groups, by contrast, are stateful: if an incoming request is allowed, the response is automatically allowed out, regardless of outbound rules. Options A and D are invalid because Security Groups are stateful, so the return traffic for the allowed inbound SSH connection is permitted without any outbound rule. Option C is incorrect because the stateless Network ACL currently blocks all outbound traffic, including the SSH responses.
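The stateful/stateless distinction can be illustrated with a toy model (the three flags stand in for the real rule tables in the question):

```python
def ssh_session_works(nacl_inbound, nacl_outbound, sg_inbound):
    """Toy model of the SSH round trip in this question.

    The security group is stateful: once the inbound SSH request is
    allowed, the response is allowed automatically, so no outbound SG
    rule is consulted. The network ACL is stateless: the response
    packet is checked against the outbound NACL rules on its own.
    """
    request_in = nacl_inbound and sg_inbound   # request reaches the instance
    response_out = nacl_outbound               # SG allows the response implicitly (stateful)
    return request_in and response_out

# Scenario in the question: NACL allows in / denies out, SG allows SSH in.
print(ssh_session_works(nacl_inbound=True, nacl_outbound=False, sg_inbound=True))
# After modifying the outbound network ACL (Option B):
print(ssh_session_works(nacl_inbound=True, nacl_outbound=True, sg_inbound=True))
```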

Currently, you have a NAT Gateway defined for your private instances. You need to make the NAT Gateway highly available. How can this be accomplished? Please select : A. Create another NAT Gateway and place it behind an ELB. B. Create a NAT Gateway in another Availability Zone. C. Create a NAT Gateway in another region. D. Use Auto Scaling groups to scale the NAT Gateway.

B If you have resources in multiple Availability Zones and they share one NAT Gateway, in the event that the NAT Gateway's Availability Zone is down, resources in the other Availability Zones lose internet access. To create an Availability Zone-independent architecture, create a NAT Gateway in each Availability Zone and configure your routing to ensure that resources use the NAT Gateway in the same Availability Zone.
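The AZ-independent pattern can be sketched as a routing lookup (the AZ names and NAT Gateway IDs below are made up):

```python
# One NAT Gateway per Availability Zone (hypothetical IDs).
nat_gateways = {"us-east-1a": "nat-0aaa", "us-east-1b": "nat-0bbb"}

def default_route_for_subnet(subnet_az):
    """Each private subnet's route table points at the NAT Gateway in
    its OWN AZ, so losing one AZ (and its NAT Gateway) leaves the
    other AZ's internet access intact."""
    return {"destination": "0.0.0.0/0", "target": nat_gateways[subnet_az]}

print(default_route_for_subnet("us-east-1a"))
print(default_route_for_subnet("us-east-1b"))
```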

A mobile based application requires uploading images to S3. As an architect, you do not want to make use of the existing web server to upload the images due to the load that it would incur. How can this be handled? Please select : A. Create a secondary S3 bucket. Then, use an AWS Lambda to sync the contents to the primary bucket. B. Use Pre-Signed URLs instead to upload the images. C. Use ECS Containers to upload the images. D. Upload the images to SQS and then push them to the S3 bucket.

B The S3 bucket owner can create Pre-Signed URLs to upload the images to S3. Option A does not provide a way to upload images to S3. Option C is incorrect: ECS is a highly scalable, fast container management service that makes it easy to run, stop, and manage Docker containers on a cluster. Option D is incorrect: SQS is a message queue service used by distributed applications to exchange messages through a polling model, not a push mechanism, and is not a route into S3. Note: This question tests the scenario in which pre-signed URLs are useful. A pre-signed URL carries the credentials of its creator, so a user or application can upload data (here, images) directly to the S3 bucket without holding AWS credentials of their own. AWS definition: A pre-signed URL gives you access to the object identified in the URL, provided that the creator of the pre-signed URL has permissions to access that object. That is, if you receive a pre-signed URL to upload an object, you can upload the object only if the creator of the pre-signed URL has the necessary permissions to upload that object. All objects and buckets are private by default. Pre-signed URLs are useful if you want your user/customer to be able to upload a specific object to your bucket without requiring them to have AWS security credentials or permissions. When you create a pre-signed URL, you must provide your security credentials and then specify a bucket name, an object key, an HTTP method (PUT for uploading objects), and an expiration date and time. The pre-signed URL is valid only for the specified duration.
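Conceptually, a pre-signed URL is just the object URL plus query parameters carrying an expiry and an HMAC signature made with the creator's secret key. The sketch below illustrates the idea with a deliberately simplified string-to-sign; real S3 pre-signed URLs use the full Signature Version 4 algorithm, and the bucket, key, and secret here are dummies:

```python
import hashlib
import hmac

def presign_sketch(secret_key, bucket, key, expires_epoch):
    """Simplified illustration of pre-signing: the server, which holds
    credentials, signs "PUT <bucket>/<key> <expiry>". The mobile
    client can then PUT to the returned URL without holding any AWS
    credentials of its own."""
    string_to_sign = f"PUT\n{bucket}/{key}\n{expires_epoch}"
    signature = hmac.new(secret_key.encode(), string_to_sign.encode(),
                         hashlib.sha256).hexdigest()
    return (f"https://{bucket}.s3.amazonaws.com/{key}"
            f"?Expires={expires_epoch}&Signature={signature}")

url = presign_sketch("dummy-secret", "photo-uploads", "img/cat.jpg", 1700000000)
print(url)
```

Because the signature covers the method, object, and expiry, the URL grants exactly one operation for a limited time, which keeps the upload path off the web servers without making the bucket public.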

A customer wants to create a set of EBS Volumes in AWS. The data on the volumes is required to be encrypted at rest. How can this be achieved? Please select : A. Create an SSL Certificate and attach it to the EBS Volume. B. Use KMS to generate encryption keys which can be used to encrypt the volume. C. Use CloudFront in front of the EBS Volume to encrypt all requests. D. Use EBS Snapshots to encrypt the requests.

B When you create a volume, you have an option to encrypt the volume using keys generated by the Key Management Service. Amazon EBS encryption uses AWS Key Management Service (AWS KMS) master keys when creating encrypted volumes and any snapshots created from your encrypted volumes. The first time you create an encrypted EBS volume in a region, a default master key is created for you automatically. See https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html?id=docs_gateway

A company has a set of Hyper-V machines and VMware virtual machines. They are now planning on migrating these instances to the AWS Cloud. Which of the following can be used to move these resources to the AWS Cloud? Please select : A. DB Migration utility B. Use the VM import Tools. C. Use AWS Migration Tools. D. Use AWS Config Tools.

B You can import Windows and Linux VMs that use VMware ESX or Workstation, Microsoft Hyper-V, and Citrix Xen virtualization formats.

A company plans to have their application hosted in AWS. This application has users uploading files and then using a public URL for downloading them at a later stage. Which of the following designs would help fulfill this requirement? Please select : A. Have EBS Volumes hosted on EC2 Instances to store the files. B. Use Amazon S3 to host the files. C. Use Amazon Glacier to host the files since this would be the cheapest storage option. D. Use EBS Snapshots attached to EC2 Instances to store the files.

B - Use Amazon S3 to host the files. If you need storage for the Internet, AWS Simple Storage Service is the best option. Each uploaded file automatically gets a public URL, which can be used to download the file at a later point in time. For more information on Amazon S3, please refer to the below URL: https://aws.amazon.com/s3/ Options A and D are incorrect because EBS Volumes or Snapshots do not have Public URL. Option C is incorrect because Glacier is mainly used for data archiving purposes.

Below are the requirements for a data store in AWS: a) Ability to perform SQL queries b) Integration with existing business intelligence tools c) High concurrency workload that generally involves reading and writing all columns of a small number of records at a time Which of the following would be an ideal data store for the above requirements? *Choose 2 answers* from the options below. Please select : A. AWS Redshift B. AWS RDS C. AWS Aurora D. AWS S3

B and C Amazon Aurora is a MySQL and PostgreSQL compatible relational database built for the cloud that combines the performance and availability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. Aurora Multi-Master adds the ability to scale out write performance across multiple Availability Zones, allowing applications to direct read/write workloads to multiple instances in a database cluster and operate with higher availability. Because Amazon Redshift is a SQL-based relational database management system (RDBMS), it is compatible with other RDBMS applications and business intelligence tools. However, although Amazon Redshift provides the functionality of a typical RDBMS, including online transaction processing (OLTP) functions, it is not designed for these workloads. If you expect a high concurrency workload that generally involves reading and writing all of the columns for a small number of records at a time, you should instead consider using Amazon RDS or Amazon DynamoDB.

You plan on hosting a web application on AWS. You create an EC2 Instance in a public subnet which needs to connect to an EC2 Instance that will host an Oracle database. Which of the following steps should be taken to ensure that a secure setup is in place? Choose 2 answers from the choices below. Please select : A. Place the EC2 Instance with the Oracle database in the same public subnet as the Web server for faster communication. B. Place the EC2 Instance with the Oracle database in a separate private subnet. C. Create a database security group and ensure that the web server's security group allows incoming access. D. Ensure that the database security group allows incoming traffic from 0.0.0.0/0

B and C The best and most secure option is to place the database in a private subnet. Also, you ensure that access is not allowed from all sources but only from the web servers. Option A is incorrect because as per the best practice guidelines, db instances are placed in Private subnets and allowed to communicate with web servers in the public subnet. Option D is incorrect because allowing all incoming traffic from the Internet to the db instance is a security risk.

An application consists of the following architecture: a. EC2 Instances in a single AZ behind an ELB b. A NAT Instance which is used to ensure that instances can download updates from the Internet Which of the following can be used to ensure better fault tolerance in this setup? *Choose 2 answers* from the options given below. Please select: A. Add more instances in the existing Availability Zone. B. Add an Auto Scaling Group to the setup. C. Add more instances in another Availability Zone. D. Add another ELB for more fault tolerance.

B and C Adding Auto Scaling to your application architecture is one way to maximize the benefits of the AWS Cloud. When you use Auto Scaling, your applications gain the following benefits. Better fault tolerance: Auto Scaling can detect when an instance is unhealthy, terminate it, and launch an instance to replace it. You can also configure Auto Scaling to use multiple Availability Zones; if one Availability Zone becomes unavailable, Auto Scaling can launch instances in another one to compensate. Better availability: Auto Scaling can help you ensure that your application always has the right amount of capacity to handle the current traffic demands.

A company plans on deploying a batch processing application in AWS. Which of the following is an ideal way to host this application? *Choose 2 answers* from the options below. Each answer forms a part of the solution. Please select : A. Copy the batch processing application to an ECS Container. B. Create a docker image of your batch processing application. C. Deploy the image as an Amazon ECS task. D. Deploy the container behind the ELB.

B and C Docker containers are particularly suited for batch job workloads. Batch jobs are often short-lived and embarrassingly parallel. You can package your batch processing application into a Docker image so that you can deploy it anywhere, such as in an Amazon ECS task.

Your company currently has a set of EC2 Instances hosted in AWS. The states of these instances need to be monitored and each state change needs to be recorded. Which of the following can help fulfill this requirement? Choose 2 answers from the options given below. Please select : A. Use CloudWatch logs to store the state change of the instances. B. Use CloudWatch Events to monitor the state change of the events. C. Use SQS to trigger a record to be added to a DynamoDB table. D. Use AWS Lambda to store a change record in a DynamoDB table.

B and D CloudWatch Events can be used to monitor the state change of EC2 Instances. The Event Source and the Event Type can be chosen (EC2 Instance State-change Notification). An AWS Lambda function can then serve as a target which can then be used to store the record in a DynamoDB table. CloudWatch Events info: https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/WhatIsCloudWatchEvents.html
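The Lambda target described above can be sketched as follows. The event structure follows the documented EC2 "Instance State-change Notification" format; the DynamoDB table name and attribute layout are assumptions, and the actual put_item call is stubbed out so the handler runs locally.

```python
# Sketch of a Lambda function triggered by a CloudWatch Events rule for
# "EC2 Instance State-change Notification". It shapes a DynamoDB item
# from the event; persisting it is left as a commented-out call.

def handler(event, context=None):
    detail = event["detail"]
    item = {
        "InstanceId": {"S": detail["instance-id"]},
        "State": {"S": detail["state"]},
        "Timestamp": {"S": event["time"]},
    }
    # In a real deployment you would persist the record, e.g.:
    # boto3.client("dynamodb").put_item(TableName="InstanceStateLog", Item=item)
    return item

# Example event, trimmed to the fields used above:
sample_event = {
    "time": "2019-01-01T12:00:00Z",
    "detail": {"instance-id": "i-0123456789abcdef0", "state": "stopped"},
}
```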

A company currently has an application hosted on their On-premises environment. The application has a combination of web Instances with worker Instances and Rabbit-MQ for messaging purposes. This infrastructure is now required to be moved to the AWS Cloud. What is the best way to start using messaging on the AWS Cloud? Please select : A. Continue using Rabbit-MQ. Host it on a separate EC2 Instance. B. Make use of AWS SQS to manage the messages. C. Make use of DynamoDB to store the messages. D. Make use of AWS RDS to store the messages.

B. An ideal option would be to make use of AWS Simple Queue Service to manage the messages between the application components. The AWS SQS Service is a highly scalable and durable service. For more information on Amazon SQS, please refer to the below URL: https://aws.amazon.com/sqs/

A company is planning on testing a large set of IoT enabled devices. These devices will be streaming data every second. A proper service needs to be chosen in AWS which could be used to collect and analyze these streams in real time. Which of the following could be used for this purpose? Please select : A. Use AWS EMR to store and process the streams. B. Use AWS Kinesis streams to process and analyze the data. C. Use AWS SQS to store the data. D. Use SNS to store the data.

B. AWS Documentation mentions the following on Amazon Kinesis: Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. Amazon Kinesis offers key capabilities to cost-effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application. With Amazon Kinesis, you can ingest real-time data such as video, audio, application logs, website clickstreams, and IoT telemetry data for machine learning, analytics, and other applications. For more information on Amazon Kinesis, please refer to the below URL: https://aws.amazon.com/kinesis/ Option A: Amazon EMR can be used to process applications with data-intensive workloads. Option B: Amazon Kinesis can be used to store, process, and analyze real-time streaming data. Option C: SQS is a fully managed message queuing service that makes it easy to decouple and scale microservices, distributed systems, and serverless applications. Option D: SNS is a flexible, fully managed pub/sub messaging and mobile notifications service for coordinating the delivery of messages to subscribing endpoints and clients.
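A per-second device reading could be shaped for a Kinesis stream as sketched below. The stream name and record schema are illustrative assumptions; only the payload construction is shown, as a boto3-style put_record payload.

```python
# Sketch of publishing one IoT reading to a Kinesis stream, as a payload
# for kinesis.put_record(). Stream name and record fields are hypothetical.
import json

def kinesis_record(stream_name, device_id, reading):
    """Shape one per-second IoT reading for put_record(). Using the
    device ID as the partition key keeps each device's readings in
    order within a shard."""
    return {
        "StreamName": stream_name,
        "Data": json.dumps({"device": device_id, "value": reading}).encode(),
        "PartitionKey": device_id,
    }

rec = kinesis_record("iot-telemetry", "sensor-42", 21.7)
# boto3.client("kinesis").put_record(**rec) would publish it.
```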

A company has a set of web servers. It is required to ensure that all the logs from these web servers can be analyzed in real time for any sort of threat detection. Which of the following would assist in this regard? Please select: A. Upload all the logs to the SQS Service and then use EC2 Instances to scan the logs. B. Upload the logs to Amazon Kinesis and then analyze the logs accordingly. C. Upload the logs to CloudTrail and then analyze the logs accordingly. D. Upload the logs to Glacier and then analyze the logs accordingly.

B. AWS Documentation provides the following information to support this requirement: Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. Amazon Kinesis offers key capabilities to cost-effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application. With Amazon Kinesis, you can ingest real-time data such as video, audio, application logs, website clickstreams, and IoT telemetry data for machine learning, analytics, and other applications.

A company wants to have a fully managed data store in AWS. It should be a compatible MySQL database, which is an application requirement. Which of the following databases can be used for this purpose? Please select : A. AWS RDS B. AWS Aurora C. AWS DynamoDB D. AWS Redshift

B. Amazon Aurora (Aurora) is a fully managed, MySQL- and PostgreSQL-compatible, relational database engine. It combines the speed and reliability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. It delivers up to five times the throughput of MySQL and up to three times the throughput of PostgreSQL without requiring changes to most of your existing applications. RDS is a generic service that provides a relational database service supporting six database engines: Aurora, MySQL, MariaDB, PostgreSQL, Oracle, and Microsoft SQL Server. The question asks for a MySQL-compatible database; of the options listed, Amazon Aurora is a MySQL- and PostgreSQL-compatible enterprise-class database.

A company has an application that stores images and thumbnails for images on S3. While the thumbnail images need to be available for download immediately, the images and thumbnails themselves are not accessed that frequently. Which is the most cost-efficient storage option to store images that meet these requirements? Please select : A. Amazon Glacier with Expedited Retrievals. B. Amazon S3 Standard Infrequent Access C. Amazon EFS D. Amazon S3 Standard

B. Amazon S3 Standard-Infrequent Access is perfect if you want to store data that is not frequently accessed but must be available immediately when requested. It is more cost-effective than Option D (Amazon S3 Standard). Choosing Amazon Glacier with Expedited Retrievals would defeat the whole purpose of the requirement because of its increased retrieval cost.

A company is required to use the AWS RDS service to host a MySQL database. This database is going to be used for production purposes and is expected to experience a high number of read/write activities. Which of the below underlying EBS Volume types would be ideal for this database? Please select : A. General Purpose SSD B. Provisioned IOPS SSD C. Throughput Optimized HDD D. Cold HDD

B. Provisioned IOPS SSD volumes are designed for I/O-intensive workloads, such as production database workloads with a high rate of read/write operations, where sustained and consistent IOPS performance is required.

A company with a set of Admin jobs currently setup in the C# programming language, is moving their infrastructure to AWS. Which of the following would be an efficient means of hosting the Admin related jobs in AWS? Please select : A. Use AWS DynamoDB to store the jobs and then run them on demand. B. Use AWS Lambda functions with C# for the Admin jobs. C. Use AWS S3 to store the jobs and then run them on demand. D. Use AWS Config functions with C# for the Admin jobs.

B. The best and most efficient option is to host the jobs using AWS Lambda. This service has the facility to have the code run in the C# programming language. AWS Lambda is a compute service that lets you run code without provisioning or managing servers. AWS Lambda executes your code only when needed and scales automatically, from a few requests per day to thousands per second. You pay only for the compute time you consume - there is no charge when your code is not running. With AWS Lambda, you can run code for virtually any type of application or backend service - all with zero administration.

A company hosts a popular web application that connects to an Amazon RDS MySQL DB instance running in a private VPC subnet created with default ACL settings. The web servers must be accessible only to customers on an SSL connection and the database should only be accessible to web servers in a public subnet. As an architect, which of the following would you not recommend for such an architecture? Please select : A. Create a separate web server and database server security group. B. Ensure the web server security group allows HTTPS port 443 inbound traffic from anywhere (0.0.0.0/0) and apply it to the web servers. C. Ensure the web server security group allows MySQL port 3306 inbound traffic from anywhere (0.0.0.0/0) and apply it to the web servers. D. Ensure the DB server security group allows MySQL port 3306 inbound and specify the source as the web server security group

C The scenario states that the database servers should only be accessible to web servers in the public subnet, and you are asked which option would NOT be recommended for such an architecture. The answer is Option C: allowing MySQL port 3306 inbound traffic from anywhere (0.0.0.0/0) on the web servers effectively exposes the database port to the entire Internet, which is not acceptable per the architecture. A similar setup is given in the AWS Documentation: 1) To allow secure traffic into your web server from anywhere, allow inbound traffic on port 443. 2) Ensure that traffic can flow from the web servers to the database server via the database security group. See VPC Scenario #2.

A company wants to have a 50 Mbps dedicated connection to its AWS resources. Which of the below services can help fulfill this requirement? Please select: A. Virtual Private Gateway B. Virtual Private Connection C. Direct Connect D. Internet Gateway

C AWS Direct Connect makes it easy to establish a dedicated network connection from your premises to AWS. Using AWS Direct Connect, you can establish private connectivity between AWS and your datacenter, office, or colocation environment, which in many cases can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections.

You work for a company that stores records for a minimum of 10 years. Most of these records will never be accessed but must be made available upon request (within a few hours). What is the most cost-effective storage option in this scenario? Choose the correct answer from the options below. Please select : A. Simple Storage Service B. EBS Volumes C. Glacier D. AWS Import/Export

C Amazon Glacier is a secure, durable, and extremely low-cost cloud storage service for data archiving and long-term backup. Customers can reliably store large or small amounts of data for as little as $0.004 per gigabyte per month, a significant savings compared to on-premises solutions. To keep costs low yet suitable for varying retrieval needs, Amazon Glacier provides three options for access to archives, from a few minutes to several hours.
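The cost difference can be made concrete with a quick back-of-the-envelope check using the $0.004 per GB-month Glacier price quoted above. The S3 Standard price used for comparison (~$0.023 per GB-month) and the 10 TB archive size are assumptions for illustration; actual prices vary by region.

```python
# Rough monthly storage cost comparison. The Glacier price comes from the
# answer text; the S3 Standard price is an assumed ballpark figure.

def monthly_cost(gb, price_per_gb):
    return round(gb * price_per_gb, 2)

archive_gb = 10_000  # hypothetical 10 TB of records
glacier = monthly_cost(archive_gb, 0.004)   # Glacier, per the answer text
standard = monthly_cost(archive_gb, 0.023)  # assumed S3 Standard price
```

For this archival pattern (rarely accessed, hours-scale retrieval is fine), Glacier is several times cheaper per month than S3 Standard.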

You are required to host a subscription service in AWS. Users can subscribe to it and receive notifications on new updates to this service. Which of the following services can be used to fulfill this requirement? Please select : A. Use the SQS Service to send the notification. B. Host an EC2 Instance and use the Rabbit-MQ Service to send the notification. C. Use the SNS Service to send the notification. D. Use the AWS DynamoDB streams to send the notification.

C Amazon Simple Notification Service (Amazon SNS) is a web service that coordinates and manages the delivery or sending of messages to subscribing endpoints or clients.

You have an EC2 Instance placed inside a subnet. You have created the VPC from scratch, and added the EC2 Instance to the subnet. It is required to ensure that this EC2 Instance has complete access to the Internet, since it will be used by users on the Internet. Which of the following options would help accomplish this? Please select : A. Launch a NAT Gateway and add routes for 0.0.0.0/0 B. Attach a VPC Endpoint and add routes for 0.0.0.0/0 C. Attach an Internet Gateway and add routes for 0.0.0.0/0 D. Deploy NAT Instances in a public subnet and add routes for 0.0.0.0/0

C An Internet Gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between instances in your VPC and the Internet. It therefore imposes no availability risks or bandwidth constraints on your network traffic.

An application requires a highly available relational database with an initial storage capacity of 8TB. This database will grow by 8GB everyday. To support the expected traffic, at least eight read replicas will be required to handle the database reads. Which of the below options meets these requirements? Please select : A. DynamoDB B. Amazon S3 C. Amazon Aurora D. Amazon Redshift

C Aurora Replicas are independent endpoints in an Aurora DB cluster, best used for scaling read operations and increasing availability. Up to 15 Aurora Replicas can be distributed across the Availability Zones that a DB cluster spans within an AWS Region. The DB cluster volume is made up of multiple copies of the data for the DB cluster. However, the data in the cluster volume is represented as a single, logical volume to the primary instance and to Aurora Replicas in the DB cluster. As a result, all Aurora Replicas return the same data for query results with minimal replica lag—usually much less than 100 milliseconds after the primary instance has written an update. Replica lag varies depending on the rate of database change. That is, during periods where a large amount of write operations occur for the database, you might see an increase in replica lag. Aurora Replicas work well for read scaling because they are fully dedicated to read operations on your cluster volume. Write operations are managed by the primary instance. Because the cluster volume is shared among all DB instances in your DB cluster, minimal additional work is required to replicate a copy of the data for each Aurora Replica. To increase availability, you can use Aurora Replicas as failover targets. That is, if the primary instance fails, an Aurora Replica is promoted to the primary instance. There is a brief interruption during which read and write requests made to the primary instance fail with an exception, and the Aurora Replicas are rebooted. If your Aurora DB cluster doesn't include any Aurora Replicas, then your DB cluster will be unavailable for the duration it takes your DB instance to recover from the failure event. However, promoting an Aurora Replica is much faster than recreating the primary instance. For high-availability scenarios, we recommend that you create one or more Aurora Replicas. 
These should be of the same DB instance class as the primary instance and in different Availability Zones for your Aurora DB cluster. Note that you can't create an encrypted Aurora Replica for an unencrypted Aurora DB cluster, nor an unencrypted Aurora Replica for an encrypted Aurora DB cluster. In addition to Aurora Replicas, you have the following options for replication with Aurora MySQL: two Aurora MySQL DB clusters in different AWS Regions, by creating an Aurora Read Replica of an Aurora MySQL DB cluster in a different AWS Region; two Aurora MySQL DB clusters in the same region, by using MySQL binary log (binlog) replication; or an Amazon RDS MySQL DB instance as the master and an Aurora MySQL DB cluster, by creating an Aurora Read Replica of an Amazon RDS MySQL DB instance. Typically, this last approach is used for migration to Aurora MySQL rather than for ongoing replication.

You plan on hosting an application on EC2 Instances which will be used to process logs. The application is not very critical and can resume operation even after an interruption. Which of the following steps can help provide a cost-effective solution? Please select : A. Use Reserved Instances for the underlying EC2 Instances. B. Use Provisioned IOPS for the underlying EBS Volumes. C. Use Spot Instances for the underlying EC2 Instances. D. Use S3 as the underlying data layer.

C One effective solution would be to use Spot Instances in this scenario. Spot Instances are a cost-effective choice if you can be flexible about when your applications run and if your applications can be interrupted. For example, Spot Instances are well-suited for data analysis, batch jobs, background processing, and optional tasks.

A company has setup an application in AWS that interacts with DynamoDB. It is required that when an item is modified in a DynamoDB table, an immediate entry is made to the associating application. How can this be accomplished? *Choose 2 answers* from the choices below. Please select : A. Setup CloudWatch to monitor the DynamoDB table for changes. Then trigger a Lambda function to send the changes to the application. B. Setup CloudWatch logs to monitor the DynamoDB table for changes. Then trigger AWS SQS to send the changes to the application. C. Use DynamoDB streams to monitor the changes to the DynamoDB table. D. Trigger a lambda function to make an associated entry in the application as soon as the DynamoDB streams are modified

C and D A DynamoDB stream is an ordered flow of information about changes to items in an Amazon DynamoDB table. When you enable a stream on a table, DynamoDB captures information about every modification to data items in the table. DynamoDB is also integrated with Lambda, so you can create triggers for events in DynamoDB Streams: you associate the stream ARN with a Lambda function that you write. Immediately after an item in the table is modified, a new record appears in the table's stream. AWS Lambda polls the stream and invokes your Lambda function synchronously when it detects new stream records. Since our requirement is to have an immediate entry made to an application whenever an item in the DynamoDB table is modified, a Lambda function is also required. As an example, consider a mobile gaming app that writes to a GameScores table. Whenever the top score in the GameScores table is updated, a corresponding stream record is written to the table's stream. This event could then trigger a Lambda function that posts a congratulatory message on a social media handle. For more information on DynamoDB streams, please refer to the URL below. https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html For more information on DynamoDB streams with Lambda, please refer to the URL below.
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.Lambda.html
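The Lambda function in this pattern can be sketched as below. The event follows the documented DynamoDB Streams record shape; the attribute names and the downstream notification to the "associating application" are illustrative assumptions, stubbed as a comment so the handler runs locally.

```python
# Sketch of a Lambda function invoked by a DynamoDB Streams trigger.
# It collects the new image of each inserted/modified item; notifying
# the downstream application is left as a commented-out step.

def handler(event, context=None):
    changes = []
    for record in event["Records"]:
        if record["eventName"] in ("INSERT", "MODIFY"):
            new_image = record["dynamodb"].get("NewImage", {})
            changes.append(new_image)
            # Here you would make the immediate entry in the application,
            # e.g. an HTTP call or an SQS message to it.
    return changes

# Example stream event, trimmed to the fields used above:
sample_event = {
    "Records": [{
        "eventName": "MODIFY",
        "dynamodb": {"NewImage": {"Score": {"N": "100"}}},
    }],
}
```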

An application currently uses AWS RDS MySQL as its data layer. Due to recent performance issues on the database, it has been decided to separate the querying part of the application by setting up a separate reporting layer. Which of the following additional steps could also potentially assist in improving the performance of the underlying database? Please select : A. Make use of Multi-AZ to setup a secondary database in another Availability Zone. B. Make use of Multi-AZ to setup a secondary database in another region. C. Make use of Read Replicas to setup a secondary read-only database. D. Make use of Read Replicas to setup a secondary read and write database.

C. AWS Documentation mentions the following: Amazon RDS Read Replicas provide enhanced performance and durability for database (DB) instances. This feature makes it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads. You can create one or more replicas of a given source DB Instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput. For more information on Amazon Read Replicas, please refer to the URL below. https://aws.amazon.com/rds/details/read-replicas/

A company requires a file system which can be used across a set of instances. Which of the following storage options would be ideal for this requirement? Please select : A. AWS S3 B. AWS EBS Volumes C. AWS EFS D. AWS EBS Snapshots

C. Amazon EFS provides scalable file storage for use with Amazon EC2. You can create an EFS file system and configure your instances to mount the file system. You can use an EFS file system as a common data source for workloads and applications running on multiple instances. Option A is incorrect because S3 is not a file system; it is an object-based storage solution. Options B and D are incorrect because EBS Volumes and Snapshots are also not file systems. They are block-based storage solutions.

You have instances hosted in a private subnet in a VPC. There is a need for the instances to download updates from the Internet. As an architect, what change would you suggest to the IT Operations team which would also be the most efficient and secure? Please select : A. Create a new public subnet and move the instance to that subnet. B. Create a new EC2 Instance to download the updates separately and then push them to the required instance. C. Use a NAT Gateway to allow the instances in the private subnet to download the updates. D. Create a VPC link to the Internet to allow the instances in the private subnet to download the updates.

C. The NAT Gateway is an ideal option to ensure that instances in the private subnet have the ability to download updates from the Internet. For more information on the NAT Gateway, please refer to the below URL: https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-nat-gateway.html Option A is not suitable because there may be a security reason for keeping these instances in the private subnet. (for example: db instances) Option B is also incorrect. The instances in the private subnet may be running various applications and db instances. Hence, it is not advisable or practical for an EC2 Instance to download the updates separately and then push them to the required instance. Option D is incorrect because a VPC link is not used to connect to the Internet.
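After creating the NAT Gateway, the private subnet's route table must send Internet-bound traffic to it. A minimal sketch, as a boto3-style create_route payload, with the route table and NAT Gateway IDs as placeholder assumptions:

```python
# Sketch of the route that lets a private subnet reach the Internet
# through a NAT Gateway: a default route (0.0.0.0/0) pointing at the
# gateway. IDs are illustrative placeholders.

def nat_route(route_table_id, nat_gateway_id):
    """Parameters for ec2.create_route() on the PRIVATE subnet's route table."""
    return {
        "RouteTableId": route_table_id,
        "DestinationCidrBlock": "0.0.0.0/0",
        "NatGatewayId": nat_gateway_id,
    }

route = nat_route("rtb-private-01", "nat-0abc123")
# boto3.client("ec2").create_route(**route) would install the route.
```

The NAT Gateway itself must sit in a public subnet; only the private subnet's route table is changed here.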

A Solutions Architect is designing an online shopping application running in a VPC on EC2 Instances behind an ELB Application Load Balancer. The instances run in an Auto Scaling group across multiple Availability Zones. The application tier must read and write data to a customer managed database cluster. There should be no access to the database from the Internet, but the cluster must be able to obtain software patches from the Internet. Which VPC design meets these requirements? Please select : A. Public subnets for both the application tier and the database cluster B. Public subnets for the application tier, and private subnets for the database cluster C. Public subnets for the application tier and NAT Gateway, and private subnets for the database cluster D. Public subnets for the application tier, and private subnets for the database cluster and NAT Gateway

C. A NAT Gateway must always be placed in a public subnet, because it needs to communicate with the Internet. AWS documentation states that to create a NAT gateway, you must specify the public subnet in which the NAT gateway should reside. You must also specify an Elastic IP address to associate with the NAT gateway when you create it. After you've created a NAT gateway, you must update the route table associated with one or more of your private subnets to point Internet-bound traffic to the NAT gateway. This enables instances in your private subnets to communicate with the Internet.

You have set up a Redshift cluster in AWS and are trying to access it, but are unable to do so. What should be done so that you can access the Redshift Cluster? Please select : A. Ensure the Cluster is created in the right Availability Zone. B. Ensure the Cluster is created in the right region. C. Change the security groups for the cluster. D. Change the encryption key associated with the cluster.

C. When you provision an Amazon Redshift cluster, it is locked down by default so nobody has access to it. To grant other users inbound access to an Amazon Redshift cluster, you associate the cluster with a security group. For more information on Redshift Security Groups, please refer to the below URL: https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-security-groups.html

Your company manages an application that currently allows users to upload images to an S3 bucket. These images are picked up by EC2 Instances for processing and then placed in another S3 bucket. You need an area where the metadata for these images can be stored. Which of the following would be an ideal data store for this? Please select : A. AWS Redshift B. AWS Glacier C. AWS DynamoDB D. AWS SQS

C. Option A is incorrect because Redshift is normally used for petabyte-scale data warehousing. Option B is incorrect because Glacier is used for archive storage. Option D is incorrect because SQS is used for messaging purposes. AWS DynamoDB is the best lightweight and durable storage option for metadata.

A company has a set of EBS Volumes that need to be catered to in case of a disaster. How can one achieve this in an efficient manner using the existing AWS services? Please select : A. Create a script to copy the EBS Volume to another Availability Zone. B. Create a script to copy the EBS Volume to another region. C. Use EBS Snapshots to create the volumes in another region. D. Use EBS Snapshots to create the volumes in another Availability Zone.

C. Options A and B are incorrect, because you can't directly copy EBS Volumes. Option D is incorrect, because disaster recovery always looks at ensuring resources are created in another region. A snapshot is constrained to the region where it was created. After you create a snapshot of an EBS volume, you can use it to create new volumes in the same region. For more information, see Restoring an Amazon EBS Volume from a Snapshot. You can also copy snapshots across regions, making it possible to use multiple regions for geographical expansion, data center migration, and disaster recovery.
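The snapshot-based DR flow can be sketched as two boto3-style payloads: create the snapshot in the source region, then copy it to the DR region. Region names and IDs are illustrative assumptions, and the actual API calls are stubbed as comments.

```python
# Sketch of cross-region DR for an EBS volume via snapshots.
# All IDs and regions are hypothetical placeholders.

# Step 1: snapshot the volume in the source region.
create_snapshot = {
    "VolumeId": "vol-0abc123",
    "Description": "DR snapshot of data volume",
}
# snap_id = ec2_us_east_1.create_snapshot(**create_snapshot)["SnapshotId"]

# Step 2: copy the snapshot to the DR region. Note that copy_snapshot
# is called on a client in the DESTINATION region, naming the source.
copy_snapshot = {
    "SourceRegion": "us-east-1",
    "SourceSnapshotId": "snap-0def456",  # ID from the step above
    "Description": "DR copy in recovery region",
}
# ec2_eu_west_1.copy_snapshot(**copy_snapshot)
# In a disaster, create_volume from the copied snapshot in eu-west-1.
```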

A company is asking its developers to store application logs in an S3 bucket. These logs are only required for a temporary period of time after which, they can be deleted. Which of the following steps can be used to effectively manage this? Please select : A. Create a cron job to detect the stale logs and delete them accordingly. B. Use a bucket policy to manage the deletion. C. Use an IAM Policy to manage the deletion. D. Use S3 Lifecycle Policies to manage the deletion.

D AWS Documentation mentions the following to support the above requirement: Lifecycle configuration enables you to specify the lifecycle management of objects in a bucket. The configuration is a set of one or more rules, where each rule defines an action for Amazon S3 to apply to a group of objects. These actions can be classified as follows: Transition actions - In which you define when objects transition to another storage class. For example, you may choose to transition objects to the STANDARD_IA (IA, for infrequent access) storage class 30 days after creation, or archive objects to the GLACIER storage class one year after creation. Expiration actions - In which you specify when the objects expire. Then, Amazon S3 deletes the expired objects on your behalf. For more information on S3 Lifecycle Policies, please refer to the URL below. https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html A built-in feature exists to do this job, hence Options A, B and C are not necessary.
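The expiration rule described above can be sketched as a boto3-style put_bucket_lifecycle_configuration payload. The bucket name, `logs/` prefix, and 30-day retention are illustrative assumptions.

```python
# Sketch of an S3 lifecycle rule that expires (deletes) old log objects.
# Bucket name, prefix, and retention period are hypothetical.

lifecycle = {
    "Bucket": "app-logs-bucket",
    "LifecycleConfiguration": {
        "Rules": [{
            "ID": "expire-old-logs",
            "Filter": {"Prefix": "logs/"},   # apply only to log objects
            "Status": "Enabled",
            # Expiration action: S3 deletes matching objects for you
            # after the given number of days.
            "Expiration": {"Days": 30},
        }],
    },
}
# boto3.client("s3").put_bucket_lifecycle_configuration(**lifecycle)
```

No cron job or custom script is needed; S3 applies the rule on your behalf.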

A company is migrating an on-premises 5TB MySQL database to AWS and expects its database size to increase steadily. Which Amazon RDS engine meets these requirements? Please select : A. MySQL B. Microsoft SQL Server C. Oracle D. Amazon Aurora

D Amazon Aurora (Aurora) is a fully managed, MySQL- and PostgreSQL-compatible, relational database engine. It combines the speed and reliability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. It delivers up to five times the throughput of MySQL and up to three times the throughput of PostgreSQL without requiring changes to most of your existing applications. All Aurora Replicas return the same data for query results with minimal replica lag, usually much less than 100 milliseconds after the primary instance has written an update.

A company has an application that delivers objects from S3 to users. Of late, some users spread across the globe have been complaining of slow response times. Which of the following additional steps would help in building a cost-effective solution and also help ensure that the users get an optimal response to objects from S3? Please select : A. Use S3 Replication to replicate the objects to regions closest to the users. B. Ensure S3 Transfer Acceleration is enabled to ensure all users get the desired response times. C. Place an ELB in front of S3 to distribute the load across S3. D. Place the S3 bucket behind a CloudFront distribution.

D If your workload is mainly sending GET requests, in addition to the preceding guidelines, you should consider using Amazon CloudFront for performance optimization. By integrating Amazon CloudFront with Amazon S3, you can distribute content to your users with low latency and a high data transfer rate. You will also send fewer direct requests to Amazon S3, which will reduce your costs. For example, suppose that you have a few objects that are very popular. Amazon CloudFront fetches those objects from Amazon S3 and caches them. Amazon CloudFront can then serve future requests for the objects from its cache, reducing the number of GET requests it sends to Amazon S3. Options A and B are incorrect: S3 Cross-Region Replication and Transfer Acceleration incur additional cost. Option C is incorrect: an ELB is used to distribute traffic to EC2 Instances.

A company wants to self-manage a database environment. Which of the following should be adopted to fulfill this requirement? Please select : A. Use the DynamoDB service. B. Provision the database using the AWS RDS service. C. Provision the database using the AWS Aurora service. D. Create an EC2 Instance and install the database service accordingly

D Options A, B and C are fully managed DBs by AWS. If you want to self-manage a database, you should have an EC2 Instance. Then, you will have complete control over the underlying database instance.

An application team needs to quickly provision a development environment consisting of a web and database layer. Which of the following would be the quickest and most ideal way to get this setup in place? Please select : A. Create Spot Instances and install the Web and database components. B. Create Reserved Instances and install the Web and database components. C. Use AWS Lambda to create the web components and AWS RDS for the database layer. D. Use Elastic Beanstalk to quickly provision the environment.

D. With Elastic Beanstalk, you can quickly deploy and manage applications in the AWS Cloud without worrying about the infrastructure that runs those applications. AWS Elastic Beanstalk reduces management complexity without restricting choice or control. You simply upload your application, and Elastic Beanstalk automatically handles the details of capacity provisioning, load balancing, scaling, and application health monitoring. Option A is incorrect. Amazon EC2 Spot Instances are spare compute capacity in the AWS Cloud available at steep discounts compared to On-Demand prices; they can be reclaimed at any time and do not by themselves provision a full environment. Option B is incorrect. A Reserved Instance is a reservation of resources and capacity, for either one or three years, in a particular Availability Zone within a region; it is a billing construct, not a provisioning tool. Option C is incorrect. AWS Lambda is a compute service for running code in response to events; it is not meant for provisioning a complete web and database environment.

Your company has a set of EC2 Instances hosted in AWS. There is a mandate to prepare for disasters and come up with the necessary disaster recovery procedures. Which of the following would help in mitigating the effects of a disaster for the EC2 Instances? Please select : A. Place an ELB in front of the EC2 Instances. B. Use Auto Scaling to ensure the minimum number of instances are always running. C. Use CloudFront in front of the EC2 Instances. D. Use AMIs to recreate the EC2 Instances in another region.

D. You can create an AMI from the EC2 Instances and then copy it to another region. In case of a disaster, an EC2 Instance can be launched from the AMI in the second region. Options A and B are good for fault tolerance, but by themselves cannot provide disaster recovery for the EC2 Instances. Option C is incorrect because we cannot determine whether CloudFront would help without knowing what is hosted on the EC2 Instances, and caching alone does not allow instances to be relaunched elsewhere. For disaster recovery, we have to make sure that we can launch instances in another region when required. Hence, Options A, B, and C are not feasible solutions.

An architecture consists of the following: a) A primary and secondary infrastructure hosted in AWS b) Both infrastructures comprise ELB, Auto Scaling and EC2 resources How should Route 53 be configured to ensure proper failover in case the primary infrastructure were to go down? Please select : A. Configure a primary routing policy. B. Configure a weighted routing policy. C. Configure a Multi-Answer routing policy. D. Configure a failover routing policy.

D. You can create an active-passive failover configuration by using failover records. Create a primary and a secondary failover record that have the same name and type, and associate a health check with each. The various Route 53 routing policies are as follows:
- Simple routing policy: use for a single resource that performs a given function for your domain, for example, a web server that serves content for the example.com website.
- Failover routing policy: use when you want to configure active-passive failover.
- Geolocation routing policy: use when you want to route traffic based on the location of your users.
- Geoproximity routing policy: use when you want to route traffic based on the location of your resources and, optionally, shift traffic from resources in one location to resources in another.
- Latency routing policy: use when you have resources in multiple locations and you want to route traffic to the resource that provides the best latency.
- Multivalue answer routing policy: use when you want Route 53 to respond to DNS queries with up to eight healthy records selected at random.
- Weighted routing policy: use to route traffic to multiple resources in proportions that you specify.
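The active-passive behavior of a failover policy can be sketched as a minimal simulation in plain Python. The record fields and hostnames below are simplified illustrations, not the actual Route 53 API shape:

```python
# Illustrative primary/secondary failover records (field names are
# simplified; real Route 53 records also carry type, TTL, set IDs, etc.).
records = [
    {"failover": "PRIMARY",
     "value": "elb-primary.us-east-1.elb.amazonaws.com"},
    {"failover": "SECONDARY",
     "value": "elb-secondary.us-west-2.elb.amazonaws.com"},
]

def resolve(records, health):
    """Return the record Route 53 would answer with: the primary while
    its health check passes, otherwise the secondary."""
    primary = next(r for r in records if r["failover"] == "PRIMARY")
    secondary = next(r for r in records if r["failover"] == "SECONDARY")
    return primary if health.get(primary["value"], False) else secondary

# Primary healthy -> primary answers.
print(resolve(records, {"elb-primary.us-east-1.elb.amazonaws.com": True})["failover"])   # → PRIMARY
# Primary fails its health check -> traffic fails over to the secondary.
print(resolve(records, {"elb-primary.us-east-1.elb.amazonaws.com": False})["failover"])  # → SECONDARY
```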

An application running on EC2 Instances processes sensitive information stored on Amazon S3. This information is accessed over the Internet. The security team is concerned that the Internet connectivity to Amazon S3 could be a security risk. Which solution will resolve the security concern? Please select : A. Access the data through an Internet Gateway. B. Access the data through a VPN connection. C. Access the data through a NAT Gateway. D. Access the data through a VPC endpoint for Amazon S3.

D. A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network. Option A is incorrect. An Internet Gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between instances in your VPC and the Internet. Option B is incorrect. A VPN, or Virtual Private Network, allows you to create a secure connection to another network over the Internet. Option C is incorrect. You can use a network address translation (NAT) gateway to enable instances in a private subnet to connect to the Internet or other AWS services, but prevent the internet from initiating a connection with those instances.
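To go further and make the endpoint mandatory, a common pattern is an S3 bucket policy that denies any request not arriving through the endpoint, using the `aws:SourceVpce` condition key. A sketch, where the bucket name and `vpce-` ID are placeholders:

```python
import json

# Bucket policy denying any access that does not arrive via the VPC
# endpoint. The bucket name and endpoint ID are placeholders.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyAccessOutsideVPCE",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::my-sensitive-bucket",
            "arn:aws:s3:::my-sensitive-bucket/*",
        ],
        "Condition": {"StringNotEquals": {"aws:SourceVpce": "vpce-1a2b3c4d"}},
    }]
}

print(json.dumps(policy, indent=2))
```

With this policy attached, even a leaked set of credentials cannot read the bucket from the public Internet; requests must originate inside the VPC and traverse the endpoint.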

A company planning on building and deploying a web application on AWS, needs to have a data store to store session data. Which of the below services can be used to meet this requirement? Please select : A. AWS RDS B. AWS SQS C. AWS ELB D. AWS ElastiCache

D. AWS Documentation mentions the following: Amazon ElastiCache offers fully managed Redis and Memcached. Seamlessly deploy, operate, and scale popular open-source-compatible in-memory data stores. Build data-intensive apps or improve the performance of your existing apps by retrieving data from high-throughput and low-latency in-memory data stores. Amazon ElastiCache is a popular choice for Gaming, Ad-Tech, Financial Services, Healthcare, and IoT apps. For more information on ElastiCache, please refer to the URL below. https://aws.amazon.com/elasticache/ Option A is incorrect. RDS is a managed relational database service, a web service running in the cloud designed to simplify the setup, operation, and scaling of a relational database for use in applications. Option B is incorrect. SQS is a fully managed message queuing service that makes it easy to decouple and scale microservices, distributed systems, and serverless applications. Option C is incorrect. ELB is Elastic Load Balancing, which automatically distributes incoming application traffic across multiple targets. Note: To address scalability and to provide shared session storage that is accessible from any individual web server, you can abstract the HTTP sessions from the web servers themselves. A common solution for this is to leverage an in-memory key/value store such as Redis or Memcached. In-memory caching improves application performance by storing frequently accessed data items in memory, so that they can be retrieved without access to the primary data store. Properly leveraging caching can result in an application that not only performs better, but also costs less at scale. Amazon ElastiCache is a managed service that reduces the administrative burden of deploying an in-memory cache in the cloud. Please refer to the following white paper for more information. https://d0.awsstatic.com/whitepapers/performance-at-scale-with-amazon-elasticache.pdf
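The session-store pattern described in the note can be sketched as follows. A plain dict stands in for Redis here; in production the same get/set calls would go through a Redis client pointed at the ElastiCache endpoint:

```python
import time
import uuid

class SessionStore:
    """Minimal session store with TTL expiry. A dict stands in for
    Redis/ElastiCache; any web server holding only the session ID can
    recover the user's state from the shared store."""

    def __init__(self, ttl_seconds=1800):
        self.ttl = ttl_seconds
        self.store = {}  # session_id -> (expires_at, data)

    def create(self, data):
        session_id = uuid.uuid4().hex
        self.store[session_id] = (time.time() + self.ttl, data)
        return session_id

    def get(self, session_id):
        entry = self.store.get(session_id)
        if entry is None or entry[0] < time.time():
            return None                # missing or expired session
        return entry[1]

sessions = SessionStore()
sid = sessions.create({"user": "alice", "cart": ["book"]})
print(sessions.get(sid)["user"])  # → alice
print(sessions.get("no-such-session"))  # → None
```

Because the state lives outside the web tier, instances behind the load balancer stay stateless and can be scaled in or out without losing user sessions.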

You currently have the following architecture in AWS: a. A couple of EC2 Instances located in us-west-2a b. The EC2 Instances are launched via an Auto Scaling group. c. The EC2 Instances sit behind a Classic ELB. Which of the following additional steps should be taken to ensure the above architecture conforms to a well-architected framework? Please select : A. Convert the Classic ELB to an Application ELB. B. Add an additional Auto Scaling Group. C. Add additional EC2 Instances to us-west-2a. D. Add or spread existing instances across multiple Availability Zones.

D. Balancing resources across Availability Zones is a best practice for well-architected applications, as this greatly increases aggregate system availability. Auto Scaling automatically balances EC2 instances across zones when you configure multiple zones in your Auto Scaling group settings. Auto Scaling always launches new instances such that they are balanced between zones as evenly as possible across the entire fleet.
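The even balancing an Auto Scaling group performs can be illustrated with a small helper (plain Python; the zone names match the scenario):

```python
def rebalance(instance_count, zones):
    """Distribute instances across AZs as evenly as possible, the way an
    Auto Scaling group with multiple zones configured would."""
    base, extra = divmod(instance_count, len(zones))
    # The first `extra` zones receive one additional instance.
    return {z: base + (1 if i < extra else 0) for i, z in enumerate(zones)}

# Spreading four instances over the two zones in the scenario:
print(rebalance(4, ["us-west-2a", "us-west-2b"]))
# → {'us-west-2a': 2, 'us-west-2b': 2}

# An uneven count still differs by at most one instance per zone:
print(rebalance(5, ["us-west-2a", "us-west-2b", "us-west-2c"]))
# → {'us-west-2a': 2, 'us-west-2b': 2, 'us-west-2c': 1}
```

The practical consequence: if us-west-2a fails, half the fleet (rather than all of it) is lost, and Auto Scaling replaces the missing capacity in the surviving zone.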

A company has opted to store their cold data on EBS Volumes. Ensuring optimal cost, which of the following would be the ideal EBS Volume type to host this type of data? Please select : A. General Purpose SSD B. Provisioned IOPS SSD C. Throughput Optimized HDD D. Cold HDD

D. Cold HDD (sc1) volumes provide the lowest cost per GB of all EBS volume types. They are designed for large, sequentially accessed workloads where data is accessed infrequently, which makes them the most cost-effective choice for cold data. For information on EBS Volume Types, see: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html
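A quick back-of-the-envelope comparison shows why sc1 wins on cost. The per-GB prices below are illustrative assumptions only; actual EBS pricing varies by region and changes over time:

```python
# Illustrative per-GB-month prices (assumptions for comparison only;
# check current regional EBS pricing before relying on these numbers).
PRICE_PER_GB_MONTH = {
    "gp2 (General Purpose SSD)":      0.10,
    "io1 (Provisioned IOPS SSD)":     0.125,  # plus a per-provisioned-IOPS charge
    "st1 (Throughput Optimized HDD)": 0.045,
    "sc1 (Cold HDD)":                 0.015,
}

def monthly_cost(volume_gb):
    """Approximate monthly storage cost for each volume type."""
    return {vol: round(price * volume_gb, 2)
            for vol, price in PRICE_PER_GB_MONTH.items()}

costs = monthly_cost(1000)           # cost of a 1 TB volume per type
cheapest = min(costs, key=costs.get)
print(cheapest)  # → sc1 (Cold HDD)
```

For cold data the low IOPS and throughput of sc1 are acceptable, so the lowest per-GB price decides the choice.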

A company has a set of resources hosted in an AWS VPC. Having acquired another company with its own set of resources hosted in AWS, it is required to ensure that resources in the VPC of the parent company can access the resources in the VPC of the child company. How can this be accomplished? Please select: A.Establish a NAT Instance to establish communication across VPCs. B. Establish a NAT Gateway to establish communication across VPCs. C. Use a VPN Connection to peer both VPCs. D. Use VPC Peering to peer both VPCs.

D. A VPC Peering Connection is a networking connection between two VPCs that enables you to route traffic between them privately. Instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC Peering Connection between your own VPCs, with a VPC in another AWS account, or with a VPC in a different AWS region.
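Two prerequisites of peering can be expressed in a short sketch: the VPC CIDR blocks must not overlap, and each side needs a route for the peer's CIDR via the peering connection. The CIDRs and the `pcx-` ID below are made-up examples:

```python
import ipaddress

parent_vpc = ipaddress.ip_network("10.0.0.0/16")
child_vpc  = ipaddress.ip_network("10.1.0.0/16")

# VPC peering requires non-overlapping CIDR blocks.
assert not parent_vpc.overlaps(child_vpc)

# Each side then adds a route for the peer's CIDR that targets the
# peering connection (the pcx- ID is a placeholder).
parent_routes = [
    {"destination": str(parent_vpc), "target": "local"},
    {"destination": str(child_vpc),  "target": "pcx-11223344"},
]
print(parent_routes[1])
# → {'destination': '10.1.0.0/16', 'target': 'pcx-11223344'}
```

If the acquired company's VPC had used an overlapping range such as 10.0.0.0/16, the peering connection could not be established and one VPC would need re-addressing first.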

Your company has a set of resources hosted on the AWS Cloud. As a part of the new governing model, there is a requirement that all activity on AWS resources should be monitored. What is the most efficient way to have this implemented? Please select : A. Use VPC Flow Logs to monitor all activity in your VPC. B. Use AWS Trusted Advisor to monitor all of your AWS resources. C. Use AWS Inspector to inspect all of the resources in your account. D. Use AWS CloudTrail to monitor all API activity.

D. AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting. Visibility into your AWS account activity is a key aspect of security and operational best practices. You can use CloudTrail to view, search, download, archive, analyze, and respond to account activity across your AWS infrastructure. You can identify who or what took which action, what resources were acted upon, when the event occurred, and other details to help you analyze and respond to activity in your AWS account. You can integrate CloudTrail into applications using the API, automate trail creation for your organization, check the status of trails you create, and control how users view CloudTrail events.
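CloudTrail delivers each API action as a JSON record. The abbreviated record below (real records carry many more fields, and the values here are invented) shows how the who/what/where/when questions map onto record fields:

```python
import json

# Abbreviated, invented CloudTrail record for illustration.
event = json.loads("""
{
  "eventTime": "2023-05-01T12:34:56Z",
  "eventSource": "ec2.amazonaws.com",
  "eventName": "TerminateInstances",
  "awsRegion": "us-west-2",
  "sourceIPAddress": "203.0.113.10",
  "userIdentity": {"type": "IAMUser", "userName": "alice"}
}
""")

# "Who did what, where, and when" -- the core audit questions.
who   = event["userIdentity"]["userName"]
what  = event["eventName"]
where = event["awsRegion"]
when  = event["eventTime"]
print(f"{who} called {what} in {where} at {when}")
# → alice called TerminateInstances in us-west-2 at 2023-05-01T12:34:56Z
```

Because every console click, CLI command, and SDK call produces such a record, querying the trail answers the governance requirement of monitoring all activity on AWS resources.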

You have a web application hosted on an EC2 Instance in AWS which is being accessed by users across the globe. The Operations team has been receiving support requests about extreme slowness from users in some regions. What can be done to the architecture to improve the response time for these users? Please select : A. Add more EC2 Instances to support the load. B. Change the Instance type to a higher instance type. C. Add Route 53 health checks to improve the performance. D. Place the EC2 Instance behind CloudFront.

D. Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations. When a user requests content that you're serving with CloudFront, the user is routed to the edge location that provides the lowest latency (time delay), so that content is delivered with the best possible performance. Options A and B are incorrect. The latency issue is experienced only by users in certain parts of the world, so increasing the number of EC2 Instances or moving to a larger instance type does not make much of a difference. Option C is incorrect. Route 53 health checks determine whether an instance is healthy; they do not reduce latency. For improving latency, CloudFront is the right solution.

A company has a requirement to store 100TB of data to AWS. This data will be exported using AWS Snowball and needs to then reside in a database layer. The database should have the facility to be queried from a business intelligence application. Each item is roughly 500KB in size. Which of the following is an ideal storage mechanism for the underlying data layer? Please select : A. AWS DynamoDB B. AWS Aurora C. AWS RDS D. AWS Redshift

D. For data of this size, the ideal storage layer is AWS Redshift. AWS Documentation mentions the following on AWS Redshift: Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud. You can start with just a few hundred gigabytes of data and scale to a petabyte or more. This enables you to use your data to acquire new insights for your business and customers. The first step to create a data warehouse is to launch a set of nodes, called an Amazon Redshift cluster. After you provision your cluster, you can upload your data set and then perform data analysis queries. Regardless of the size of the data set, Amazon Redshift offers fast query performance using the same SQL-based tools and business intelligence applications that you use today. For more information on AWS Redshift, please refer to the URL below. https://docs.aws.amazon.com/redshift/latest/mgmt/welcome.html Option A is incorrect because the maximum item size in DynamoDB is 400KB, smaller than the 500KB items described here. Option B is incorrect because Aurora supports a maximum of 64TB of data. Option C is incorrect because MySQL, MariaDB, SQL Server, PostgreSQL, and Oracle RDS DB instances support at most 16TiB of storage.
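The arithmetic behind eliminating the other options:

```python
# 100 TB dataset of ~500 KB items (decimal units, matching the question).
total_bytes = 100 * 10**12          # 100 TB
item_bytes  = 500 * 10**3           # ~500 KB per item

items = total_bytes // item_bytes
print(items)                        # → 200000000 (two hundred million items)

# DynamoDB's per-item limit is 400 KB -- each 500 KB item is too large.
assert item_bytes > 400 * 10**3

# Aurora caps at 64 TB -- below the 100 TB dataset.
assert total_bytes > 64 * 10**12

# RDS engines cap at 16 TiB -- also far below 100 TB.
assert total_bytes > 16 * 2**40
```

Only Redshift's petabyte-scale clusters comfortably hold the dataset while remaining queryable from SQL-based business intelligence tools.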

