AWS Certified Solutions Architect - Associate (SAA-C01) / Multiple Choice


*You have been instructed to establish a successful site-to-site VPN connection from your on-premises network to the VPC (Virtual Private Cloud). As an architect, which of the following prerequisites should you ensure are in place for establishing the site-to-site VPN connection?* (Choose 2) * The main route table to route traffic through a NAT instance * A public IP address on the customer gateway for the on-premises network * A virtual private gateway attached to the VPC * An Elastic IP address to the Virtual Private Gateway

*A public IP address on the customer gateway for the on-premises network* *A virtual private gateway attached to the VPC* A virtual private gateway is the VPN concentrator on the Amazon side of the VPN connection. You create a virtual private gateway and attach it to the VPC from which you want to create the VPN connection. A customer gateway is a physical device or software application on your side of the VPN connection. ----------------------------------- Option A is incorrect since a NAT instance is not required to route traffic via the VPN connection. Option D is incorrect because the Virtual Private Gateway is managed by AWS and does not require an Elastic IP address.

*Your company is planning on deploying an application which will consist of a web and database tier. The database tier should not be accessible from the Internet. How would you design the networking part of the application?* (Choose 2) * A public subnet for the web tier * A private subnet for the web tier * A public subnet for the database tier * A private subnet for the database tier

*A public subnet for the web tier* *A private subnet for the database tier* ----------------------------------- Option B is incorrect since users will not be able to access the web application if it is placed in a private subnet. Option C is incorrect since the question mentions that the database should not be accessible from the internet.

*A company has a requirement for archival of 6TB of data. There is an agreement with the stakeholders for an 8-hour agreed retrieval time. Which of the following can be used as the MOST cost-effective storage option?* * AWS S3 Standard * AWS S3 Infrequent Access * AWS Glacier * AWS EBS Volumes

*AWS Glacier* Amazon Glacier is the perfect solution for this. Since the agreed retrieval time frame of 8 hours can be met, this is the most cost-effective option.

*Which AWS orchestration service can you use to implement Puppet recipes?* * AWS OpsWorks * AWS Elastic Beanstalk * AWS Elastic Load Balancing * Amazon Redshift

*AWS OpsWorks* AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers. ------------------------- AWS Elastic Beanstalk is a service used for deploying infrastructure that orchestrates various AWS services, including EC2, S3, Simple Notification Service, and so on. Elastic Load Balancing is used to balance the load across multiple EC2 servers, and Amazon Redshift is the data warehouse offering of AWS.

*An application needs to have a messaging system in AWS. It is of the utmost importance that the order of the messages is preserved and duplicate messages are not sent. Which of the following services can help fulfill this requirement?* * AWS SQS FIFO * AWS SNS * AWS Config * AWS ELB

*AWS SQS FIFO* Amazon SQS is a reliable and highly-scalable managed message queue service for storing messages in transit between application components. FIFO queues complement the existing Amazon SQS standard queues, which offer high throughput, best-effort ordering, and at-least-once delivery. FIFO queues have essentially the same features as standard queues, but provide the added benefits of supporting ordering and exactly-once processing. FIFO queues provide additional features that help prevent unintentional duplicates from being sent by message producers or from being received by message consumers. Additionally, message groups allow multiple separate ordered message streams within the same queue. *Note:* As per AWS, SQS FIFO queues will ensure delivery of the message only once, and it will be delivered in sequential order (i.e. First-In-First-Out), whereas SNS cannot guarantee that a message is delivered only once. As per the AWS SNS FAQ: *Q: How many times will a subscriber receive each message?* Although most of the time each message will be delivered to your application exactly once, the *distributed nature of Amazon SNS and transient network conditions could result in occasional, duplicate messages at the subscriber end.* Developers should design their applications such that processing a message more than once does not create any errors or inconsistencies. The FIFO queue FAQs state: *High Throughput:* By default, FIFO queues support up to 300 messages per second (300 send, receive, or delete operations per second). When you batch 10 messages per operation (maximum), FIFO queues can support up to 3,000 messages per second. To request a limit increase, file a support request. *Exactly-Once Processing:* A message is delivered once and remains available until a consumer processes and deletes it. Duplicates aren't introduced into the queue. *First-In-First-Out Delivery:* The order in which messages are sent and received is strictly preserved (i.e. First-In-First-Out). Using SQS FIFO queues satisfies both requirements stated in the question, i.e. messages will not be duplicated and the order of the messages will be preserved.
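
As a rough illustration of the FIFO behaviour described above, here is a minimal boto3 sketch; the queue name "orders.fifo", the region, and the message contents are placeholders, not part of the question.

```python
# Minimal sketch of an SQS FIFO queue with boto3 (names and region are placeholders).
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")

# FIFO queue names must end in ".fifo". Content-based deduplication hashes the
# message body, so identical messages sent within the dedup window are dropped.
queue_url = sqs.create_queue(
    QueueName="orders.fifo",
    Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "true"},
)["QueueUrl"]

# Messages sharing a MessageGroupId are delivered strictly in the order sent.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody='{"orderId": 1}',
    MessageGroupId="customer-42",
)
```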

*What level of access does the "root" account have?* * No Access * Read Only Access * Power User Access * Administrator Access

*Administrator Access*

*You have a web server and an app server running. You often reboot your app server for maintenance activities. Every time you reboot the app server, you need to update the connect string for the web server since the IP address of the app server changes. How do you fix this issue?* * Allocate an IPv6 IP address to the app server * Allocate an Elastic Network Interface to the app server * Allocate an elastic IP address to the app server * Run a script to change the connection

*Allocate an elastic IP address to the app server* Allocating an IPv6 IP address won't be of any use because whenever the server comes back, it is going to get assigned another new IPv6 IP address. Also, if your VPC doesn't support IPv6 and if you did not select the IPv6 option while creating the instance, you may not be able to allocate one. The Elastic Network Interface helps you add multiple network interfaces but won't get you a static IP address. You can run a script to change the connection, but unfortunately you have to run it every time you are done with any maintenance activities. You can even automate the running of the script, but why add so much complexity when you can solve the problem simply by allocating an EIP?

*An application needs to have a Data store hosted in AWS. The following requirements are in place for the Data store: a) An initial storage capacity of 8 TB b) The ability to accommodate a database growth of 8 GB per day c) The ability to have 4 Read Replicas Which of the following Data stores would you choose for these requirements?* * DynamoDB * Amazon S3 * Amazon Aurora * SQL Server

*Amazon Aurora* Aurora has a storage limit of 64TB and can easily accommodate the initial 8TB plus a database growth of 8GB/day for a period of 20+ years. It can have up to 15 Aurora Replicas that can be distributed across the Availability Zones that a DB cluster spans within an AWS Region. Aurora Replicas work well for read scaling because they are fully dedicated to read operations on your cluster volume. Write operations are managed by the primary instance. Because the cluster volume is shared among all DB instances in your DB cluster, no additional work is required to replicate a copy of the data for each Aurora Replica. *Note:* Our DB choice needs to fulfill 3 criteria: 1. Initial storage capacity of 8 TB 2. Daily DB growth of 8GB/day 3. 4 Read Replicas. DynamoDB, alongside DynamoDB Accelerator *(DAX)*, can support up to 9 read replicas in its primary cluster. However, we have to choose the most suitable option from those listed in the question, and Aurora Replicas are fully dedicated to read operations in the cluster. *NOTE:* Yes, the first line of the question does not mention anything about the database, but the requirements mention it, and you were also asked about read replicas. In the real exam, Amazon asks these types of questions to check your understanding under stress, hence we try to replicate them to prepare you for the exam. *DynamoDB also fulfills all 3 criteria mentioned above. But when we think about "Read Replicas", Aurora is fully dedicated to read operations in the cluster.* For this question, we have to choose only one option, so Aurora is the best option here.

*You are deploying an application on Amazon EC2, which must call AWS APIs. What method should you use to securely pass credentials to the application?* * Pass API credentials to the instance using Instance userdata. * Store API credentials as an object in Amazon S3. * Embed the API credentials into your application. * Assign IAM roles to the EC2 Instances.

*Assign IAM roles to the EC2 Instances* You can use roles to delegate access to users, applications, or services that don't normally have access to your AWS resources. It is not a good practice to use long-term IAM credentials in a production application. A good practice, however, is to use IAM Roles.
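
To illustrate why roles are preferred, here is a minimal boto3 sketch that runs on an instance with a role attached; the bucket name is a placeholder, and the only assumption is that the role's policy grants the S3 permission used.

```python
# Runs on an EC2 instance that has an IAM role (instance profile) attached.
# boto3's default credential chain fetches temporary credentials from the
# instance metadata service, so no access keys appear in the code or on disk.
import boto3

s3 = boto3.client("s3")  # no aws_access_key_id / aws_secret_access_key passed

# "my-app-bucket" is a placeholder; the role's policy must allow s3:ListBucket.
response = s3.list_objects_v2(Bucket="my-app-bucket")
for obj in response.get("Contents", []):
    print(obj["Key"])
```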

*When you create an Auto Scaling mechanism for a server, which two things are mandatory?* (Choose two.) * Elastic Load Balancing * Auto Scaling group * DNS resolution * Launch configuration

*Auto Scaling group* *Launch configuration* The launch configuration and the Auto Scaling group are mandatory.

*You are running a multiple-language web site in AWS, which is served via CloudFront. The language is specified in the HTTP request, as shown here: http://abc111xyz.cloudfront.net/main.html?language=en http://abc111xyz.cloudfront.net/main.html?language=es http://abc111xyz.cloudfront.net/main.html?language=fr How should you configure CloudFront to deliver cached data in the correct language?* * Serve dynamic content. * Cache an object at the origin. * Forward cookies to the origin. * Base it on the query string parameter.

*Base it on the query string parameter.* In this example, if the main page for your web site is main.html, the three requests shown will cause CloudFront to cache main.html three times, once for each value of the language query string parameter. ----------------------------------- If you serve dynamic content, cache an object at the origin, or forward cookies to the origin, you won't provide the best user experience to your end user.

*What is the range of CIDR blocks that can be used inside a VPC?* * Between /16 and /28 * Between /16 and /30 * Between /14 and /24 * Between /18 and /24

*Between /16 and /28* The range of CIDR blocks in AWS is between /16 and /28. ------------------------- The allowed block size is between a /16 netmask (65,536 IP addresses) and a /28 netmask (16 IP addresses).

*You are a solutions architect who works with a large digital media company. The company has decided that they want to operate within the Japanese region and they need a bucket called "testbucket" set up immediately to test their web application on. You log in to the AWS console and try to create this bucket in the Japanese region however you are told that the bucket name is already taken. What should you do to resolve this?* * Bucket names are global, not regional. This is a popular bucket name and is already taken. You should choose another bucket name. * Run a WHO IS request on the bucket name and get the registered owners email address. Contact the owner and ask if you can purchase the rights to the bucket. * Raise a ticket with AWS and ask them to release the name "testbucket" to you. * Change your region to Korea and then create the bucket "testbucket".

*Bucket names are global, not regional. This is a popular bucket name and is already taken. You should choose another bucket name.*

*You want to explicitly "deny" certain traffic to the instance running in your VPC. How do you achieve this?* * By using a security group * By adding an entry in the route table * By putting the instance in the private subnet * By using a network access control list

*By using a network access control list* By using a security group, you can allow and disallow certain traffic, but you can't explicitly deny traffic since the deny option does not exist for security groups. There is no option for denying particular traffic via a route table. By putting an instance in the private subnet, you are just removing the Internet accessibility of this instance, which is not going to deny any particular traffic.

*You are running a sensitive application in the cloud, and you want to deny everyone access to the EC2 server hosting the application except a few authorized people. How can you do this?* * By using a security group * By removing the Internet gateway from the VPC * By removing all the entries from the route table * By using a network access control list

*By using a network access control list* Only via the network access control list can you explicitly deny certain traffic to the instance running in your VPC. ------------------------- Using a security group, you can't explicitly deny traffic. By removing the Internet gateway from the VPC, you will stop Internet access from the VPC, but that is not going to deny everyone access to the EC2 server running inside the VPC. If you remove all the routes from the route table, then no one will be able to connect to the instance.

*You want EC2 instances to give access without any username or password to S3 buckets. What is the easiest way of doing this?* * By using a VPC S3 endpoint * By using a signed URL * By using roles * By sharing the keys between S3 and EC2

*By using roles* A VPC endpoint is going to create a path between the EC2 instance and the Amazon S3 bucket. A signed URL won't help EC2 instances access S3 buckets. You cannot share keys between S3 and EC2.

*What are the two configuration management services that AWS OpsWorks supports?* (Choose 2) * Chef * Ansible * Puppet * Java

*Chef* *Puppet* AWS OpsWorks supports Chef and Puppet.

*There is a requirement for an iSCSI device and the legacy application needs local storage. Which of the following can be used to meet the demands of the application?* * Configure the Simple Storage Service. * Configure Storage Gateway Cached volume. * Configure Storage Gateway Stored volume. * Configure Amazon Glacier.

*Configure Storage Gateway Stored Volume.* If you need low-latency access to your entire dataset, first configure your on-premises gateway to store all your data locally. Then asynchronously back up point-in-time snapshots of this data to Amazon S3. This configuration provides durable and inexpensive offsite backups that you can recover to your local data center or Amazon EC2. For example, if you need replacement capacity for disaster recovery, you can recover the backups to Amazon EC2. S3 and Glacier are not used for this purpose. Volume gateway provides an iSCSI target, which enables you to create volumes and mount them as iSCSI devices from your on-premises or EC2 application servers. The volume gateway runs in either a cached or stored mode. * In the cached mode, your primary data is written to S3, while your frequently accessed data is retained locally in a cache for low-latency access. * In the stored mode, your primary data is stored locally and your entire dataset is available for low-latency access while being asynchronously backed up to AWS.

*A company has an on-premises infrastructure which they want to extend to the AWS Cloud. There is a need to ensure that the communication across both environments is possible over the Internet when initiated from on-premises. What would you create in this case to fulfill this requirement?* * Create a VPC peering connection between the on-premises and AWS Environment. * Create an AWS Direct connection between the on-premises and the AWS Environment. * Create a VPN connection between the on-premises and AWS Environment. * Create a Virtual private gateway connection between the on-premises and the AWS Environment.

*Create a VPN connection between the on-premises and the AWS Environment.* One can create a VPN connection to establish communication across both environments over the Internet. A VPC VPN Connection utilizes IPSec to establish encrypted network connectivity between your intranet and Amazon VPC over the Internet. VPN Connections can be configured in minutes and are a good solution if you have an immediate need, have low to modest bandwidth requirements, and can tolerate the inherent variability in Internet-based connectivity. ----------------------------------- Option A is invalid because a VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IPv4 addresses or IPv6 addresses. It is not used for connections between an on-premises environment and AWS. Option D is invalid because a virtual private gateway is only the Amazon VPC side of the VPN connection. For the communication to take place between the on-premises servers and the AWS EC2 instances within the VPC, we also need to set up a customer gateway at the on-premises location. *Note:* The question says that "There is a need to ensure that the communication across both environments is possible *over the Internet*." AWS Direct Connect does not involve the Internet; instead, it uses a dedicated, private network connection between your intranet and Amazon VPC.

*A company hosts a popular web application that connects to an Amazon RDS MySQL DB instance running in a default VPC private subnet created with default ACL settings. The web servers must be accessible only to customers on an SSL connection and the database must only be accessible to web servers in a public subnet. Which solution meets these requirements without impacting other running applications?* (Choose 2) * Create a network ACL on the Web Server's subnets, allow HTTPS port 443 inbound and specify the source as 0.0.0.0/0 * Create a Web Server security group that allows HTTPS port 443 inbound traffic from anywhere (0.0.0.0/0) and apply it to the Web Servers. * Create a DB Server security group that allows MySQL port 3306 inbound and specify the source as the Web Server security group. * Create a network ACL on the DB subnet, allow MySQL port 3306 inbound for Web Servers and deny all outbound traffic. * Create a DB Server security groups that allows HTTPS port 443 inbound and specify the source as a Web Server security group.

*Create a Web Server security group that allows HTTPS port 443 inbound traffic from anywhere (0.0.0.0/0) and apply it to the Web Servers.* *Create a DB Server security group that allows MySQL port 3306 inbound and specify the source as the Web Server security group.* 1) To ensure that traffic can flow into your web server from anywhere on secure traffic, you need to allow inbound security at 443. 2) And then, you need to ensure that traffic can flow from the web servers to the database server via the database security group. ----------------------------------- Options A and D are invalid. Network ACLs are stateless, so we would need to set rules for both inbound and outbound traffic for network ACLs. Option E is also invalid because, to communicate with the MySQL servers, we need to allow traffic to flow through port 3306, not 443.
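
A hedged boto3 sketch of the two security-group rules described above; the security group IDs are placeholders.

```python
# Sketch of the two ingress rules (group IDs are placeholders).
import boto3

ec2 = boto3.client("ec2")

# Web tier: HTTPS 443 inbound from anywhere.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789web",
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTPS from the Internet"}],
    }],
)

# DB tier: MySQL 3306 inbound only from the web server security group.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789db",
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": "sg-0123456789web"}],
    }],
)
```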

*Your company is looking at decreasing the amount of time it takes to build servers which are deployed as EC2 instances. These instances always have the same type of software installed as per the security standards. As an architect, what would you recommend to decrease the server build time?* * Look at creating snapshots of EBS Volumes * Create the same master copy of the EBS volume * Create a base AMI * Create a base profile

*Create a base AMI* An Amazon Machine Image (AMI) provides the information required to launch an instance, which is a virtual server in the cloud. You must specify a source AMI when you launch an instance. You can launch multiple instances from a single AMI when you need multiple instances with the same configuration. You can use different AMIs to launch instances when you need instances with different configurations. ----------------------------------- Options A and B are incorrect since these cannot be used to create a master copy of the instance. Option D is incorrect because creating a profile will not assist.

*You are working as a consultant for a start-up firm. They have developed a web application for employees to enable secure file sharing with external vendors. They created an Auto Scaling group for the web servers which requires two m4.large EC2 instances running at all times, scaling up to a maximum of twelve instances. After deploying this application, a huge rise in billing was observed. Due to a limited budget, the CTO has requested your advice to optimize the usage of instances in the Auto Scaling group. What will be the best solution to reduce cost without any performance impact?* * Create an Auto Scaling group with t2.micro On-Demand instances. * Create an Auto Scaling group with a mix of On-Demand & Spot Instances. Select On-Demand base as 0. Above On-Demand base, select 100% On-Demand instances & 0% Spot Instances. * Create an Auto Scaling group with all Spot Instances. * Create an Auto Scaling group with a mix of On-Demand & Spot Instances. Select On-Demand base as 2. Above On-Demand base, select 20% On-Demand instances & 80% Spot Instances.

*Create an Auto Scaling group with a mix of On-Demand & Spot Instances. Select On-Demand base as 2. Above On-Demand base, select 20% On-Demand instances & 80% Spot Instances.* An Auto Scaling group supports a mix of On-Demand & Spot instances, which helps to design a cost-optimized solution without any performance impact. You can choose the percentage of On-Demand & Spot instances based upon application requirements. With Option D, the Auto Scaling group will have the initial 2 instances as On-Demand instances, while the remaining instances will be launched in a ratio of 20% On-Demand instances & 80% Spot instances. ----------------------------------- Option A is incorrect: although t2.micro would reduce cost, it would impact the performance of the application. Option B is incorrect as there would not be any cost reduction with all On-Demand instances. Option C is incorrect: although this would reduce cost, all Spot instances in an Auto Scaling group may cause inconsistencies in the application & lead to poor performance.
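
A minimal boto3 sketch of Option D's instance distribution, assuming a launch template named "web-app" already exists; the group name and subnet IDs are placeholders.

```python
# Sketch of a mixed On-Demand/Spot Auto Scaling group (names and IDs are placeholders).
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-app-asg",
    MinSize=2,
    MaxSize=12,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",  # placeholder subnets
    MixedInstancesPolicy={
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateName": "web-app",  # assumed to exist already
                "Version": "$Latest",
            }
        },
        "InstancesDistribution": {
            "OnDemandBaseCapacity": 2,                  # first 2 instances are On-Demand
            "OnDemandPercentageAboveBaseCapacity": 20,  # above the base: 20% On-Demand / 80% Spot
            "SpotAllocationStrategy": "capacity-optimized",
        },
    },
)
```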

*You are running an application on EC2 instances, and you want to add a new functionality to your application. To add the functionality, your EC2 instance needs to write data in an S3 bucket. Your EC2 instance is already running, and you can't stop/reboot/terminate it to add the new functionality. How will you achieve this?* (Choose two.) * Create a new user who has access to EC2 and S3. * Launch a new EC2 instance with an IAM role that can access the S3 bucket. * Create an IAM role that allows write access to S3 buckets. * Attach the IAM role that allows write access to S3 buckets to the running EC2 instance.

*Create an IAM role that allows write access to S3 buckets.* *Attach the IAM role that allows write access to S3 buckets to the running EC2 instance.* ------------------------- Since the application needs to write to an S3 bucket, if you create a new user who has access to EC2 and S3, it won't help. You can't launch new EC2 servers because the question clearly says that you can't stop or terminate the existing EC2 server.
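A small boto3 sketch of attaching a role to the already-running instance; the instance profile name and instance ID are placeholders, and the IAM role is assumed to already be wrapped in an instance profile.

```python
# Sketch: attach an instance profile to a running instance without stopping it.
import boto3

ec2 = boto3.client("ec2")

ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": "s3-write-role"},  # instance profile wrapping the IAM role
    InstanceId="i-0123456789abcdef0",              # placeholder instance ID
)
```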

*Which instance type runs on hardware allocated to a single customer?* * EC2 spot instance * Dedicated instance * Reserved instance * On-demand instance

*Dedicated instance* A dedicated instance runs on hardware allocated to a single customer. Dedicated instances are physically isolated at the host hardware level from instances that belong to other AWS accounts. ------------------------- Spot, reserved, and on-demand instances are different pricing models for EC2 instances.

*A customer is hosting their company website on a cluster of web servers that are behind a public-facing load balancer. The web application interfaces with an AWS RDS database. It has been noticed that a set of similar types of queries is causing a performance bottleneck at the database layer. Which of the following architecture additions can help alleviate this issue?* * Deploy ElastiCache in front of the web servers. * Deploy ElastiCache in front of the database servers. * Deploy Elastic Load balancer in front of the web servers. * Enable Multi-AZ for the database.

*Deploy ElastiCache in front of the database servers.* Amazon ElastiCache offers fully managed Redis and Memcached. Seamlessly deploy, operate, and scale popular open source compatible in-memory data stores. Build data-intensive apps or improve the performance of your existing apps by retrieving data from high throughput and low latency in-memory data stores. ----------------------------------- Option A is incorrect since the database is where the issue lies, hence you need to ensure that ElastiCache is placed in front of the database servers. Option C is incorrect since there is an issue with the database servers, so we don't need to add anything for the web servers. Option D is incorrect since Multi-AZ is used for high availability of the database.
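
As an illustration of the caching pattern (not part of the question), here is a hedged cache-aside sketch using the redis-py client against an ElastiCache Redis endpoint; the endpoint, TTL, and `fetch_from_rds` callback are placeholders.

```python
# Cache-aside sketch: check Redis first, fall back to the database on a miss.
import json
import redis

cache = redis.Redis(host="my-cache.abc123.use1.cache.amazonaws.com", port=6379)

def get_report(report_id, fetch_from_rds):
    key = f"report:{report_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)               # cache hit: skip the database entirely
    result = fetch_from_rds(report_id)          # cache miss: run the expensive query once
    cache.setex(key, 300, json.dumps(result))   # keep the result for 5 minutes
    return result
```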

*You have a business-critical two-tier web application currently deployed in 2 Availability Zones in a single region, using Elastic Load Balancing and Auto Scaling. The app depends on synchronous replication at the database layer. The application needs to remain fully available even if one application AZ goes offline and if the Auto Scaling cannot launch new instances in the remaining AZ. How can the current architecture be enhanced to ensure this?* * Deploy in 2 regions using Weighted Round Robin with Auto Scaling minimums set at 50% peak load per region. * Deploy in 3 AZ with Auto Scaling minimum set to handle 33 percent peak load per zone. * Deploy in 3 AZ with Auto Scaling minimum set to handle 50 percent peak load per zone. * Deploy in 2 regions using Weighted Round Robin with Auto Scaling minimums set at 100% peak load per region.

*Deploy in 3 AZ with Auto Scaling minimum set to handle 50 percent peak load per zone.* Since the requirement states that the application should never go down even if an AZ is not available, we need to maintain 100% availability. ----------------------------------- Options A and D are incorrect because regional deployment is not possible for ELB. ELBs can manage traffic within a region, not between regions. Option B is incorrect because even if one AZ goes down, we would be operating at only 66% and not the required 100%. *Note:* The question clearly mentions that 'The application needs to remain fully available even if one application AZ goes offline and if Auto Scaling cannot launch new instances in the remaining AZ.' Hence you need to maintain 100% availability. In Option B, with 3 AZs at a minimum of 33% of peak load each, if one AZ fails then 33% + 33% = 66%; you can only handle 66% of the load and the remaining 34% is not handled. With Option C, with 3 AZs at a minimum of 50% of peak load each, if one AZ fails then 50% + 50% = 100%; you can handle the full load, i.e. 100%.

*One of your users is trying to upload a 7.5GB file to S3. However, they keep getting the following error message: "Your proposed upload exceeds the maximum allowed object size.". What solution to this problem does AWS recommend?* * Design your application to use the Multipart Upload API for all objects. * Raise a ticket with AWS to increase your maximum object size. * Design your application to use large object upload API for this object. * Log in to the S3 console, click on the bucket and then click properties. You can then increase your maximum object size to 1TB.

*Design your application to use the Multipart Upload API for all objects.*
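A minimal boto3 sketch of a multipart upload; the bucket, key, and file names are placeholders, and the part size is an arbitrary choice. boto3's managed transfer splits objects above the threshold into parts automatically.

```python
# Sketch: upload a large file using boto3's managed multipart transfer.
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,  # switch to multipart above 100 MB
    multipart_chunksize=100 * 1024 * 1024,  # a 7.5 GB file becomes roughly 75 parts
)

# File, bucket, and key names are placeholders.
s3.upload_file("backup-7.5gb.bin", "my-upload-bucket", "backups/backup.bin", Config=config)
```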

*You have created a VPC with the CIDR block 10.0.0.0/16 and have created a public subnet and a private subnet, 10.0.0.0/24 and 10.0.1.0/24, respectively, within it. Which entries should be present in the main route table to allow the instances in the VPC to communicate with each other?* * Destination: 10.0.0.0/0 and Target ALL * Destination: 10.0.0.0/16 and Target ALL * Destination: 10.0.0.0/24 and Target VPC * Destination: 10.0.0.0/16 and Target Local

*Destination: 10.0.0.0/16 and Target Local* By specifying Target Local, you allow the instances in the VPC to communicate with each other. ----------------------------------- Target ALL and Target VPC are not valid options.

*Which statement best describes Availability Zones?* * Two zones containing compute resources that are designed to automatically maintain synchronized copies of each other's data. * Distinct locations from within an AWS region that are engineered to be isolated from failures. * A Content Distribution Network used to distribute content to users. * Restricted areas designed specifically for the creation of Virtual Private Clouds.

*Distinct locations from within an AWS region that are engineered to be isolated from failures.* An Availability Zone (AZ) is a distinct location within an AWS Region. ----------------------------------- Each Region comprises at least two AZs.

*A team is building an application that must persist and index JSON data in a highly available data store. Latency of data access must remain consistent despite very high application traffic. What service should the team choose for the above requirement?* * Amazon EFS * Amazon Redshift * DynamoDB * AWS CloudFormation

*DynamoDB* Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB natively supports storing and indexing JSON documents, and hence is the perfect data store for the requirement in question.

*A company planning on building and deploying a web application on AWS needs to have a data store to store session data. Which of the below services can be used to meet this requirement?* (Choose 2) * AWS RDS * AWS SQS * DynamoDB * AWS ElastiCache

*DynamoDB* *AWS ElastiCache* Amazon ElastiCache offers fully managed Redis and Memcached. Seamlessly deploy, operate, and scale popular open source compatible in-memory data stores. Build data-intensive apps or improve the performance of your existing apps by retrieving data from high throughput and low latency in-memory data stores. Amazon ElastiCache is a popular choice for Gaming, Ad-Tech, Financial Services, Healthcare, and IoT apps. Consider only storing a unique session identifier in an HTTP cookie and storing more detailed user session information on the server side. Most programming platforms provide a native session management mechanism that works this way. However, user session information is often stored on the local file system by default and results in a stateful architecture. A common solution to this problem is to store this information in a database. Amazon DynamoDB is a great choice because of its scalability, high availability, and durability characteristics. For many platforms, there are open source drop-in replacement libraries that allow you to store native sessions in Amazon DynamoDB. *Note:* In order to address scalability and to provide shared data storage for sessions that can be accessed from any individual web server, you can abstract the HTTP sessions from the web servers themselves. A common solution for this is to leverage an in-memory key/value store such as Redis or Memcached. *In-memory caching improves application performance by storing frequently accessed data items in memory, so that they can be retrieved without access to the primary data store.* Properly leveraging caching can result in an application that not only performs better, but also costs less at scale. Amazon ElastiCache is a managed service that reduces the administrative burden of deploying an in-memory cache in the cloud. ----------------------------------- Option A is incorrect; RDS is a distributed relational database. It is a web service running "in the cloud" designed to simplify the setup, operation, and scaling of a relational database for use in applications. Option B is incorrect; SQS is a fully managed message queuing service that makes it easy to decouple and scale microservices, distributed systems, and serverless applications.

*Your company is planning on hosting an e-commerce application on the AWS Cloud. There is a requirement for sessions to be always maintained for users. Which of the following can be used for storing session data?* (Choose 2) * CloudWatch * DynamoDB * Elastic Load Balancing * ElastiCache * Storage Gateway

*DynamoDB* *ElastiCache* DynamoDB and ElastiCache are perfect options for storing session data. Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale. It is a fully managed cloud database and supports both document and key-value store models. Its flexible data model, reliable performance, and automatic scaling of throughput capacity makes it a great fit for mobile, web, gaming, ad tech, IoT, and many other applications. ElastiCache is a web service that makes it easy to set up, manage, and scale a distributed in-memory data store or cache environment in the cloud. It provides a high-performance, scalable, and cost-effective caching solution while removing the complexity associated with deploying and managing a distributed cache environment.
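A hedged boto3 sketch of storing session state in DynamoDB; the table name, key schema, and TTL attribute are assumptions for illustration (the table would use "session_id" as its partition key and "expires_at" as a TTL attribute).

```python
# Sketch: persist a web session as a DynamoDB item (all names are placeholders).
import time
import boto3

table = boto3.resource("dynamodb").Table("user-sessions")

table.put_item(Item={
    "session_id": "b1946ac92492d2347c6235b4d2611184",
    "user_id": "user-42",
    "cart": ["sku-1", "sku-2"],
    "expires_at": int(time.time()) + 3600,  # let DynamoDB TTL expire the session after an hour
})

# Any web server in the fleet can now fetch the same session by its ID.
session = table.get_item(Key={"session_id": "b1946ac92492d2347c6235b4d2611184"}).get("Item")
```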

*An application is currently hosted on an EC2 Instance which has attached EBS Volumes. The data on these volumes is accessed for a week, and after that, the documents need to be moved to infrequent access storage. Which of the following EBS volume types provides cost efficiency for the moved documents?* * EBS Provisioned IOPS SSD * EBS Throughput Optimized HDD * EBS General Purpose SSD * EBS Cold HDD

*EBS Cold HDD* Cold HDD (sc1) volumes provide low-cost magnetic storage that defines performance in terms of throughput rather than IOPS. With a lower throughput limit than st1, sc1 is a good fit for large, sequential cold-data workloads. If you require infrequent access to your data and are looking to save costs, sc1 provides inexpensive block storage.

*There is a requirement to host a database on an EC2 Instance. It is also required that the EBS volume should support 18,000 IOPS. Which Amazon EBS volume type meets the performance requirements of this database?* * EBS Provisioned IOPS SSD * EBS Throughput Optimized HDD * EBS General Purpose SSD * EBS Cold HDD

*EBS Provisioned IOPS SSD* For high performance and high IOPS requirements as in this case, the ideal choice would be to opt for EBS Provisioned IOPS SSD.

*There is a requirement to host a database application having resource-intensive reads and writes. Which of the below options is most suitable?* * EBS Provisioned IOPS SSD * EBS SSD * EBS Throughput Optimized HDD * EBS Cold Storage

*EBS Provisioned IOPS SSD* Since there is a requirement for high performance with high IOPS, one needs to opt for EBS Provisioned IOPS SSD.

*A company is building a Two-Tier web application to serve dynamic transaction-based content. The Data Tier uses an Online Transactional Processing (OLTP) database. What services should you leverage to enable an elastic and scalable Web Tier?* * Elastic Load Balancing, Amazon EC2, and Auto Scaling. * Elastic Load Balancing, Amazon RDS, with Multi-AZ, and Amazon S3. * Amazon RDS with Multi-AZ and Auto Scaling. * Amazon EC2, Amazon Dynamo DB, and Amazon S3.

*Elastic Load Balancing, Amazon EC2, and Auto Scaling.* The question mentions a scalable Web Tier, not a Database Tier. So Options B, C, and D can be eliminated since they are database-related options.

*A company hosts data in S3. There is now a mandate that going forward, all data in the S3 bucket needs to be encrypted at rest. How can this be achieved?* * Use AWS Access Keys to encrypt the data. * Use SSL Certificates to encrypt the data. * Enable Server-side encryption on the S3 bucket. * Enable MFA on the S3 bucket.

*Enable Server-side encryption on the S3 bucket.* Server-side encryption is about data encryption at rest; that is, Amazon S3 encrypts your data at the object level as it writes it to disks in its data centers and decrypts it for you when you access it. As long as you authenticate your request and you have access permissions, there is no difference in the way you access encrypted or unencrypted objects.
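
A minimal boto3 sketch enabling default server-side encryption (SSE-S3, AES-256) on a placeholder bucket.

```python
# Sketch: turn on default encryption so every new object is encrypted at rest.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_encryption(
    Bucket="my-data-bucket",  # placeholder bucket name
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)
```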

*You have created a new AWS account for your company, and you have also configured multi-factor authentication on the root account. You are about to create your new users. What strategy should you consider in order to ensure that there is good security on this account?* * Restrict login to the corporate network only. * Give all users the same password so that if they forget their password they can just ask their co-workers. * Require users to only be able to log in using biometric authentication. * Enact a strong password policy: user passwords must be changed every 45 days, with each password containing a combination of capital letters, lower case letters, numbers, and special symbols.

*Enact a strong password policy: user passwords must be changed every 45 days, with each password containing a combination of capital letters, lower case letters, numbers, and special symbols.*
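
A hedged boto3 sketch of such an account password policy; the minimum length of 12 and the reuse-prevention setting are assumptions beyond what the answer states.

```python
# Sketch: IAM account password policy with 45-day rotation and mixed character classes.
import boto3

iam = boto3.client("iam")

iam.update_account_password_policy(
    MinimumPasswordLength=12,        # assumption; the answer does not specify a length
    RequireUppercaseCharacters=True,
    RequireLowercaseCharacters=True,
    RequireNumbers=True,
    RequireSymbols=True,
    MaxPasswordAge=45,               # force a password change every 45 days
    PasswordReusePrevention=5,       # assumption: block reuse of the last 5 passwords
)
```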

*You have created an AWS Lambda function that will write data to a DynamoDB table. Which of the following must be in place to ensure that the Lambda function can interact with the DynamoDB table?* * Ensure IAM Role is attached to the Lambda function which has the required DynamoDB privileges. * Ensure an IAM User is attached to the Lambda function which has the required DynamoDB privileges. * Ensure the Access keys are embedded in the AWS Lambda function. * Ensure the IAM user password is embedded in the AWS Lambda function.

*Ensure an IAM Role is attached to the Lambda function which has the required DynamoDB privileges.* Each Lambda function has an IAM role (execution role) associated with it. You specify the IAM role when you create your Lambda function. Permissions you grant to this role determine what AWS Lambda can do when it assumes the role. There are two types of permissions that you grant to the IAM role: 1) If your Lambda function code accesses other AWS resources, such as reading an object from an S3 bucket or writing logs to CloudWatch Logs, you need to grant permissions for the relevant Amazon S3 and CloudWatch actions to the role. 2) If the event source is stream-based (Amazon Kinesis Data Streams and DynamoDB streams), AWS Lambda polls these streams on your behalf. AWS Lambda needs permission to poll the stream and read new records on the stream, so you need to grant the relevant permissions to this role.

*A company has resources hosted in their AWS Account. There is a requirement to monitor API activity for all regions and the audit needs to be applied for future regions as well. Which of the following can be used to fulfill this requirement?* * Ensure CloudTrail for each region, then enable for each future region. * Ensure one CloudTrail trail is enabled for all regions. * Create a CloudTrail for each region. Use CloudFormation to enable the trail for all future regions. * Create a CloudTrail for each region. Use AWS Config to enable the trail for all future regions.

*Ensure one CloudTrail trail is enabled for all regions.* You can now turn on a trail across all regions for your AWS account. CloudTrail will deliver log files from all regions to the Amazon S3 bucket and an optional CloudWatch Logs log group you specified. Additionally, when AWS launches a new region, CloudTrail will create the same trail without taking any action.
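
A minimal boto3 sketch of a single multi-region trail; the trail and bucket names are placeholders, and the bucket is assumed to already have a policy allowing CloudTrail to write to it.

```python
# Sketch: one multi-region CloudTrail trail delivering to a placeholder S3 bucket.
import boto3

cloudtrail = boto3.client("cloudtrail")

cloudtrail.create_trail(
    Name="org-audit-trail",
    S3BucketName="my-cloudtrail-logs",  # bucket policy must allow CloudTrail to write
    IsMultiRegionTrail=True,            # covers all current and future regions
)
cloudtrail.start_logging(Name="org-audit-trail")
```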

*To connect your corporate data center to AWS, you need at least which of the following components?* (Choose two.) * Internet gateway * Virtual private gateway * NAT gateway * Customer gateway

*Virtual private gateway* *Customer gateway* To connect to AWS from your data center, you need a customer gateway, which is the customer side of the connection, and a virtual private gateway, which is the AWS side of the connection. An Internet gateway is used to connect a VPC with the Internet, whereas a NAT gateway allows servers running in the private subnet to connect to the Internet.

*Where do you define the details of the type of servers to be launched when launching the servers using Auto Scaling?* * Auto Scaling group * Launch configuration * Elastic Load Balancer * Application load balancer

*Launch configuration* You define the type of servers to be launched in the launch configuration. The Auto Scaling group is used to define the scaling policies, Elastic Load Balancing is used to distribute the traffic across multiple instances, and the application load balancer is used to distribute HTTP/HTTPS traffic at OSI layer 7.

*An application currently uses AWS RDS MySQL as its data layer. Due to recent performance issues on the database, it has been decided to separate the querying part of the application by setting up a separate reporting layer. Which of the following additional steps could also potentially assist in improving the performance of the underlying database?* * Make use of Multi-AZ to setup a secondary database in another Availability Zone. * Make use of Multi-AZ to setup a secondary database in another region. * Make use of Read Replicas to setup a secondary read-only database. * Make use of Read Replicas to setup a secondary read and write database.

*Make use of Read Replicas to setup a secondary read-only database.* Amazon RDS Read Replicas provide enhanced performance and durability for database (DB) instances. This feature makes it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads. You can create one or more replicas of a given source DB Instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput.
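
A hedged boto3 sketch of creating such a replica; the instance identifiers and instance class are placeholders. The reporting layer would then connect to the replica's endpoint instead of the primary.

```python
# Sketch: create a read-only replica of a placeholder source RDS MySQL instance.
import boto3

rds = boto3.client("rds")

rds.create_db_instance_read_replica(
    DBInstanceIdentifier="appdb-reporting-replica",   # new replica (placeholder name)
    SourceDBInstanceIdentifier="appdb-primary",       # existing source instance (placeholder)
    DBInstanceClass="db.r5.large",                    # assumption; defaults to the source's class if omitted
)
```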

*Can I delete a snapshot of an EBS Volume that is used as the root device of a registered AMI?* * Only using the AWS API. * Yes. * Only via the Command Line. * No.

*No*

*To save administration headaches, a consultant advises that you leave all security groups in web facing subnets open on port 22 to 0.0.0.0/0 CIDR. That way, you can connect wherever you are in the world. Is this a good security design?* * Yes * No

*No* 0.0.0.0/0 would allow ANYONE from ANYWHERE to connect to your instances. This is generally a bad plan. The phrase 'web facing subnets' does not mean just web servers; it would include any instances in that subnet, some of which you may not want strangers attacking. You would only allow 0.0.0.0/0 on port 80 or 443 to connect to your public-facing Web Servers, or preferably only to an ELB. Good security starts by limiting public access to only what the customer needs.

*Can you add an IAM role to an IAM group?* * Yes * No * Yes, if there are ten members in the group * Yes, if the group allows adding a role

*No* No, you can't add an IAM role to an IAM group.

*To help you manage your Amazon EC2 instances, you can assign your own metadata in the form of ________.* * Certificates * Notes * Wildcards * Tags

*Tags* Tagging is a key part of managing an environment. Even in a lab, it is easy to lose track of the purpose of a resource, and tricky to determine why it was created and whether it is still needed. This can rapidly translate into lost time and lost money.

*I can use the AWS Console to add a role to an EC2 instance after that instance has been created and powered-up.* * False * True

*True*

*How many copies of my data does RDS - Aurora store by default?* * 1 * 3 * 2 * 6

6

*What is the best way to manage RESTful APIs?* * API Gateway * EC2 servers * Lambda * AWS Batch

*API Gateway* Theoretically, EC2 servers can be used for managing the APIs, but if you can do it easily through API Gateway, why would you ever consider EC2 servers? Lambda and Batch are used for executing the code.

*Which of the following data formats does Amazon Athena support?* (Choose 3) * Apache ORC * JSON * XML * Apache Parquet

*Apache ORC* *JSON* *Apache Parquet* Amazon Athena is an interactive query service that makes it easy to analyse data in Amazon S3, using standard SQL commands. It will work with a number of data formats including "JSON", "Apache Parquet", and "Apache ORC" amongst others, but "XML" is not a format that is supported.

*AWS's NoSQL product offering is known as ________.* * MongoDB * RDS * MySQL * DynamoDB

DynamoDB

*In RDS, changes to the backup window take effect ________.* * The next day * You cannot back up in RDS. * After 30 minutes * Immediately

Immediately

*You can RDP or SSH in to an RDS instance to see what is going on with the operating system.* * True * False

False

*Which set of RDS database engines is currently available?* * MariaDB, SQL Server, MySQL, Cassandra * Oracle, SQL Server, MySQL, PostgreSQL * PostgreSQL, MariaDB, MongoDB, Aurora * Aurora, MySQL, SQL Server, Cassandra

Oracle, SQL Server, MySQL, PostgreSQL

*Which AWS DB platform is most suitable for OLTP?* * ElastiCache * DynamoDB * RDS * Redshift

RDS

*Which of the following DynamoDB features are chargeable, when using a single region?* (Choose 2) * Incoming Data Transfer * Read and Write Capacity * The number of tables created * Storage of Data

*Read and Write Capacity* *Storage of Data* There will always be a charge for provisioning read and write capacity and the storage of data within DynamoDB, therefore these two answers are correct. There is no charge for the transfer of data into DynamoDB, providing you stay within a single region (if you cross regions, you will be charged at both ends of the transfer). There is no charge for the actual number of tables you can create in DynamoDB, providing the RCU and WCU are set to 0; however, in practice you cannot set these to anything less than 1, so there will always be a nominal fee associated with each table.

*Amazon's ElastiCache uses which two engines?* * Reddit & Memcrush * Redis & Memory * MyISAM & InnoDB * Redis & Memcached

Redis & Memcached

*When creating an RDS instance, you can select the Availability Zone into which you deploy it.* * False * True

True

*With new RDS DB instances, automated backups are enabled by default?* * True * False

True

*What AWS service can you use to manage multiple accounts?* * Use QuickSight * Use Organization * Use IAM * Use roles

*Use Organization* QuickSight is used for visualization. IAM can be leveraged within accounts, and roles are also scoped within accounts.

*If you want to provision your infrastructure in a different region, what is the quickest way to mimic your current infrastructure in a different region?* * Use a CloudFormation template * Make a blueprint of the current infrastructure and provision the same manually in the other region * Use CodeDeploy to deploy the code to the new region * Use the VPC Wizard to lay down your infrastructure in a different region

*Use a CloudFormation template* Creating a blueprint and working backward from there is going to be too much effort. Why would you do that when CloudFormation can do it for you? CodeDeploy is used for deploying code, and the VPC Wizard is used to create VPCs.

*A company wants to build a brand new application on the AWS Cloud. They want to ensure that this application follows the Microservices architecture. Which of the following services can be used to build this sort of architecture?* (Choose 3) * AWS Lambda * AWS ECS * AWS API Gateway * AWS SQS

*AWS Lambda* *AWS ECS* *AWS API Gateway* AWS Lambda is a serverless compute service that allows you to build independent services. The Elastic Container service (ECS) can be used to manage containers. The API Gateway is a serverless component for managing access to APIs.

*To audit the API calls made to your account, which AWS service should you be using?* * AWS Systems Manager * AWS Lambda * AWS CloudWatch * AWS CloudTrail

*AWS CloudTrail* AWS CloudTrail is used to audit the API calls. ----------------------------------- AWS Systems Manager gives you visibility and control of your infrastructure on AWS. AWS Lambda lets you run code without provisioning or managing servers. AWS CloudWatch is used to monitor the AWS resources such as EC2 servers; it provides various metrics in terms of CPU, memory, and so on.

*A company is planning on building an application using the services available on AWS. This application will be stateless in nature, and the service must have the ability to scale according to the demand. Which of the following would be an ideal compute service to use in this scenario?* * AWS DynamoDB * AWS Lambda * AWS S3 * AWS SQS

*AWS Lambda* A stateless application is an application that needs no knowledge of previous interactions and stores no session information. Such an example could be an application that, given the same input, provides the same response to any end user. A stateless application can scale horizontally since any request can be serviced by any of the available compute resources (e.g. EC2 instances, AWS Lambda functions).

*An application with a 150 GB relational database runs on an EC2 instance. While the application is used infrequently with small peaks in the morning and evening, what is the MOST cost effective storage type among the options below?* * Amazon EBS provisioned IOPS SSD * Amazon EBS Throughput Optimized HDD * Amazon EBS General Purpose SSD * Amazon EFS

*Amazon EBS General Purpose SSD* Since the database is used infrequently and not throughout the day, and the question asks for the MOST cost-effective storage type, the preferred choice would be EBS General Purpose SSD over EBS Provisioned IOPS SSD. ----------------------------------- The minimum volume size of Throughput Optimized HDD is 500GB. As per our scenario, we need 150 GB only. Hence, option C, Amazon EBS General Purpose SSD, would be the most cost-effective choice. NOTE: SSD-backed volumes are optimized for transactional workloads involving frequent read/write operations with small I/O size, where the dominant performance attribute is *IOPS*. The question focuses on a relational DB where we give importance to input/output operations per second. Hence gp2 seems to be a good option in this case. Since the question does not mention any mission-critical low-latency requirement, PIOPS is not required. HDD-backed volumes are optimized for large streaming workloads where *throughput* (measured in MiB/s) is a better performance measure than IOPS.

*There is an application which consists of EC2 Instances behind a classic ELB. An EC2 proxy is used for content management to backend instances. The application might not be able to scale properly. Which of the following can be used to scale the proxy and backend instances appropriately?* (Choose 2) * Use Auto Scaling for the proxy servers. * Use Auto Scaling for backend instances. * Replace the Classic ELB with Application ELB. * Use Application ELB for both the frontend and backend instances.

*Use Auto Scaling for the proxy servers.* *Use Auto Scaling for backend instances.* When you see a requirement for scaling, consider the Auto Scaling service provided by AWS. This can be used to scale both the proxy servers and the backend instances.

*A development team wants to deploy a complete serverless application on the AWS Cloud. This application will be invoked by users across the globe. Which of the following services would be an ideal component in such an architecture?* (Choose 2) * AWS Lambda * API Gateway * AWS RDS * AWS EC2

*AWS Lambda* *API Gateway* AWS Lambda is the serverless compute component provided by AWS. One can easily place their running code on this service. And then, the API gateway can be used as an invocation point for the AWS Lambda function.

*When you create an EC2 instance, you select Detailed Monitoring. At what frequency (in minutes) will the instance automatically send metrics to Amazon CloudWatch?* * 1 * 3 * 10 * 15

*1* When Detailed Monitoring is enabled, the metrics are sent every minute. ----------------------------------- The metrics are sent every minute, so the other options are incorrect.

*What is the maximum size of the CIDR block you can have for a VPC?* * 16 * 32 * 28 * 10

*16* The maximum size of a VPC you can have is /16, which corresponds to 65,536 IP addresses.

*What is the minimum size of an General Purpose SSD EBS Volume?* * 1MB * 1GiB * 1byte * 1GB

*1GiB* SSD volumes must be between 1 GiB - 16 TiB.

*How quickly can objects be restored from Glacier?* * 2 hours * 1 hour * 3-5 hours * 30 minutes

*3-5 hours* You can expect most restore jobs initiated via the Amazon S3 APIs or Management Console to complete in 3-5 hours. Expedited restore is available at a price.

*How many IP addresses are reserved by AWS for internal purposes in a CIDR block that you can't use?* * 5 * 2 * 3 * 4

*5* AWS reserves five IP addresses in each subnet for internal purposes: the first four and the last one.

*When you define a CIDR block with an IP address range, you can't use all the IP addresses for its own networking purposes. How many IP addresses does AWS reserve?* * 5 * 2 * 3 * 4

*5* AWS reserves the first four addresses and the last IP address for internal purposes. ----------------------------------- B, C and D are incorrect. AWS reserves the first four addresses and the last IP address of every subnet for internal purposes, so they can't be used by the customers.
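
A quick worked example using Python's standard-library ipaddress module; the /24 subnet is just an illustration.

```python
# Worked example: a /24 subnet has 256 addresses, of which AWS keeps 5
# (network address, VPC router, DNS, reserved for future use, broadcast).
import ipaddress

subnet = ipaddress.ip_network("10.0.1.0/24")
print(subnet.num_addresses)       # 256
print(subnet.num_addresses - 5)   # 251 usable addresses in an AWS subnet
```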

*You have an application running in us-west-2 requiring 6 EC2 Instances running at all times. With 3 Availability Zones in the region, viz. us-west-2a, us-west-2b, and us-west-2c, which of the following deployments provides fault tolerance if any Availability Zone in us-west-2 becomes unavailable?* (Choose 2) * 2 EC2 Instances in us-west-2a, 2 EC2 Instances in us-west-2b, and 2 EC2 Instances in us-west-2c * 3 EC2 Instances in us-west-2a, 3 EC2 Instances in us-west-2b, and no EC2 Instances in us-west-2c * 4 EC2 Instances in us-west-2a, 2 EC2 Instances in us-west-2b, and 2 EC2 Instances in us-west-2c * 6 EC2 Instances in us-west-2a, 6 EC2 Instances in us-west-2b, and no EC2 Instances in us-west-2c * 3 EC2 Instances in us-west-2a, 3 EC2 Instances in us-west-2b, and 3 EC2 Instances in us-west-2c

*6 EC2 Instances in us-west-2a, 6 EC2 Instances in us-west-2b, and no EC2 Instances in us-west-2c* *3 EC2 Instances in us-west-2a, 3 EC2 Instances in us-west-2b, and 3 EC2 Instances in us-west-2c* Option D- US West 2a-6, US West 2b-6, US West 2c-0 If US West 2a goes down we will still have 6 instances in US West 2b. If US West 2b goes down we will still have 6 instances running in US West 2a. If US West 2c goes down we will still have 6 instances running in US West 2a, 6 instances running in US West 2b Option E - US West 2a-3, US West 2b -3, US West 2c-3 If US West 2a goes down we will still have 3 instances running in US West 2b, 3 instances running in US West 2c If US West 2b goes down we will still have 3 instances running in US West 2a, 3 instances running in US West 2c If US West 2c goes down we will still have 3 instances running in US West 2a, 3 instances running in US West 2b ----------------------------------- Option A is incorrect because, even if one AZ becomes unavailable, you would only have 4 instances available. This does not meet the requirements. Option B is incorrect because, in the case of either us-west-2a or us-west-2b becoming unavailable, you would only have 3 instances available. Even this does not meet the specified requirements. Option C is incorrect because, if us-west-2a becomes unavailable, you would only have 4 instances available. This also does not meet the requirements. NOTE: In this scenario we need to have 6 instances running all the time even when 1 AZ is down.

*You have been tasked with creating a VPC network topology for your company. The VPC network must support both internet-facing applications and internal-facing applications accessed only over VPN. Both internet-facing and internal-facing applications must be able to leverage at least 3 AZs for high availability. How many subnets must you create within your VPC to accommodate these requirements?* * 2 * 3 * 4 * 6

*6* Each subnet resides in a single Availability Zone, and you need both a public (internet-facing) subnet and a private (VPN-only) subnet in each of the 3 AZs, so you will need 6 subnets, as in the sketch below.
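A minimal sketch of how such a layout could be provisioned with boto3, assuming a hypothetical VPC (with a 10.0.0.0/16 CIDR) and example subnet CIDRs that are not part of the original question:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

VPC_ID = "vpc-0123456789abcdef0"   # hypothetical VPC with CIDR 10.0.0.0/16
AZS = ["us-west-2a", "us-west-2b", "us-west-2c"]

subnet_ids = {"public": [], "private": []}

for i, az in enumerate(AZS):
    # One public subnet per AZ for the internet-facing tier...
    pub = ec2.create_subnet(VpcId=VPC_ID, AvailabilityZone=az,
                            CidrBlock=f"10.0.{i}.0/24")
    subnet_ids["public"].append(pub["Subnet"]["SubnetId"])

    # ...and one private subnet per AZ for the VPN-only tier.
    priv = ec2.create_subnet(VpcId=VPC_ID, AvailabilityZone=az,
                             CidrBlock=f"10.0.{i + 10}.0/24")
    subnet_ids["private"].append(priv["Subnet"]["SubnetId"])

print(subnet_ids)   # 3 public + 3 private = 6 subnets
```

The public subnets would additionally need a route to an internet gateway, and the private subnets a route to the virtual private gateway; those routes are omitted in this sketch.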

*For which of the following scenarios should a Solutions Architect consider using Elastic Beanstalk?* (Choose 2) * A Web application using Amazon RDS * An Enterprise Data Warehouse * A long running worker process * A Static website * A management task run once on a nightly basis

*A Web application using Amazon RDS.* *A long running worker process.* AWS Documentation clearly mentions that the Elastic Beanstalk component can be used to create Web Server environments and Worker environments.

*You are trying to establish a VPC peering connection with another VPC, and you discover that there seem to be a lot of limitations and rules when it comes to VPC peering. Which of the following is not a VPC peering limitation or rule?* (Choose 2) * A cluster placement group cannot span peered VPCs * You cannot create a VPC peering connection between VPCs with matching or overlapping CIDR blocks. * You cannot have more than one VPC peering connection between the same VPCs at the same time. * You cannot create a VPC peering connection between VPCs in different regions.

*A cluster placement group cannot span peered VPCs.* *You cannot create a VPC peering connection between VPCs in different regions.* Cluster placement groups can span peered VPCs (though not Availability Zones), and in January 2018 AWS introduced inter-Region VPC peering.

*You are building an automated transcription service in which Amazon EC2 worker instances process an uploaded audio file and generate a text file. You must store both these files in the same durable storage until the text file is retrieved. You do not know what the storage capacity requirements are. Which storage option is both cost-efficient and scalable?* * Multiple Amazon EBS Volume with snapshots. * A single Amazon Glacier Vault. * A single Amazon S3 bucket. * Multiple instance stores.

*A single Amazon S3 bucket.* Amazon S3 is the perfect storage solution for audio and text files. It is a highly available and durable storage device.

*You are a developer, and you want to create, publish, maintain, monitor, and secure APIs on a massive scale. You don't want to deal with the back-end resources. What service should you use to do this?* * API Gateway * AWS Lambda * AWS Config * AWS CloudTrail

*API Gateway* Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. ----------------------------------- AWS Lambda lets you run code without provisioning or managing servers. AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. By using this service, you can record AWS Management Console actions and API calls.

*You are deploying a new mobile application and want to deploy everything serverless since you don't want to manage any infrastructure. What services should you choose?* * API Gateway, AWS Lambda, and EBS volumes * API Gateway, AWS Lambda, and S3 * API Gateway, AWS Lambda, and EC2 instances * API Gateway, AWS Lambda, and EC2 instances, and EBS

*API Gateway, AWS Lambda, and S3* API Gateway, AWS Lambda, and S3 are all serverless services. ----------------------------------- EBS volumes and EC2 instances need to be managed manually; therefore, they can't be called serverless.

*A company wants to have a fully managed data store in AWS. It should be a MySQL-compatible database, which is an application requirement. Which of the following database engines can be used for this purpose?* * AWS RDS * AWS Aurora * AWS DynamoDB * AWS Redshift

*AWS Aurora* Amazon Aurora (Aurora) is a fully managed, MySQL- and PostgreSQL-compatible, relational database engine. It combines the speed and reliability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. It delivers up to five times the throughput of MySQL and up to three times the throughput of PostgreSQL without requiring changes to most of your existing applications. *Note:* RDS is a generic service that provides a relational database service supporting six database engines: Aurora, MySQL, MariaDB, PostgreSQL, Oracle and Microsoft SQL Server. The question asks for a MySQL-compatible database from the options provided. Of the options listed, *Amazon Aurora* is a MySQL- and PostgreSQL-compatible enterprise-class database. Hence Option B is the answer.

*A database is required for a Two-Tier application. The data would go through multiple schema changes. The database needs to be durable, ACID compliant and changes to the database should not result in database downtime. Which of the following is the best option for data storage?* * AWS S3 * AWS Redshift * AWS DynamoDB * AWS Aurora

*AWS Aurora* As per AWS documentation, Aurora does support schema changes. Amazon Aurora is a MySQL-compatible database that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. Amazon Aurora has taken a common data definition language (DDL) statement that typically requires hours to complete in MySQL and made it near-instantaneous, i.e. about 0.15 seconds for a 100 GB table on an r3.8xlarge instance. *Note:* Amazon DynamoDB is schema-less, in that the data items in a table need not have the same attributes or even the same number of attributes. Hence it is not a solution. In Aurora, when a user issues a DDL statement: The database updates the INFORMATION_SCHEMA system table with the new schema. In addition, the database timestamps the operation, records the old schema into a new system table (Schema Version Table), and propagates this change to read replicas.

*A company wants to create a standard template for deployment of their infrastructure. These would also be used to provision resources in another region during disaster recovery scenarios. Which AWS service can be used in this regard?* * Amazon Simple Workflow Service * AWS Elastic Beanstalk * AWS CloudFormation * AWS OpsWorks

*AWS CloudFormation* AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion. You can use AWS CloudFormation's sample templates or create your own templates to describe the AWS resources, and any associated dependencies or runtime parameters, required to run your application. You don't need to figure out the order for provisioning AWS services or the subtleties of making those dependencies work. CloudFormation takes care of this for you. After the AWS resources are deployed, you can modify and update them in a controlled and predictable way, in effect applying version control to your AWS infrastructure the same way you do with your software. You can also visualize your templates as diagrams and edit them using a drag-and-drop interface in the AWS CloudFormation Designer.
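As an illustration of the idea (not an official example), a template can be kept in version control and deployed with a single API call; the stack name and the deliberately tiny template below are hypothetical:

```python
import boto3

cloudformation = boto3.client("cloudformation", region_name="us-east-1")

# A minimal template: a single S3 bucket, described as code.
TEMPLATE_BODY = """
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal example stack
Resources:
  AppBucket:
    Type: AWS::S3::Bucket
"""

response = cloudformation.create_stack(
    StackName="example-app-stack",      # hypothetical stack name
    TemplateBody=TEMPLATE_BODY,
)
print(response["StackId"])
```

Because the same template can be deployed to any region simply by changing `region_name`, this is also the mechanism behind the disaster-recovery scenarios in the following questions.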

*You are using the Virginia region as your primary data center for hosting AWS services. You don't use any other region right now, but you want to keep yourself ready for any disaster. During a disaster, you should be able to deploy your infrastructure like your primary region in a matter of minutes. Which AWS service should you use?* * Amazon CloudWatch * Amazon EC2 AMIs with EBS snapshots * Amazon Elastic Beanstalk * AWS CloudFormation

*AWS CloudFormation* AWS CloudFormation gives you an easy way to capture your existing infrastructure as code and then deploy a replica of your existing architecture and infrastructure into a different region in a matter of minutes. ------------------------- Amazon CloudWatch is used for monitoring, and Amazon EC2 AMIs with EBS snapshots would redeploy only the EC2 instances, not the other AWS services you have deployed. Amazon Elastic Beanstalk can't be used to recreate the entire infrastructure in a different region.

*You work as an architect for a consulting company. The consulting company normally creates the same set of resources for their clients. They want a consistent way of building templates, which can be used to deploy the resources to the AWS accounts of the various clients. Which of the following services can help fulfill this requirement?* * AWS Elastic Beanstalk * AWS SQS * AWS CloudFormation * AWS SNS

*AWS CloudFormation* AWS CloudFormation is a service that helps you model and set up your Amazon Web Services resources so that you can spend less time managing those resources and more time focusing on your applications that run in AWS. You create a template that describes all the AWS resources that you want (like Amazon EC2 instances or Amazon RDS DB instances), and AWS CloudFormation takes care of provisioning and configuring those resources for you. ----------------------------------- Option A is invalid because this is good to get a certain set of defined resources up and running. But it cannot be used to duplicate infrastructure as code. Option B is invalid because this is the Simple Queue Service which is used for sending messages. Option D is invalid because this is the Simple Notification service that is used for sending notifications.

*A company has an entire infrastructure hosted on AWS. It wants to create code templates used to provision the same set of resources in another region in case of a disaster in the primary region. Which of the following services can help in this regard?* * AWS Beanstalk * AWS CloudFormation * AWS CodeBuild * AWS CodeDeploy

*AWS CloudFormation* AWS CloudFormation provisions your resources in a safe, repeatable manner, allowing you to build and rebuild your infrastructure and applications without having to perform manual actions or write custom scripts. CloudFormation takes care of determining the right operations to perform when managing your stack, and rolls back changes automatically if errors are detected.

*A company is planning to adopt Infrastructure as Code (IaC) since the priority from senior management is to achieve as much automation as required. Which of the following components would help them achieve this purpose?* * AWS Beanstalk * AWS CloudFormation * AWS CodeBuild * AWS CodeDeploy

*AWS CloudFormation* AWS CloudFormation meets this requirement by letting teams describe their architecture as code in templates. AWS CloudFormation is a service that helps you model and set up your Amazon Web Services resources so that you can spend less time managing those resources and more time focusing on your applications that run on AWS. All you have to do is create a template that describes all the AWS resources that you want (Amazon EC2 instances or Amazon RDS DB instances), and AWS CloudFormation takes care of provisioning and configuring those resources for you.

*Your company is planning on launching a set of EC2 Instances for hosting their production-based web application. As an architect you have to instruct the operations department on which service they can use for the monitoring purposes. Which of the following would you recommend?* * AWS CloudTrail * AWS CloudWatch * AWS SQS * AWS SNS

*AWS CloudWatch* Amazon CloudWatch is a monitoring and management service built for developers, system operators, site reliability engineers (SRE), and IT managers. CloudWatch provides you with data and actionable insights to monitor your applications, understand and respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. CloudWatch collects monitoring and operational data in the form of logs, metrics, and events, providing you with a unified view of AWS resources, applications and services that run on AWS, and on-premises servers. ----------------------------------- Option A is incorrect since this is used for API monitoring. Option C is incorrect since this is used to working with messages in the queue. Option D is incorrect since this is used for sending notifications.
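For example, a basic CPU alarm for one of the production instances could be created with boto3 as sketched below; the alarm name, instance ID, and SNS topic ARN are placeholders, not values from the question:

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-west-2")

cloudwatch.put_metric_alarm(
    AlarmName="web-prod-high-cpu",                     # hypothetical alarm name
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                                        # 5-minute datapoints
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-west-2:123456789012:ops-alerts"],
)
```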

*You had a code deployment over the weekend, and someone pushed something into your instance. Now the web server is not accessible from the Internet. You are looking to find out what change has been made in the system, and you want to track it. Which AWS service are you going to use for this?* * AWS CloudTrail * AWS Config * AWS Lambda * AWS CloudWatch

*AWS Config* AWS Config continuously records configuration changes of AWS resources. ------------------------- CloudTrail is a service that logs all API calls, AWS Lambda lets you run code without provisioning or managing servers, and AWS CloudWatch is used to monitor cloud resources in general.

*Your company authenticates users in a very disconnected network requiring each user to have several username/password combinations for different applications. You've been tasked to consolidate and migrate services to the cloud and reduce the number of usernames and passwords employees need to use. What is your best recommendation?* * AWS Directory Services allows users to sign in with their existing corporate credentials - reducing the need for additional credentials. * Create two Active Directories - one for the cloud and one for on-premises - reducing username/password combinations to two. * Require users to use third-party identity providers to log-in for all services. * Build out Active Directory on EC2 instances to gain more control over user profiles.

*AWS Directory Service allows users to sign in with their existing corporate credentials - reducing the need for additional credentials.* AWS Directory Service enables your end users to use their existing corporate credentials when accessing AWS applications. Once you've been able to consolidate services to AWS, you won't have to create new credentials; you'll be able to let the users use their existing username/password. ----------------------------------- Option B is incorrect, one Active Directory can be used for both on-premises and the cloud; this isn't the best option provided. Option C is incorrect, this won't always reduce the number of username/password combinations. Option D is incorrect, this requires more effort and additional management compared to using a managed service.

*A company wants to have a NoSQL database hosted on the AWS Cloud, but do not have the necessary staff to manage the underlying infrastructure. Which of the following choices would be ideal for this requirement?* * AWS Aurora * AWS RDS * AWS DynamoDB * AWS Redshift

*AWS DynamoDB* Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB lets you offload the administrative burdens of operating and scaling a distributed database, so that you don't have to worry about hardware provisioning, setup and configuration, replication, software patching, or cluster scaling.

*Your company manages an application that currently allows users to upload images to an S3 bucket. These images are picked up by EC2 Instances for processing and then placed in another S3 bucket. You need an area where the metadata for these images can be stored. Which of the following would be an ideal data store for this?* * AWS Redshift * AWS Glacier * AWS DynamoDB * AWS SQS

*AWS DynamoDB* AWS DynamoDB is the best, light-weight and durable storage option for metadata. ----------------------------------- Option A is incorrect because this is normally used for petabyte-scale data warehousing. Option B is incorrect because this is used for archive storage. Option D is incorrect because this is used for messaging purposes.

*A company wants to implement a data store in AWS. The data store needs to meet the following requirements:* *1) Completely managed by AWS* *2) Ability to store JSON objects efficiently* *3) Scale based on demand* *Which of the following would you use as the data store?* * AWS Redshift * AWS DynamoDB * AWS Aurora * AWS Glacier

*AWS DynamoDB* Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB lets you offload the administrative burdens of operating and scaling a distributed database, so that you don't have to worry about hardware provisioning, setup and configuration, replication, software patching, or cluster scaling. It is ideal for storing JSON based objects. ----------------------------------- Option A is incorrect since this is normally used to host a data warehousing solution. Option C is incorrect since this is used to host a MySQL database. Option D is incorrect since this is used for archive storage.

*An application sends images to S3. The metadata for these images needs to be saved in persistent storage and is required to be indexed. Which one of the following can be used for the underlying metadata storage?* * AWS Aurora * AWS S3 * AWS DynamoDB * AWS RDS

*AWS DynamoDB* The most efficient storage mechanism for just storing metadata is DynamoDB. DynamoDB is normally used in conjunction with the Simple Storage service. So, after storing the images in S3, you can store metadata in DynamoDB. You can also create secondary indexes for DynamoDB Tables.
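A minimal sketch of storing image metadata in DynamoDB after the object lands in S3; the table name and attributes are hypothetical:

```python
import boto3
from datetime import datetime, timezone

dynamodb = boto3.resource("dynamodb", region_name="us-west-2")
table = dynamodb.Table("ImageMetadata")          # hypothetical table (partition key: ImageKey)

table.put_item(
    Item={
        "ImageKey": "uploads/2024/cat.jpg",      # S3 object key of the image
        "Bucket": "example-image-bucket",
        "SizeBytes": 204800,
        "ContentType": "image/jpeg",
        "UploadedAt": datetime.now(timezone.utc).isoformat(),
        "Status": "PROCESSED",
    }
)
```

Attributes such as `Status` could then be backed by a global secondary index to support additional query patterns.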

*A company has a requirement to implement block level storage. Each storage device will store around 100 GB of data. Which of the following can be used to fulfill this requirement?* * AWS EBS Volumes * AWS S3 * AWS Glacier * AWS EFS

*AWS EBS Volumes* For block level storage, you need to consider EBS Volumes. ----------------------------------- Options B and C are incorrect since they provide object level storage. Option D is incorrect since this is file level storage.

*What options can be used to host an application that uses NGINX and is scalable at any point in time?* (Choose 2) * AWS EC2 * AWS Elastic Beanstalk * AWS SQS * AWS ELB

*AWS EC2* *AWS Elastic Beanstalk* NGINX is an open source software for web serving, reverse proxying, caching, load balancing etc. It complements the load balancing capabilities of Amazon ELB and ALB by adding support for multiple HTTP, HTTP/2, and SSL/TLS services, content-based routing rules, caching, Auto Scaling support, and traffic management policies. NGINX can be hosted on an EC2 instance through a series of clear steps- Launch an EC2 instance through the console. SSH into the instance and use the command yum install -y nginx to install nginx. Also make sure that it is configured to restart automatically after a reboot. It can also be installed with an Elastic Beanstalk service. To enable the NGINX proxy server with your Tomcat application, you must add a configuration file to .ebextensions in the application source bundle that you upload to Elastic Beanstalk.

*Your company is planning on developing a new application. Your development team needs a quick environment setup in AWS using NGINX as the underlying web server environment. Which of the following services can be used to quickly provision such an environment?* (Please select 2 correct options.) * AWS EC2 * AWS Elastic Beanstalk * AWS SQS * AWS ELB

*AWS EC2* *AWS Elastic Beanstalk* NGINX is an open source software for web serving, reverse proxying, caching, load balancing etc. It complements the load balancing capabilities of Amazon ELB and ALB by adding support for multiple HTTP, HTTP/2, and SSL/TLS services, content-based routing rules, caching, Auto Scaling support, and traffic management policies. NGINX can be hosted on an EC2 instance through a series of clear steps- Launch an EC2 instance through console, SSH into the instance and use the command yum install -y nginx to install nginx. Also, make sure that it is configured to restart automatically after a reboot. It can also be installed with an Elastic Beanstalk service. To enable the NGINX proxy server with your Tomcat application, you must add a configuration file to .ebextensions in the application source bundle that you upload to Elastic Beanstalk.

*Your company is big on building container-based applications. Currently they use Kubernetes for their on-premises Docker-based orchestration. They want to move to AWS and preferably not have to manage the infrastructure for the underlying orchestration service. Which of the following could be used for this purpose?* * AWS DynamoDB * AWS ECS with Fargate * AWS EC2 with Kubernetes installed * AWS Elastic Beanstalk

*AWS ECS with Fargate* Amazon ECS has two modes: Fargate launch type and EC2 launch type. With Fargate launch type, all you have to do is package your application in containers, specify the CPU and memory requirements, define networking and IAM policies, and launch the application. ----------------------------------- Option A is incorrect since this is a fully managed NoSQL database. Option C is incorrect since this would add maintenance overhead for the company, and the question mentions that the company does not want to manage infrastructure. Option D is incorrect since this is used to deploy applications but does not provide a managed container orchestration service.

*You are hosting your application on EC2 servers and using an EBS volume to store the data. Your security team needs all the data in the disk to be encrypted. What should you use to achieve this?* * IAM Access Key * AWS Certificate Manager * AWS KMS API * AWS STS

*AWS KMS API* AWS KMS helps to create and control encryption keys used to encrypt your data. ----------------------------------- IAM Access Key is used to secure access to EC2 servers, AWS Certificate Manager is used to generate SSL certificates, and the Security Token Service (STS) is used to generate temporary credentials.

*The security policy of an organization requires an application to encrypt data before writing to the disk. Which solution should the organization use to meet this requirement?* * AWS KMS API * AWS Certificate Manager * API Gateway with STS * IAM Access Key

*AWS KMS API* AWS Key Management Service (AWS KMS) is a managed service that makes it easy for you to create and control the encryption keys used to encrypt your data. AWS KMS is integrated with other AWS services including Amazon Elastic Block Store (Amazon EBS), Amazon Simple Storage Service (Amazon S3), Amazon Redshift, Amazon Elastic Transcoder, Amazon WorkMail, Amazon Relational Database Service (Amazon RDS), and others, making it simple to encrypt your data with encryption keys that you manage. ----------------------------------- Option B is incorrect - The AWS Certificate Manager can be used to generate SSL certificates to encrypt traffic in transit, but not at rest. Option C is incorrect - It is used for issuing tokens while using the API gateway for traffic in transit. Option D is used for secure access to EC2 Instances.

*A company has a set of EC2 Instances that store critical data on EBS Volumes. The IT security team has now mandated that the data on the disks needs to be encrypted. Which of the following can be used to achieve this purpose?* * AWS KMS API * AWS Certificate Manager * API Gateway with STS * IAM Access Key

*AWS KMS API* AWS Key Management Service (AWS KMS) is a managed service that makes it easy to create and control encryption keys used to encrypt your data. AWS KMS is integrated with other AWS services including Amazon Elastic Block Store (Amazon EBS), Amazon Simple Storage Service (Amazon S3), Amazon Redshift, Amazon Elastic Transcoder, Amazon WorkMail, Amazon Relational Database Service (Amazon RDS), and others, to make it simple to encrypt your data with encryption keys that you manage. ------------------------------ Option B is incorrect. The AWS Certificate Manager can be used to generate SSL certificates to encrypt traffic in transit, but not at rest. Option C is incorrect. This is used for issuing tokens while using the API gateway for traffic in transit. Option D is incorrect. This is used for secure access to EC2 Instances.
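A sketch of creating a KMS-encrypted EBS volume with boto3; the key alias is a placeholder for a customer managed key:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

volume = ec2.create_volume(
    AvailabilityZone="us-west-2a",
    Size=100,                          # GiB
    VolumeType="gp3",
    Encrypted=True,                    # encrypt data at rest
    KmsKeyId="alias/app-data-key",     # hypothetical customer managed KMS key
)
print(volume["VolumeId"])
```

Existing unencrypted volumes can't be encrypted in place; the usual approach is to snapshot them, copy the snapshot with encryption enabled, and create a new volume from the encrypted copy.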

*You are starting a delivery service and planning to use 5,000 cars in the first phase. You are going to leverage an AWS IoT solution for collecting all the data from the cars. Which AWS service can you use to ingest all the data from IoT in real time?* * AWS Kinesis * AWS Lambda * AWS API Gateway * AWS OpsWorks

*AWS Kinesis* AWS Kinesis allows you to ingest the data in real time. ------------------------- AWS Lambda lets you run code without provisioning or managing servers. API Gateway is a managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet.
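A sketch of how one of the cars could push telemetry into a Kinesis data stream; the stream name and payload shape are hypothetical:

```python
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-west-2")

record = {"car_id": "car-0042", "speed_kmh": 63, "fuel_pct": 71, "ts": 1700000000}

kinesis.put_record(
    StreamName="car-telemetry",              # hypothetical stream
    Data=json.dumps(record).encode("utf-8"),
    PartitionKey=record["car_id"],           # keeps each car's records on one shard, in order
)
```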

*You manage the IT users for a large organization that is moving many services to AWS, and you want a seamless way for your employees to log in and use the cloud services. You also want to use AWS Managed Microsoft AD and have been asked if users can access services in the on-premises environment. What do you respond with?* * AWS Managed Microsoft AD requires data synchronization and replication to work properly. * AWS Managed Microsoft AD can only be used for cloud or on-premises environments, not both. * AWS Managed Microsoft AD can be used as the Active Directory over VPN or Direct Connect. * AWS Managed Microsoft AD is 100% the same as Active Directory running on a separate EC2 instance.

*AWS Managed Microsoft AD can be used as the Active Directory over VPN or Direct Connect* Because you want to use AWS Managed Microsoft AD, you want to be certain that your users can use the AWS cloud resources as well as services in your on-premises environment. Once you implement VPN or Direct Connect for connectivity, your AWS Managed Microsoft AD can be used for both cloud services and on-premises services. ----------------------------------- Option A is incorrect, while data can be synchronized from on-premises to the cloud, it is not required. Option B is incorrect, AWS Managed Microsoft AD can be used for both, it is not one or the other. Option D is incorrect, AWS Managed Microsoft AD being a managed service limits some capabilities versus running Active Directory by itself on EC2 instances.

*You have been hired as a consultant for a company to implement their CI/CD processes. They currently use an on-premises deployment of Chef for their configuration management on servers. You need to advise them on what they can use on AWS to leverage their existing capabilities. Which of the following service would you recommend?* * Amazon Simple Workflow Service * AWS Elastic Beanstalk * AWS CloudFormation * AWS OpsWorks

*AWS OpsWorks* AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers. OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed, and managed across your Amazon EC2 instances or on-premises compute environments. OpsWorks has three offerings: AWS OpsWorks for Chef Automate, AWS OpsWorks for Puppet Enterprise, and AWS OpsWorks Stacks. All of the other options are incorrect since the only tool which works effectively with the Chef configuration management tool is AWS OpsWorks.

*Your company has more than 50 accounts on AWS, and you are finding it extremely difficult to manage the cost for each account manually. You want to centralize the billing and cost management for all the accounts across the company. Which service would you choose for this?* * Trusted Advisor * AWS Organization * Billing Console * Centralized Billing

*AWS Organization* AWS Organizations can help you manage all those accounts. ----------------------------------- Trusted Advisor gives advice on how to control cost; it does not help with central billing management. Each account has its own billing console, so with 50 accounts there would be 50 separate billing consoles to manage. There is no AWS service called Centralized Billing.

*You have several AWS accounts within your company. Every business unit has its own AWS account, and because of that, you are having extreme difficulty managing all of them. You want to centrally control the use of AWS services down to the API level across multiple accounts. Which AWS service can you use for this?* * AWS Centralized Billing * Trusted Advisor * AWS Organization * Amazon Quicksight

*AWS Organization* Using AWS Organization, you can centrally control the use of AWS services down to the API level across multiple accounts. ------------------------- There is no service known as AWS Centralized Billing. Trusted Advisor provides real-time guidance to help you provision your resources following AWS best practices. Amazon Quicksight is the business analytics service of AWS.

*A company has a requirement to store 100TB of data to AWS. This data will be exported using AWS Snowball and needs to then reside in a database layer. The database should have the facility to be queried from a business intelligence application. Each item is roughly 500KB in size. Which of the following is an ideal storage mechanism for the underlying data layer?* * AWS DynamoDB * AWS Aurora * AWS RDS * AWS Redshift

*AWS Redshift* For this sheer data size, the ideal storage unit would be AWS Redshift. Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud. You can start with just a few hundred gigabytes of data and scale to a petabyte or more. This enables you to use your data to acquire new insights for your business and customers. The first step to create a data warehouse is to launch a set of nodes, called an Amazon Redshift cluster. After you provision your cluster, you can upload your data set and then perform data analysis queries. Regardless of the size of the data set, Amazon Redshift offers fast query performance using the same SQL-based tools and business intelligence applications that you use today. ----------------------------------- Option A is incorrect because the maximum item size in DynamoDB is 400KB. Option B is incorrect because Aurora supports up to 64TB of data. Option C is incorrect because we can create MySQL, MariaDB, SQL Server, PostgreSQL, and Oracle RDS DB instances with up to 16 TB of storage.

*Below are the requirements for a data store in AWS:* *a) Fully managed* *b) Integration with existing business intelligence tools* *c) High concurrency workload that generally involves reading and writing all columns for a small number of records at a time* *Which of the following would be an ideal data store for the above requirements?* (Choose 2) * AWS Redshift * AWS DynamoDB * AWS Aurora * AWS S3

*AWS Redshift* *AWS DynamoDB* The question asks for: a) fully managed and b) integration with existing business intelligence tools, which AWS Redshift satisfies; and c) a high concurrency workload that generally involves reading and writing all columns for a small number of records at a time, which AWS DynamoDB satisfies. ----------------------------------- Option C is incorrect, AWS Aurora is a relational database and is not suitable for reading and writing a small number of records at a time under high concurrency. Option D is incorrect, AWS S3 cannot be integrated with business intelligence tools.

*You maintain an application which needs to store files in a file system which has the ability to be mounted on various Linux EC2 Instances. Which of the following would be an ideal storage solution?* * Amazon EBS * Amazon S3 * Amazon EC2 Instance store * Amazon EFS

*Amazon EFS* Amazon EFS provides scalable file storage for use with Amazon EC2. You can create an EFS file system and configure your instances to mount the file system. You can use an EFS file system as a common data source for workloads and applications running on multiple instances.

*There is a requirement for 500 messages to be sent and processed in order. Which service can be used in this regard?* * AWS SQS FIFO * AWS SNS * AWS Config * AWS ELB

*AWS SQS FIFO* Amazon SQS is a reliable and highly-scalable managed message queue service for storing messages in transit between application components. FIFO queues complement the existing Amazon SQS standard queues, which offer high throughput, best-effort ordering, and at-least-once delivery. FIFO queues have essentially the same features as standard queues, but provide the added benefits of supporting ordering and exactly-once processing. FIFO queues provide additional features that help prevent unintentional duplicates from being sent by message producers or from being received by message consumers. Additionally, message groups allow multiple separate ordered message streams within the same queue. ----------------------------------- NOTE: Yes, SNS is used to send out messages. SNS is a web service that coordinates and manages the delivery or sending of messages to subscribing endpoints or clients. In Amazon SNS, there are two types of clients-publishers and subscribers-also referred to as producers and consumers. Publishers communicate asynchronously with subscribers by producing and sending a message to a topic, which is a logical access point and communication channel. Subscribers (i.e. web servers, email addresses, Amazon SQS queues, AWS Lambda functions) consume or receive the message or notification over one of the supported protocols (i.e. Amazon SQS, HTTP/S, email, SMS, Lambda) when they are subscribed to the topic. However, SNS provides no way to preserve message order. The question states that "There is a requirement for 500 messages to be sent and *processed in order*"; with SNS, all messages are delivered to all subscribers at the same time.
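A sketch of sending ordered messages to a FIFO queue with boto3; the queue URL and group ID are hypothetical (FIFO queue names must end in .fifo):

```python
import boto3

sqs = boto3.client("sqs", region_name="us-west-2")

QUEUE_URL = "https://sqs.us-west-2.amazonaws.com/123456789012/orders.fifo"  # hypothetical queue

for i in range(1, 501):
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=f"order event {i}",
        MessageGroupId="orders",               # messages in one group are delivered in order
        MessageDeduplicationId=f"order-{i}",   # or enable content-based deduplication on the queue
    )
```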

*A company has a set of Hyper-V and VMware virtual machines. They are now planning on migrating these instances to the AWS Cloud. Which of the following can be used to move these resources to the AWS Cloud?* * DB Migration utility * AWS Server Migration Service * Use AWS Migration Tools * Use AWS Config Tools

*AWS Server Migration Service* AWS Server Migration Service (SMS) is an agentless service which makes it easier and faster for you to migrate thousands of on-premises workloads to AWS. AWS SMS allows you to automate, schedule, and track incremental replication of live server volumes, making it easier for you to coordinate large-scale server migration.

*You are designing the following application in AWS. Users will use the application to upload videos and images. The files will then be picked up by a worker process for further processing. Which of the below services should be used in the design of the application?* (Choose 2) * AWS Simple Storage Service for storing the videos and images * AWS Glacier for storing the videos and images * AWS SNS for distributed processing of messages by the worker process * AWS SQS for distributed processing of messages by the worker process

*AWS Simple Storage Service for storing the videos and images* *AWS SQS for distributed processing of messages by the worker process.* Amazon Simple Storage Service is storage for the Internet. It is designed to make web-scale computing easier for developers. Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. SQS eliminates the complexity and overhead associated with managing and operating message-oriented middleware, and empowers developers to focus on differentiating work. Using SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available. ----------------------------------- Option B is incorrect since this is used for archive storage. Option C is incorrect since this is used as a notification service.

*An application running on EC2 Instances processes sensitive information stored on Amazon S3. This information is accessed over the Internet. The security team is concerned that the Internet connectivity to Amazon S3 could be a security risk. Which solution will resolve the security concerns?* * Access the data through an Internet Gateway. * Access the data through a VPN connection. * Access the data through a NAT Gateway. * Access the data through a VPC endpoint for Amazon S3.

*Access the data through a VPC endpoint for Amazon S3.* A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network. ----------------------------------- Option A is incorrect, an Internet Gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between instances in your VPC and the Internet. Option B is incorrect, a VPN, or Virtual Private Network, allows you to create a secure connection to another network over the Internet. Option C is incorrect, you can use a network address translation (NAT) gateway to enable instances in a private subnet to connect to the Internet or other AWS services, but prevent the internet from initiating a connection with those instances.
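A sketch of creating a gateway VPC endpoint for S3 with boto3; the VPC and route table IDs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

endpoint = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",               # hypothetical VPC
    ServiceName="com.amazonaws.us-west-2.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],     # traffic to S3 now stays on the AWS network
)
print(endpoint["VpcEndpoint"]["VpcEndpointId"])
```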

*An IAM policy contains which of the following?* (Choose two.) * Username * Action * Service name * AZ

*Action* *Service name* An IAM policy specifies the actions that are allowed or denied and the services (resources) they apply to; it is not tied to an Availability Zone or to a particular username. A minimal example is sketched below.
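A minimal sketch of what such a policy document looks like, attached to a user with boto3; the user name, policy name, and bucket are hypothetical:

```python
import json
import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],          # each Action names the service and operation
        "Resource": ["arn:aws:s3:::example-bucket",
                     "arn:aws:s3:::example-bucket/*"],
    }],
}

iam.put_user_policy(
    UserName="analyst",                 # hypothetical IAM user
    PolicyName="ExampleS3ReadOnly",
    PolicyDocument=json.dumps(policy),
)
```

Note that the policy document itself contains actions and service resources (ARNs), not usernames or Availability Zones.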

*If an administrator who has root access leaves the company, what should you do to protect your account?* (Choose two.) * Add MFA to root * Delete all the IAM accounts * Change the passwords for all the IAM accounts and rotate keys * Delete all the EC2 instances created by the administrator

*Add MFA to root* *Change the passwords for all the IAM accounts and rotate keys* Deleting all the IAM accounts would be a far more painful task, and you would lose all the users. Similarly, you can't simply delete all the EC2 instances created by the administrator; they may be running critical applications.

*Which AWS service allows you to manage Docker containers on a cluster of Amazon EC2 servers?* * Amazon Elastic Container Service * Amazon Elastic Docker Service * Amazon Elastic Container Service for Kubernetes * Amazon Elastic Beanstalk

*Amazon Elastic Container Service* Amazon Elastic Container Service (ECS) is a container management service that allows you to manage Docker containers on a cluster of Amazon EC2 servers. ----------------------------------- There is no service called Amazon Elastic Docker Service. Amazon Elastic Container Service for Kubernetes is used for deploying Kubernetes. Amazon Elastic Beanstalk is used for deploying web applications.

*Your Operations department is using an incident-based application hosted on a set of EC2 Instances. These instances are placed behind an Auto Scaling group to ensure the right number of instances are in place to support the application. The Operations department has expressed dissatisfaction with regard to poor application performance at 9:00 AM each day. However, it is also noted that the system performance returns to optimal at 9:45 AM.* *What can be done to ensure that this issue gets fixed?* * Create another Dynamic Scaling Policy to ensure that the scaling happens at 9:00 AM. * Add another Auto Scaling group to support the current one. * Change the Cool Down Timers for the existing Auto Scaling Group. * Add a Scheduled Scaling Policy at 8:30 AM.

*Add a Scheduled Scaling Policy at 8:30 AM* Scheduled Scaling can be used to ensure that the capacity is peaked before 9:00 AM each day. Scaling based on schedule allows you to scale your application in response to predictable load changes. For example, every week the traffic to your web application starts to increase on Wednesday, remains high on Thursday, and starts to decrease on Friday. You can plan your scaling activities based on the predictable traffic patterns of your web application.
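A sketch of such a scheduled action with boto3; the group name and sizes are hypothetical, and the cron-style recurrence is evaluated in UTC by default:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-west-2")

autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="incident-app-asg",      # hypothetical Auto Scaling group
    ScheduledActionName="scale-out-before-9am",
    Recurrence="30 8 * * *",                      # every day at 08:30
    MinSize=4,
    MaxSize=12,
    DesiredCapacity=8,                            # capacity in place before the 9:00 AM spike
)
```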

*You have created an S3 bucket for your application and immediately receive more than 10,000 PUT requests per second. What should you do to ensure optimal performance?* * There is no need to do anything; S3 will automatically handle this. * Create each file in a separate folder. * Use S3 infrequent access. * Add a random prefix to the key names.

*Add a random prefix to the key names.* Also, when you are "putting" a large number of files, uploads should be optimized with multipart uploads, where on the sending side the original file is split into multiple parts, uploaded in parallel, and on the receiving side the file is composed back into a single object. ----------------------------------- A, B, and C are incorrect. If some optimization gives you better performance, then why not do it? If you create a separate folder for each file, it will be a management nightmare. S3 IA won't give you better performance.

*After setting up a VPC peering connection between your VPC and that of your client, the client requests to be able to send traffic between instances in the peered VPCs using private IP addresses. What must you do to make this possible?* * Establish a private peering connection * Add your instance and the client's instance to a Placement Group * Add a route to a Route Table that's associated with your VPC. * Use an IPSec tunnel.

*Add a route to a Route Table that's associated with your VPC.* If a route is added to your Route Table, your client will have access to your instance via private IP address.

*An application consists of the following architecture:* *a) EC2 Instances in a single AZ behind an ELB* *b) A NAT Instance which is used to ensure that instances can download updates from the Internet* *Which of the following can be used to ensure better fault tolerance in this setup?* (Choose 2) * Add more instances in the existing Availability Zone. * Add an Auto Scaling Group to the setup * Add more instances in another Availability Zone. * Add another ELB for more fault tolerance.

*Add an Auto Scaling Group to the setup.* *Add more instances in another Availability Zone.* Adding Auto Scaling to your application architecture is one way to maximize the benefits of the AWS Cloud. When you use Auto Scaling, your applications gain the following benefits: Better fault tolerance. Auto Scaling can detect when an instance is unhealthy, terminate it, and launch an instance to replace it. You can also configure Auto Scaling to use multiple Availability Zones. If one Availability Zone becomes unavailable, Auto Scaling can launch instances in another one to compensate. Better availability. Auto Scaling can help you ensure that your application always has the right amount of capacity to handle the current traffic demands.

*An infrastructure is being hosted in AWS using the following resources:* *a) A couple of EC2 instances serving a Web-Based application* *b) An Elastic Load Balancer in front of the EC2 instances.* *c) An AWS RDS which has Multi-AZ enabled* *Which of the following can be added to the setup to ensure scalability?* * Add another ELB to the setup. * Add more EC2 Instances to the setup. * Enable Read Replicas for the AWS RDS. * Add an Auto Scaling Group to the setup.

*Add an Auto Scaling Group to the setup.* AWS Auto Scaling enables you to configure automatic scaling for the scalable AWS resources for your application in a matter of minutes. AWS Auto Scaling uses the Auto Scaling and Application Auto Scaling services to configure scaling policies for your scalable AWS resources.

*The application workload changes constantly, and to meet that, you keep on changing the hardware type for the application server. Because of this, you constantly need to update the web server with the new IP address. How can you fix this problem?* * Add a load balancer * Add an IPv6 IP address * Add an EIP to it * Use a reserved EC2 instance

*Add an EIP to it* Even if you reserve the instance, you still need to remap the IP address. Even with IPv6 you need to remap the IP addresses. The load balancer won't help because the load balancer also needs to be remapped with the new IP addresses.

*You are working as an AWS Architect for a start-up company. They have a production website which is two-tier, with web servers in the front end & database servers in the back end. All these database servers are spread across multiple Availability Zones & are stateful instances. You have configured an Auto Scaling Group for these servers with a minimum of 2 instances & a maximum of 6 instances. During scale in of these instances post peak hours, you are observing data loss from these database servers. What feature needs to be configured additionally to avoid data loss & copy data before instance termination?* * Modify cooldown period to complete custom actions before Instance termination. * Add lifecycle hooks to Auto scaling group. * Customise Termination policy to complete data copy before termination. * Suspend Termination process which will avoid data loss.

*Add lifecycle hooks to Auto scaling group.* Adding lifecycle hooks to the Auto Scaling group puts instances into a wait state before termination. During this wait state, you can perform custom activities to retrieve critical operational data from a stateful instance. The default wait period is 1 hour. ----------------------------------- Option A is incorrect as the cooldown period will not help to copy data from the instance before termination. Option C is incorrect as the termination policy is used to specify which instances to terminate first during scale in; configuring a termination policy for the Auto Scaling group will not copy data before instance termination. Option D is incorrect as suspending the Terminate process will not prevent data loss, but will disrupt other processes & prevent scale in.
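A sketch of adding such a termination lifecycle hook with boto3; the group and hook names are hypothetical:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-west-2")

autoscaling.put_lifecycle_hook(
    LifecycleHookName="copy-data-before-terminate",        # hypothetical hook name
    AutoScalingGroupName="db-tier-asg",                     # hypothetical Auto Scaling group
    LifecycleTransition="autoscaling:EC2_INSTANCE_TERMINATING",
    HeartbeatTimeout=3600,        # instance waits up to 1 hour in Terminating:Wait
    DefaultResult="CONTINUE",     # proceed with termination if no response arrives
)
```

During the wait state a script (triggered, for example, via SNS or EventBridge) can copy data off the instance and then call `complete_lifecycle_action` to let the termination proceed.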

*You currently have the following architecture in AWS:* *a. A couple of EC2 instances located in us-west-2a* *b. The EC2 Instances are launched via an Auto Scaling group.* *c. The EC2 Instances sit behind a Classic ELB.* *Which of the following additional steps should be taken to ensure the above architecture conforms to a well-architected framework?* * Convert the Classic ELB to an Application ELB. * Add an additional Auto Scaling Group. * Add additional EC2 Instances to us-west-2a. * Add or spread existing instances across multiple Availability Zones.

*Add or spread existing instances across multiple Availability Zones.* Balancing resources across Availability Zones is a best practice for well architected applications, as this greatly increases aggregate system availability. Auto Scaling automatically balances EC2 instances across zones when you configure multiple zones in your Auto Scaling group settings. Auto Scaling always launches new instances such that they are balanced between zones as evenly as possible across the entire fleet.

*You have the following architecture deployed in AWS:* *a) A set of EC2 Instances which sits behind an ELB* *b) A database hosted in AWS RDS* *Of late, the performance on the database has been slacking due to a high number of read requests. Which of the following can be added to the architecture to alleviate the performance issue?* (Choose 2) * Add read replica to the primary database to offload read traffic. * Use ElastiCache in front of the database. * Use AWS CloudFront in front of the database. * Use DynamoDB to offload all the reads. Populate the common items in a separate table.

*Add read replica to the primary database to offload read traffic.* *Use ElastiCache in front of the database.* AWS says "Amazon RDS Read Replicas provide enhanced performance and durability for database (DB) instances. *This feature makes it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads.* You can create one or more replicas of a given source DB Instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput." Amazon ElastiCache is an in-memory cache which can be used to cache common read requests. ----------------------------------- Option C is incorrect because CloudFront is a valuable component for scaling a website, especially for geo-distributed workloads, and is more advanced than the given architecture requires. Option D is incorrect because it would add latency and require additional application changes as well.

*You're an architect for the company. Your IT admin staff needs access to newly created EC2 Instances for administrative purposes. Which of the following needs to be done to ensure that the IT admin staff can successfully connect via port 22 to the EC2 Instances?* * Adjust Security Group to permit egress traffic over TCP port 443 from your IP. * Configure the IAM role to permit changes to security group settings. * Modify the instance security group to allow ingress of ICMP packets from your IP. * Adjust the instance's Security Group to permit ingress traffic over port 22. * Apply the most recently released Operating System security patches.

*Adjust the instance's Security Group to permit ingress traffic over port 22.* A security group acts as a virtual firewall that controls the traffic for one or more instances. When you launch an instance, you associate one or more security groups with the instance. You add rules to each security group that allow traffic to or from its associated instances. For connecting via SSH on EC2, you need to ensure that port 22 is open on the security group for EC2 instance. ----------------------------------- Option A is wrong, because port 443 is for HTTPS and not for SSH. Option B is wrong because IAM role is not pertinent to security groups. Option C is wrong because this is relevant to ICMP and not SSH. Option E is wrong because it does not matter what patches are there on the system.

*In AWS Route 53, which of the following are true?* * Alias Records provide a Route 53-specific extension to DNS functionality. * R53 Alias Records allow fast response to AWS initiated environmental changes. * Route53 allows you to create a CNAME at the top node of a DNS namespace (zone apex) * A CNAME record assigns an Alias name to an IP address. * A CNAME record assigns an Alias name to a Canonical Name. * Alias Records can point at any resource with a Canonical Name.

*Alias Records provide a Route 53-specific extension to DNS functionality.* *A CNAME record assigns an Alias name to a Canonical Name.* Mapping an alias name to a canonical name is standard DNS behaviour as defined in the RFCs; by design, CNAME records are not intended to point at IP addresses, and standard DNS does not allow a CNAME at the zone apex, which is one reason Route 53 alias records exist.

*If you encrypt a database running in RDS, what objects are going to be encrypted?* * The entire database * The database backups and snapshot * The database log files * All of the above

*All of the above* When you encrypt a database, everything gets encrypted, including the database, backups, logs, read replicas, snapshots, and so on.

*You work as an architect for a company. An application is going to be deployed on a set of EC2 instances in a VPC. The instances will be hosting a web application. You need to design the security group to ensure that users have the ability to connect from the Internet via HTTPS. Which of the following needs to be configured for the security group?* * Allow Inbound access on port 443 for 0.0.0.0/0 * Allow Outbound access on port 443 for 0.0.0.0/0 * Allow Inbound access on port 80 for 0.0.0.0/0 * Allow Outbound access on port 80 for 0.0.0.0/0

*Allow Inbound access on port 443 for 0.0.0.0/0* A *security group* acts as a virtual firewall for your instance to control inbound and outbound traffic. When you launch an instance in a VPC, you can assign up to five security groups to the instance. Security groups act at the instance level, not the subnet level. Therefore, each instance in a subnet in your VPC could be assigned to a different set of security groups. ----------------------------------- Option B is incorrect since security groups are stateful, so you don't need to define a rule for the outbound return traffic. Options C and D are incorrect since you only need to ensure access for HTTPS, hence you should not configure rules for port 80.
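A sketch of adding that inbound rule with boto3; the security group ID is a placeholder:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",     # hypothetical web-tier security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTPS from the internet"}],
    }],
)
```

Because security groups are stateful, the return traffic for these connections is allowed automatically, so no matching outbound rule is required.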

*You are running a highly available application in AWS. The business needs a very performant shared file system that can be shared across EC2 servers (web servers). Which AWS service can solve this problem?* * Amazon EFS * Amazon EBS * Amazon EC2 instance store * Amazon S3

*Amazon EFS* Only EFS can be mounted across several EC2 instances at the same time. It provides the shared file system capability. ------------------------- EBS can be mounted with one EC2 instance at any point in time, and it can't be mounted with multiple EC2 instances. An EC2 instance store is the local storage within the EC2 server, which is also known as ephemeral storage. This can't be mounted to any other EC2 server. Amazon S3 is an object store and not a file system and can't be mounted to an EC2 server as a file system.

*You have created a VPC with a CIDR block of 200.0.0.0/16. You launched an EC2 instance in the public subnet, and you are hosting your web site from that EC2. You have already configured the security groups correctly. What do you need to do from network ACLs so that the web site is accessible from your home network of 192.168.1.0/24?* * Allow inbound traffic from source 192.168.1.0/24 on port 443. * Allow inbound traffic from 192.168.1.0/24 on port 80 and outbound traffic to destination 192.168.1.0/24 on an ephemeral port. * Allow inbound traffic from 192.168.1.0/24 on port 443 and outbound traffic to destination 192.168.1.0/24 on 443. * Allow inbound traffic from 192.168.1.0/24 on port 80.

*Allow inbound traffic from 192.168.1.0/24 on port 80 and outbound traffic to destination 192.168.1.0/24 on an ephemeral port.* Because network ACLs are stateless, you need to allow both inbound and outbound traffic between the subnet and your network. Since you need to access the web site from your home, you must allow inbound traffic on port 80; the return traffic goes back to an ephemeral port chosen by the client on the home network, so you must also allow outbound traffic to 192.168.1.0/24 on ephemeral ports. ----------------------------------- A and D are incorrect because they do not have an option for outbound traffic. C is incorrect because the outbound traffic is restricted to port 443.

*You have created a VPC using the VPC wizard with a CIDR block of 100.0.0.0/16. You selected a private subnet and a VPN connection using the VPC wizard and launched an EC2 instance in the private subnet. Now you need to connect to the EC2 instance via SSH. What do you need to connect to the EC2 instance?* * Allow inbound traffic on port 22 on your network. * Allow inbound traffic on ports 80 and 22 to the private subnet. * Connect to the instance on a private subnet using NAT instance. * Create a public subnet and from there connect to the EC2 instance.

*Allow inbound traffic on port 22 on your network.* SSH runs on port 22; therefore, you need to allow inbound access on port 22. ----------------------------------- Since you have already created a VPN while creating the VPC, the VPC is already connected with your network. Therefore, you can reach the private subnet directly from your network. The port on which SSH runs is 22, so you need to provide access to port 22.

*A company is migrating an on-premises 5 TB MySQL database to AWS and expects its database size to increase steadily. Which Amazon RDS engine meets these requirements?* * MySQL * Microsoft SQL Server * Oracle * Amazon Aurora

*Amazon Aurora* Amazon Aurora (Aurora) is a fully managed, MySQL- and PostgreSQL-compatible, relational database engine. It combines the speed and reliability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. It delivers up to five times the throughput of MySQL and up to three times the throughput of PostgreSQL without requiring changes to most of your existing applications. All Aurora Replicas return the same data for query results with minimal replica lag, usually much less than 100 milliseconds after the primary instance has written an update. *Note:* On a MySQL DB instance, avoid letting tables in your database grow too large. Provisioned storage limits restrict the maximum size of a MySQL table file to 16 TB. However, based on database usage, your Amazon Aurora storage will automatically grow, from the minimum of 10 GB up to 64 TB, in 10 GB increments, with no impact on database performance.

*A company is migrating an on-premises 10 TB MySQL database. With a business requirement that the replica lag be under 100 milliseconds, the company expects this database to quadruple in size. Which Amazon RDS engine meets the above requirements?* * MySQL * Microsoft SQL Server * Oracle * Amazon Aurora

*Amazon Aurora* Amazon Aurora (Aurora) is a fully managed, MySQL- and PostgreSQL-compatible, relational database engine. It combines the speed and reliability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. It delivers up to five times the throughput of MySQL and up to three times the throughput of PostgreSQL without requiring changes to most of your existing applications. All Aurora Replicas return the same data for query results with minimal replica lag, usually much less than 100 milliseconds after the primary instance has written an update. The company expects the database to quadruple in size, and the business requirement is that replica lag must be kept under 100 milliseconds. An Aurora cluster can grow up to 64 TB in size, and replica lag is typically less than 100 milliseconds after the primary instance has written an update.

*You are planning to migrate your own on-premise MySQL database to AWS. The size of the database is 12TB. You are expecting that the database size will become three times that by the end of the year. Your business is also looking for a replica lag of less than 120 milliseconds. Which Amazon RDS engine meets these requirements?* * MySQL * Oracle * Amazon Aurora * Microsoft SQL Server

*Amazon Aurora* Amazon Aurora offers the least replication lag since six copies of the data are mirrored across three AZs. Moreover, Aurora offers the largest storage size (64 TB) compared to any other RDS engine. ------------------------- MySQL won't be able to support the 36 TB database size in RDS. Oracle and SQL Server don't offer read replicas.

*You are planning to run a mission-critical online order-processing system on AWS, and to run that application, you need a database. The database must be highly available and high performing, and you can't lose any data. Which database meets these criteria?* * Use an Oracle database hosted in EC2 * Use Amazon Aurora * Use Redshift * Use RDS MySQL

*Amazon Aurora* Amazon Aurora stores six copies of the data across three AZs. It provides up to five times the performance of RDS MySQL. ------------------------- If you host an Oracle database on EC2 servers, it will be much more expensive compared to Amazon Aurora, and you need to manage it manually. Amazon Redshift is a solution for a data warehouse.

*You want to deploy an application in AWS. The Application comes with an Oracle database that needs to be installed on a separate server. The application requires that certain files be installed on the database server. You are also looking at faster performance for the database. What solution should you choose to deploy the database?* * Amazon RDS for Oracle * Amazon EC2 with magnetic EBS volumes * Amazon EC2 with SSD-based EBS volumes * Migrate Oracle Database to DynamoDB

*Amazon EC2 with SSD-based EBS volumes* RDS does not provide operating system access; therefore, with RDS you won't be able to install application-specific files on the database server. It has to be EC2. Since you are looking for faster performance and SSD provides better performance than magnetic volumes, you should choose EC2 with SSD-based EBS volumes. ------------------------- Since you need the database to be fast, a magnetic-based EBS volume won't provide the performance you are looking for. Oracle Database is a relational database, whereas DynamoDB is a NoSQL database; migrating from one to the other would be a Herculean task. In some cases it may not even be possible, and you may have to rewrite the entire application.

*An application requires a highly available relational database with an initial storage capacity of 8 TB. This database will grow by 8 GB every day. To support the expected traffic, at least eight read replicas will be required to handle the database reads. Which of the below options meets these requirements?* * DynamoDB * Amazon S3 * Amazon Aurora * Amazon Redshift

*Amazon Aurora* Aurora Replicas are independent endpoints in an Aurora DB cluster, best used for scaling read operations and increasing availability. Up to 15 Aurora Replicas can be distributed across the Availability Zones that a DB cluster spans within an AWS Region. The DB cluster volume is made up of multiple copies of the data for the DB cluster. However, the data in the cluster volume is represented as a single, logical volume to the primary instance and to Aurora Replicas in the DB cluster. As a result, all Aurora Replicas return the same data for query results with minimal replica lag, usually much less than 100 milliseconds after the primary instance has written an update. Replica lag varies depending on the rate of database change. That is, during periods where a large amount of write operations occur for the database, you might see an increase in replica lag. Aurora Replicas work well for read scaling because they are fully dedicated to read operations on your cluster volume. Write operations are managed by the primary instance. Because the cluster volume is shared among all DB instances in your DB cluster, minimal additional work is required to replicate a copy of the data for each Aurora Replica. To increase availability, you can use Aurora Replicas as failover targets. That is, if the primary instance fails, an Aurora Replica is promoted to the primary instance. There is a brief interruption during which read and write requests made to the primary instance fail with an exception, and the Aurora Replicas are rebooted. If your Aurora DB cluster doesn't include any Aurora Replicas, then your DB cluster will be unavailable for the duration it takes your DB instance to recover from the failure event. However, promoting an Aurora Replica is much faster than recreating the primary instance. For high-availability scenarios, we recommend that you create one or more Aurora Replicas. These should be the same DB instance class as the primary instance and in different Availability Zones for your Aurora DB cluster. *Note:* You can't create an encrypted Aurora Replica for an unencrypted Aurora DB cluster. You can't create an unencrypted Aurora Replica for an encrypted Aurora DB cluster. For details on how to create an Aurora Replica, see Adding Aurora Replicas to a DB cluster. *Replication with Aurora MySQL* In addition to Aurora Replicas, you have the following options for replication with Aurora MySQL: Two Aurora MySQL DB clusters in different AWS Regions, by creating an Aurora Read Replica of an Aurora MySQL DB cluster in a different AWS Region. Two Aurora MySQL DB clusters in the same Region, by using MySQL binary log (binlog) replication. An Amazon RDS MySQL DB instance as the master and an Aurora MySQL DB cluster, by creating an Aurora Read Replica of an Amazon RDS MySQL DB instance. Typically, this approach is used for migration to Aurora MySQL, rather than for ongoing replication.

*You are responsible for deploying a critical application to AWS. It is required to ensure that the controls for this application meet PCI compliance. Also, there is a need to monitor web application logs to identify any activity. Which of the following services can be used to fulfill this requirement?* (Choose 2) * Amazon CloudWatch Logs * Amazon VPC Flow Logs * Amazon AWS Config * Amazon CloudTrail

*Amazon CloudWatch Logs* *Amazon CloudTrail* AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting. You can use Amazon CloudWatch Logs to monitor, store, and access your log files from Amazon Elastic Compute Cloud (Amazon EC2) instances, AWS CloudTrail, Amazon Route 53, and other sources. You can then retrieve the associated log data from CloudWatch Logs.

*There is an urgent requirement to monitor some database metrics for a database hosted on AWS and send notifications. Which AWS services can accomplish this?* (Choose 2) * Amazon Simple Email Service * Amazon CloudWatch * Amazon Simple Queue Service * Amazon Route53 * Amazon Simple Notification Service

*Amazon CloudWatch* *Amazon Simple Notification Service* Amazon CloudWatch will be used to monitor the database metrics (such as IOPS) from the RDS instance, and Amazon Simple Notification Service will be used to send the notification if an alarm is triggered.
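A minimal sketch of wiring the two services together, assuming placeholder names (the topic name, e-mail address, DB identifier, and threshold below are illustrative only):

```python
import boto3

sns = boto3.client("sns")
cloudwatch = boto3.client("cloudwatch")

# Create a topic and subscribe an operator e-mail address to it
topic_arn = sns.create_topic(Name="db-alerts")["TopicArn"]
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="ops@example.com")

# Alarm on an RDS metric; the alarm action publishes to the SNS topic
cloudwatch.put_metric_alarm(
    AlarmName="rds-high-read-iops",
    Namespace="AWS/RDS",
    MetricName="ReadIOPS",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "mydb"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1000.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[topic_arn],
)
```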

*I want to store JSON objects. Which database should I choose?* * Amazon Aurora for MySQL * Oracle hosted on EC2 * Amazon Aurora for PostgreSQL * Amazon DynamoDB

*Amazon DynamoDB* A JSON object is best stored in a NoSQL database; DynamoDB natively supports document data (map and list attributes). Amazon Aurora for MySQL, Amazon Aurora for PostgreSQL, and Oracle are relational databases.
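A minimal sketch, assuming a pre-existing table named Orders with partition key order_id (both are placeholders); a nested JSON document maps directly onto DynamoDB map and list attributes:

```python
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Orders")  # assumed existing table with partition key "order_id"

# Store a nested JSON document as map/list attributes
table.put_item(
    Item={
        "order_id": "1001",
        "customer": {"name": "Alice", "country": "DE"},
        "items": [
            {"sku": "A-1", "qty": 2},
            {"sku": "B-7", "qty": 1},
        ],
    }
)
```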

*An application with a 150 GB relational database runs on an EC2 Instance. This application will be used frequently, with high database read and write requests. What is the most cost-effective storage type for this application?* * Amazon EBS Provisioned IOPS SSD * Amazon EBS Throughput Optimized HDD * Amazon EBS General Purpose SSD * Amazon EFS

*Amazon EBS Provisioned IOPS SSD* The question is focusing on the most cost-effective storage option for the application. Provisioned IOPS SSD volumes are used for applications that require high input/output operations per second and are mainly used with large databases such as MongoDB, Cassandra, Microsoft SQL Server, MySQL, PostgreSQL, and Oracle. Throughput Optimized HDD, although cheaper than Provisioned IOPS, is designed for throughput-intensive workloads such as data warehousing, big data, and log processing. Provisioned IOPS SSD (io1) volumes are designed to meet the needs of I/O-intensive workloads, particularly database workloads that are sensitive to storage performance and consistency. *Note:* As a Solutions Architect, we need to understand the nature of the application and its requirements. The question says that "An application with a 150 GB relational database runs on an EC2 Instance. This application will be used frequently with a lot of database reads and writes." Since it requires high reads and writes, we need to go with Provisioned IOPS to satisfy the application's needs; General Purpose SSD won't be able to handle that workload reliably. Hence Option A is the right choice.
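A sketch of provisioning such a volume with boto3 (the Availability Zone, size, IOPS value, instance ID, and device name are placeholders, not values from the question):

```python
import boto3

ec2 = boto3.client("ec2")

# Create a 150 GiB Provisioned IOPS SSD (io1) volume with 5,000 provisioned IOPS
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=150,
    VolumeType="io1",
    Iops=5000,
)

# Wait until the volume is available, then attach it to the database instance
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",  # placeholder instance ID
    Device="/dev/sdf",
)
```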

*You have an application for which you are thinking of using EC2 to host the Oracle database. The size of the database is 100GB. Since the application needs operating system access in the database tier, you can't use RDS. The application will be used infrequently, though sometimes it will be used in the morning and during the evening. What is the most cost-effective way to design the storage layer?* * Amazon S3 * Amazon EBS General Purpose SSD * Amazon EBS Provisioned IOPS SSD * Amazon EBS Throughput Optimized HDD

*Amazon EBS Throughput Optimized HDD* Since the application will be used infrequently and the goal is to select a cost-optimized storage option, Throughput Optimized HDD is the best choice. ----------------------------------- A, B and C are incorrect. Amazon S3 is an object store, so it can't be used to host a database. General Purpose SSD and Provisioned IOPS SSD are going to cost a lot more than the Throughput Optimized HDD.

*In which of the following services can you have root-level access to the operating system?* (Choose two.) * Amazon EC2 * Amazon RDS * Amazon EMR * Amazon DynamoDB

*Amazon EC2* *Amazon EMR* Only Amazon EC2 and EMR provide root-level access. ----------------------------------- Amazon RDS and Amazon DynamoDB are managed services where you don't have root-level access.

*You are deploying an application to track the GPS coordinates of delivery trucks in the United States. Coordinates are transmitted from each delivery truck once every three seconds. You need to design an architecture that will enable real-time processing of these coordinates from multiple consumers. Which service should you use to implement data ingestion?* * Amazon Kinesis * AWS Data Pipeline * Amazon AppStream * Amazon Simple Queue Service

*Amazon Kinesis* Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. Amazon Kinesis offers key capabilities to cost-effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application. With Amazon Kinesis, you can ingest real-time data such as video, audio, application logs, website clickstreams, and IoT telemetry data for machine learning, analytics, and other applications. Amazon Kinesis enables you to process and analyze data as it arrives and respond instantly instead of having to wait until all your data is collected before processing can begin.
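A sketch of the producer side of such a pipeline (the stream name and payload fields are illustrative placeholders); partitioning by truck ID keeps each truck's coordinates in order within its shard:

```python
import json
import boto3

kinesis = boto3.client("kinesis")

def send_coordinates(truck_id, lat, lon):
    # Each record is partitioned by truck so a given truck's points stay in order
    kinesis.put_record(
        StreamName="truck-coordinates",  # assumed existing stream
        Data=json.dumps({"truck_id": truck_id, "lat": lat, "lon": lon}).encode("utf-8"),
        PartitionKey=truck_id,
    )

send_coordinates("truck-42", 38.8977, -77.0365)
```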

*Which EC2 offering provides the information needed to launch an instance?* * Spot Fleet * Amazon Machine Image * CloudWatch * Bootstrap Action

*Amazon Machine Image* An AMI provides all the information required to launch an instance. You need to specify an AMI when launching an instance, and you can launch as many instances as you need from the AMI. ------------------------- Using Spot Fleet, you can launch multiple EC2 Spot Instances, but internally they also need an AMI. Even when CloudWatch alarms trigger the launch of EC2 instances (for example, through Auto Scaling), an AMI is still needed. A bootstrap action is a script that runs while an instance is launching; it does not provide the information needed to launch the instance.

*You have an RDS database that has moderate I/O requirements. Which storage medium would be best to accommodate these requirements?* * Amazon RDS Magnetic Storage * Amazon RDS Cold Storage * Amazon RDS Elastic Storage * Amazon RDS General Purpose (SSD) Storage

*Amazon RDS General Purpose (SSD) Storage* Amazon RDS General Purpose (SSD) Storage would be the most suitable. It offers cost-effective storage that is ideal for a broad range of workloads.

*You are designing a highly scalable and available web application, and you are using EC2 instances along with Auto Scaling to host the web application. You want to store the session state data in such a way that it does not impact Auto Scaling. Which services could you use to store the session state data?* (Choose three.) * Amazon EC2 * Amazon RDS * Amazon ElastiCache * Amazon DynamoDB

*Amazon RDS* *Amazon ElastiCache* *Amazon DynamoDB* You can choose an Amazon RDS instance, Amazon DynamoDB, or Amazon ElastiCache to store the session state data. ------------------------- If you store the session information on the EC2 servers, then you need to wait for all the users to log off before shutting down an instance. In that case, Auto Scaling won't be able to terminate the instance even if only one user is connected. If you instead use RDS, DynamoDB, or ElastiCache to store the session state information, EC2 instances can be started and stopped by Auto Scaling as per the scaling policies.

*A company is generating large datasets with millions of rows to be summarized column-wise. To build daily reports from these data sets, Business Intelligence tools would be used. Which storage service meets these requirements?* * Amazon Redshift * Amazon RDS * ElastiCache * DynamoDB

*Amazon Redshift* Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud. You can start with just a few hundred gigabytes of data and scale to a petabyte or more. This enables you to use your data to acquire new insights for your business and customers. Columnar storage for database tables is an important factor in optimizing analytic query performance because it drastically reduces the overall disk I/O requirements and reduces the amount of data you need to load from disk. Amazon Redshift uses a block size of 1 MB, which is more efficient and further reduces the number of I/O requests needed to perform any database loading or other operations that are part of query execution.

*I have to run my analytics, and to optimize I want to store all the data in columnar format. Which database serves my need?* * Amazon Aurora for MySQL * Amazon Redshift * Amazon DynamoDB * Amazon Aurora for Postgres

*Amazon Redshift* Amazon Redshift stores all the data in columnar format. Amazon Aurora for MySQL and PostgreSQL store the database in row format, and Amazon DynamoDB is a NoSQL database.

*A company has an application that stores images and thumbnails on S3. The thumbnails need to be available for download immediately. Additionally, both the images and the thumbnails are not accessed frequently. Which is the most cost-efficient storage option that meets the above-mentioned requirements?* * Amazon Glacier with Expedited Retrievals. * Amazon S3 Standard Infrequent Access. * Amazon EFS. * Amazon S3 Standard.

*Amazon S3 Standard Infrequent Access.* Amazon S3 Infrequent access is perfect if you want to store data that is not frequently accessed. It is more cost effective than Option D (Amazon S3 Standard). If you choose Amazon Glacier with Expedited Retrievals, you defeat the whole purpose of the requirement, because of its increased cost.

*Which two features of Amazon S3 and Glacier provide unmatched durability, availability, and scalability?* (Choose two.) * Amazon S3 Standard, Amazon S3 Standard IA, Amazon S3 One Zone IA, and Glacier automatically store objects across a minimum of three AZs. * Amazon S3 Standard, Amazon S3 Standard IA, and Glacier automatically store objects across a minimum of three AZs. * Amazon S3 automatically replicates the content of the bucket to a second region for HA. * Amazon S3's cross-region replication automatically replicates every object uploaded to Amazon S3 to a bucket in a different region.

*Amazon S3 Standard, Amazon S3 Standard IA, and Glacier automatically store objects across a minimum of three AZs.* *Amazon S3's cross-region replication automatically replicates every object uploaded to Amazon S3 to a bucket in a different region.* Amazon S3 Standard, Amazon S3 Standard IA, and Glacier automatically store objects across a minimum of three AZs, and by using the cross-region replication feature, you can automatically replicate any object uploaded to a bucket to a different region. ----------------------------------- A and C are incorrect. S3 One Zone IA keeps the data within a single AZ and does not replicate it outside that AZ. By default, data never leaves a region; this is a fundamental principle with any AWS service. The customer needs to choose when and how to move the data outside a region.

*You are designing a storage solution for a publishing company. The publishing company stores lots of documents in Microsoft Word and in PDF format. The solution you are designing should be able to provide document-sharing capabilities so that anyone who is authorized can access the file and versioning capability so that the application can maintain several versions of the same file at any point in time. Which AWS service meets this requirement?* * Amazon S3 * Amazon EFS * Amazon EBS * Amazon RDS

*Amazon S3* Amazon S3 provides sharing as well as version control capabilities. ------------------------- Amazon EFS and EBS are not correct answers because if you decide to store the documents in EFS or EBS, your cost will go up, and you need to write an application on top that provides the versioning and sharing capabilities. Though EFS is a shared file system, in this case you are looking at the sharing capability of the application and not of the file system. Amazon RDS is a relational database offering, and storing documents in a relational database is not the right solution.

*You are building a custom application to process the data for your business needs. You have chosen Kinesis Data Streams to ingest the data. What are the various destinations where your data can go?* (Choose 3) * Amazon S3 * Amazon Elastic File System * Amazon EMR * Amazon Redshift * Amazon Glacier

*Amazon S3* *Amazon EMR* *Amazon Redshift* The destinations for Kinesis Data Streams are services such as Amazon Simple Storage Service (Amazon S3), Amazon Redshift, Amazon EMR, and AWS Lambda. ------------------------- Amazon EFS is not a Kinesis Data Streams destination, and you can't send files to Amazon Glacier directly.

*You are developing a small application via which users can register for events. As part of the registration process, you want to send a one-time text message confirming registration. Which AWS service should you be using to do this?* * Amazon SNS * Amazon SQS * AWS STS * API Gateway

*Amazon SNS* You can use Amazon SNS to send text messages, or SMS messages, to SMS-enabled devices. You can send a message directly to a phone number. You can also send a message to multiple phone numbers at the same time by subscribing those phone numbers to a topic and sending your message to the topic. ----------------------------------- Amazon SQS is a fully managed message queuing service. The AWS Security Token Service (STS) is a web service that enables you to request temporary limited-privilege credentials for AWS IAM users or for users who you authenticate via federation. API Gateway is a managed service that is used to create APIs.
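A minimal sketch of the direct-to-phone-number case (the phone number and message text are placeholders):

```python
import boto3

sns = boto3.client("sns")

# Publish a one-off confirmation text directly to a phone number (E.164 format)
sns.publish(
    PhoneNumber="+15555550100",  # placeholder number
    Message="Thanks for registering for the event!",
)
```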

*Your company has an application that takes care of uploading, processing and publishing videos posted by users. The current architecture for this application includes the following:* *a) A set of EC2 Instances to transfer user uploaded videos to S3 buckets.* *b) A set of EC2 worker processes to process and publish the videos.* *c) An Auto Scaling Group for the EC2 worker processes* *Which of the following can be added to the architecture to make it more reliable?* * Amazon SQS * Amazon SNS * Amazon CloudFront * Amazon SES

*Amazon SQS* Amazon SQS is used to decouple systems. It can store requests to process videos to be picked up by the worker processes. Amazon Simple Queue Service (Amazon SQS) offers a reliable, highly scalable hosted queue for storing messages as they travel between applications or microservices. It moves data between distributed application components and helps you decouple these components.
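A sketch of how the upload tier and the worker tier could be decoupled with SQS (the queue name, bucket, and key below are illustrative placeholders):

```python
import json
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="video-processing")["QueueUrl"]

# Upload tier: enqueue a pointer to the video that was uploaded to S3
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps({"bucket": "uploads-bucket", "key": "videos/cat.mp4"}),
)

# Worker tier: long-poll for work, process it, then delete the message
response = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=20)
for message in response.get("Messages", []):
    job = json.loads(message["Body"])
    # ... transcode and publish the video referenced by job ...
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```

Because the queue buffers requests, worker instances in the Auto Scaling Group can fail or scale without losing any pending videos.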

*In Amazon RDS, who is responsible for patching the database?* * Customer. * Amazon. * In RDS you don't have to patch the database. * RDS does not come under the shared security model.

*Amazon.* RDS does come under a shared security model. Since it is a managed service, the patching of the database is taken care of by Amazon.

*You have suggested moving your company's web servers to AWS, but your supervisor is concerned about cost. Which of the following deployments will give you the most scalable and cost-effective solution?* * A hybrid solution that leverages on-premise resources * An EC2 auto-scaling group that will expand and contract with demand * A solution that's built to run 24/7 at 100% capacity, using a fixed number of T2 Micro instances * None of these options

*An EC2 auto-scaling group that will expand and contract with demand.* An Auto Scaling group of EC2 instances will exactly match the demand placed on your servers, allowing you to pay only for the compute capacity you actually need.

*Your company is designing an app that needs to store data in DynamoDB and has registered the app with identity providers so users can sign in using third parties like Google and Facebook. What must be in place so that the app can obtain temporary credentials to access DynamoDB?* * Multi-factor authentication must be used to access DynamoDB * AWS CloudTrail needs to be enabled to audit usage. * An IAM role allowing the app to have access to DynamoDB * The user must additionally log into the AWS console to gain database access.

*An IAM role allowing the app to have access to DynamoDB* The user will have to assume a role that has permissions to interact with DynamoDB. ----------------------------------- Option A is incorrect; multi-factor authentication is available, but not required. Option B is incorrect; CloudTrail is recommended for auditing but is not required. Option D is incorrect; a second log-in to the management console is not required.

*Your development team has created a web application that needs to be tested on a VPC. You need to advise the IT admin team on how they should implement the VPC to ensure the application can be accessed from the Internet. Which of the following would be part of the design?* (Choose 3) * An Internet Gateway attached to the VPC * A NAT gateway attached to the VPC * Route table entry added for the Internet gateway * All instances launched with a public IP

*An Internet Gateway attached to the VPC* *Route table entry added for the Internet Gateway* *All instances launched with a public IP* ----------------------------------- Option B is incorrect since this should be used for communication of instances in the private subnet to the Internet.
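A sketch of the three design elements with boto3 (the VPC, subnet, and route table IDs are placeholders, not values from the question):

```python
import boto3

ec2 = boto3.client("ec2")
VPC_ID, SUBNET_ID, ROUTE_TABLE_ID = "vpc-111", "subnet-222", "rtb-333"  # placeholders

# 1. Internet Gateway attached to the VPC
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=VPC_ID)

# 2. Route table entry sending Internet-bound traffic to the gateway
ec2.create_route(RouteTableId=ROUTE_TABLE_ID, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)

# 3. Instances launched in the subnet receive a public IP automatically
ec2.modify_subnet_attribute(SubnetId=SUBNET_ID, MapPublicIpOnLaunch={"Value": True})
```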

*Your supervisor asks you to create a decoupled application whose process includes dependencies on EC2 instances and servers located in your company's on-premise data center. Which of the following would you include in the architecture?* * An SQS queue as the messaging component between the Instances and servers * An SNS topic as the messaging component between the Instances and servers * An Elastic Load balancer to distribute requests to your EC2 Instance * Route 53 resource records to route requests based on failure

*An SQS queue as the messaging component between the Instances and servers* Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. SQS eliminates the complexity and overhead associated with managing and operating message oriented middleware, and empowers developers to focus on differentiating work. Using SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available. ----------------------------------- Option B is incorrect since this is a notification service. Option C is incorrect since there is no mention in the question on adding any fault tolerance. Option D is incorrect since there is no mention in the question on adding any failure detection.

*You are planning on hosting a static website on an EC2 Instance. You need to ensure that the environment is highly available and scalable to meet demand. Which of the below aspects can be used to create a highly available environment?* (Choose 3) * An auto scaling group to recover from EC2 instance failures. * Elastic Load Balancer * An SQS queue * Multiple Availability Zones

*An auto scaling group to recover from EC2 instance failures.* *Elastic Load Balancer* *Multiple Availability Zones* The ELB is placed in front of the users and helps direct traffic to the EC2 Instances. The EC2 Instances are placed as part of an Auto Scaling Group, and multiple subnets are mapped to multiple Availability Zones. ----------------------------------- For a static web site, SQS is not required to build such an environment. If you have a system such as an order-processing system, which has that sort of queuing of requests, then that could be a candidate for using SQS queues.

*What happens if you delete an IAM role that is associated with a running EC2 instance?* * Any application running on the instance that is using the role will be denied access immediately. * The application continues to use that role until the EC2 server is shut down. * The application will have the access until the session is alive. * The application will continue to have access.

*Any application running on the instance that is using the role will be denied access immediately.* The application will be denied access.

*Your Development team wants to start making use of EC2 Instances to host their Application and Web servers. In the space of automation, they want the Instances to always download the latest version of the Web and Application servers when they are launched. As an architect, what would you recommend for this scenario?* * Ask the Development team to create scripts which can be added to the User Data section when the instance is launched. * Ask the D

*Ask the Development team to create scripts which can be added to the User Data section when the instance is launched.* When you launch an instance in Amazon EC2, you have the option of passing user data to the instance that can be used to perform common automated configuration tasks and even run scripts after the instance starts. You can pass two types of user data to Amazon EC2: shell scripts and cloud-init directives. You can also pass this data into the launch wizard as plain text, as a file (this is useful for launching instances using the command line tools), or as base64-encoded text (for API calls).
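A sketch of passing such a script through the User Data field at launch time (the AMI ID, bucket, and download commands are illustrative placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Runs as root on first boot; here it pulls the latest web/app server build
user_data = """#!/bin/bash
yum update -y
aws s3 cp s3://my-release-bucket/latest/webserver.tar.gz /opt/
mkdir -p /opt/webserver && tar -xzf /opt/webserver.tar.gz -C /opt/webserver
/opt/webserver/start.sh
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,               # boto3 base64-encodes the script for you
)
```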

*You are in the process of deploying an application on an EC2 instance. The application must call AWS APIs. What is the secured way of passing the credentials to the application?* * Use a DynamoDB to store the API credentials. * Assign IAM roles to the EC2 instance. * Keep API credentials as an object in Amazon S3. * Pass the API credentials to the instance using the instance user data.

*Assign IAM roles to the EC2 instance.* You need to use IAM roles to pass the credentials to the application in a secure way. ----------------------------------- If you store the API credentials in DynamoDB or S3, the EC2 instance still needs credentials to connect to DynamoDB or S3 in the first place, which again has to be done via IAM roles. Passing the API credentials to the instance using the instance user data is not a solution because API credentials may change or you may change EC2 servers.
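To illustrate why roles are the secure option: with an instance profile attached, the SDK obtains temporary credentials automatically, so nothing sensitive is stored on the instance or in the code. A minimal sketch (the table name is a placeholder):

```python
import boto3

# No access keys anywhere in the code or on disk: boto3 automatically picks up
# the temporary credentials provided by the EC2 instance's IAM role.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("AppData")  # placeholder table name

table.put_item(Item={"id": "123", "status": "processed"})
```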

*Your development team just finished developing an application on AWS. This application is created in .NET and is hosted on an EC2 instance. The application currently accesses a DynamoDB table and is now going to be deployed to production. Which of the following is the ideal and most secure way for the application to access the DynamoDB table?* * Pass API credentials to the instance using instance user data. * Store API credentials as an object in Amazon S3. * Embed the API credentials into your JAR files. * Assign IAM roles to the EC2 instances.

*Assign IAM roles to the EC2 instances.* You can use roles to delegate access to users, applications, or services that don't normally have access to your AWS resources. It is not a best practice to embed IAM credentials in any production application. It is always a good practice to use IAM roles.

*You want to have a static public IP address for your EC2 instance running in a public subnet. How do you achieve this?* * Use a public IP address. * Attach an EIP to the instance. * Use a private IP address. * Attach an elastic load balancer with the EC2 instance and provide the ELB address.

*Attach an EIP to the instance.* An EIP is going to provide you with a static public IP address. ------------------------- Even if you use the public IP address, it is dynamic, and it is going to change whenever the instance is stopped. You can't use a private IP address since the requirement is a static public IP address; moreover, a private IP address is not reachable from the Internet. An ELB does not solve this either; if you shut down the instance, a new public IP address will be issued to it when it restarts. You need an EIP.
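A sketch of allocating and associating an Elastic IP (the instance ID is a placeholder):

```python
import boto3

ec2 = boto3.client("ec2")

# Allocate a static public IPv4 address for use in a VPC
allocation = ec2.allocate_address(Domain="vpc")

# Associate it with the instance; the address survives stop/start cycles
ec2.associate_address(
    AllocationId=allocation["AllocationId"],
    InstanceId="i-0123456789abcdef0",  # placeholder instance ID
)
```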

*You have an EC2 instance placed inside a subnet. You have created the VPC from scratch and added the EC2 Instance to the subnet. It is required to ensure that this EC2 Instance has complete access to the Internet, since it will be used by users on the Internet. Which of the following options would help accomplish this?* * Launch a NAT Gateway and add routes for 0.0.0.0/0 * Attach a VPC Endpoint and add routes for 0.0.0.0/0 * Attach an Internet Gateway and add routes for 0.0.0.0/0 * Deploy NAT Instances in a public subnet and add routes for 0.0.0.0/0

*Attach an Internet Gateway and add routes for 0.0.0.0/0* An Internet Gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between instances in your VPC and the Internet. It therefore imposes no availability risks or bandwidth constraints on your network traffic.

*You plan on creating a VPC from scratch and launching EC2 Instances in the subnet. What should be done to ensure that the EC2 Instances are accessible from the Internet?* * Attach an Internet Gateway to the VPC and add a route for 0.0.0.0/0 to the Route table. * Attach an NAT Gateway to the VPC and add a route for 0.0.0.0/0 to the Route table. * Attach an NAT Gateway to the VPC and add a route for 0.0.0.0/32 to the Route table. * Attach an Internet Gateway to the VPC and add a route for 0.0.0.0/32 to the Route table.

*Attach an Internet Gateway to the VPC and add a route for 0.0.0.0/0 to the Route table.* When you create a VPC from scratch, there is no default Internet gateway attached; hence you need to create and attach one.

*A company has assigned two web server instances to an Elastic Load Balancer inside a custom VPC. However, the instances are not accessible via URL to the elastic load balancer serving the web app data from the EC2 instances. How might you resolve the issue so that your instances are serving the web app data to the public internet?* * Attach an Internet Gateway to the VPC and route it to the subnet. * Add an Elastic IP address to the instance. * Use Amazon Elastic Load Balancer to serve requests to your instances located in the internal subnet. * None of the above.

*Attach an Internet Gateway to the VPC and route it to the subnet.* If the Internet Gateway is not attached to the VPC, which is a prerequisite for the instances to be accessed from the Internet, the instances will not be reachable.

*A data processing application in AWS must pull data from an Internet service. A Solutions Architect is to design a highly available solution to access this data without placing bandwidth constraints on the application traffic. Which solution meets these requirements?* * Launch a NAT gateway and add routes for 0.0.0.0/0 * Attach a VPC endpoint and add routes for 0.0.0.0/0 * Attach an Internet gateway and add routes for 0.0.0.0/0 * Deploy NAT instances in a public subnet and add routes for 0.0.0.0/0

*Attach an Internet gateway and add routes for 0.0.0.0/0* An Internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between instances in your VPC and the Internet. It therefore imposes no availability risks or bandwidth constraints on your network traffic. ----------------------------------- *NOTE:* A NAT gateway is also a highly available component and is used to enable instances in a private subnet to connect to the Internet or other AWS services, while preventing the Internet from initiating a connection with those instances. However, it can only scale up to 45 Gbps, and a NAT instance's bandwidth depends on its instance type. VPC endpoints are used to enable private connectivity to services hosted in AWS from within your VPC without using an Internet gateway, VPN, Network Address Translation (NAT) devices, or firewall proxies, so they cannot be used to connect to the Internet. *NOTE:* A Network Address Translation (NAT) gateway is recommended for instances in a private subnet to connect to the Internet or other AWS services. Since there is no mention of applications in a private subnet, and the question is about the entire application traffic rather than a specific instance inside a private subnet, NAT can't be the answer to this question.

*You have been tasked to create a public subnet for a VPC. What should you do to make sure the subnet is able to communicate to the Internet?* (Choose two.) * Attach a customer gateway. * In the AWS console, right-click the subnet and then select the Make Public option. * Attach an Internet gateway to the VPC. * Create a route in the route table of the subnet allowing a route out of the Internet gateway.

*Attach an Internet gateway to the VPC.* *Create a route in the route table of the subnet allowing a route out of the Internet gateway.* Attaching an Internet gateway to the VPC and adding a route to it in the subnet's route table is what makes the subnet public; without the route, traffic cannot flow in and out of the subnet. ------------------------- A customer gateway is used for a VPN connection, and there is no concept of right-clicking a subnet to make it public.

*You have created a custom subnet, but you forgot to add a route for Internet connectivity. As a result, all the web servers running in that subnet don't have any Internet access. How will you make sure all the web servers can access the Internet?* * Attach a virtual private gateway to the subnet for destination 0.0.0.0/0 * Attach an Internet gateway to the subnet for destination 0.0.0.0/0 * Attach an Internet gateway to the security group of the EC2 instance for destination 0.0.0.0/0 * Attach a VPC endpoint to the subnet

*Attach an Internet gateway to the subnet for destination 0.0.0.0/0* You need to attach an Internet gateway (and add a route for destination 0.0.0.0/0 in the subnet's route table) so that the subnet can talk with the Internet. ------------------------- A virtual private gateway is used to create a VPN connection. You cannot attach an Internet gateway to the security group of an EC2 instance; it has to be done at the subnet level. A VPC endpoint is used so that S3 or DynamoDB can communicate with the Amazon VPC, bypassing the Internet.

*An organization hosts a multi-language website on AWS, which is served using CloudFront. The language is specified in the HTTP request as shown below: http://d11111f8.cloudfront.net/main.html?language=de http://d11111f8.cloudfront.net/main.html?language=en http://d11111f8.cloudfront.net/main.html?language=es How should AWS CloudFront be configured to deliver cached data in the correct language?* * Forward cookies to the origin * Based on query string parameters * Cache objects at the origin * Serve dynamic content

*Based on query string parameters* Since the language is specified in the query string parameters, CloudFront should be configured to forward and cache based on them.

*You are about to delete the second snapshot of an EBS volume which had 10 GiB of data at the time when the very first snapshot was taken. 6GiB of that data has changed before the second snapshot was created. An additional 2 GiB of the data have been added before the third and final snapshot was taken. Which of the following statements about that is correct?* * Each EBS volume snapshot is a full backup of the complete data and independent of other snapshots. You can go ahead and delete the second snapshot to save costs. After that, you are charged only 22 GiB of data for the two remaining snapshots. * After deletion, the total storage required for the two remaining snapshots is 12 GiB; 10 GiB for the first and 2 GiB for the last snapshot. * Snapshots are incremental backups, which means that only the blocks on the device that have changed after your most recent snapshot are saved. Therefore, you can only delete them in reverse chronological order, i.e. starting with third snapshot and then the second one. * Before deletion, the total storage required for the three snapshots was 18 GiB of which the second one had 6 GiB of data. After the deletion of that second snapshot, you are still charged for storing 18 GiB of data -10 GiB from the very first snapshot and 8 GiB (6 + 2) of data from the last snapshot.

*Before deletion, the total storage required for the three snapshots was 18 GiB of which the second one had 6 GiB of data. After the deletion of that second snapshot, you are still charged for storing 18 GiB of data - 10 GiB from the very first snapshot and 8 GiB (6 + 2) of data from the last snapshot.* When you delete a snapshot, only the data unique to that snapshot is removed. Each snapshot contains or references all of the information needed to restore your data (from the moment when the snapshot was taken) to a new EBS volume.

*A new VPC with CIDR range 10.10.0.0/16 has been set up with a public and a private subnet; an Internet Gateway and a custom route table have been created, and a route has been added with the 'Destination' as '0.0.0.0/0' and the 'Target' as the Internet Gateway (igw-id). A new Linux EC2 instance has been launched in the public subnet with the auto-assign public IP option enabled, but when trying to SSH into the machine, the connection fails. What could be the reason?* * Elastic IP is not assigned. * Both the subnets are associated with the main route table; no subnet is explicitly associated with the custom route table which has the Internet gateway route. * Public IP address is not assigned. * None of the above.

*Both the subnets are associated with the main route table; no subnet is explicitly associated with the custom route table which has the Internet gateway route.* Option B: whenever a subnet is created, by default it is associated with the main route table. We need to explicitly associate the subnet with the custom route table if routes different from the main route table are required. ----------------------------------- Option A: An Elastic IP address is a public IPv4 address with which you can mask the failure of an instance or software by rapidly remapping the address to another instance in your account. If your instance does not have a public IPv4 address, you can associate an Elastic IP address with your instance to enable communication with the Internet; for example, to connect to your instance from your local computer. For this problem statement, the EC2 instance is launched with auto-assign public IP enabled, so an Elastic IP is not a necessity to connect from the Internet. Option C: the problem statement clearly states that the EC2 instance is launched with auto-assign public IP enabled, so this option cannot be true.

*Amazon Web Services offers 4 different levels of support. Which of the following are valid support levels?* * Corporate * Business * Free Tier * Developer * Enterprise

*Business* *Developer* *Enterprise* The correct answers are Enterprise, Business, Developer. The 4th level is basic. ----------------------------------- Remember that Free Tier is a billing rebate not an account type or support level.

*How can you create a cluster of EC2 instances?* * By creating all the instances within a VPC * By creating all the instances in a public subnet * By creating all the instances in a private subnet * By creating a placement group

*By creating a placement group* You can create the placement group within the VPC or within the private or public subnet.

*You have created a web server in the public subnet, and now anyone can access the web server from the Internet. You want to change this behavior and just have the load balancer talk with the web server and no one else. How do you achieve this?* * By removing the Internet gateway * By adding the load balancer in the route table * By allowing the load balancer access in the NACL of the public subnet * By modifying the security group of the instance and just having the load balancer talk with the web server

*By modifying the security group of the instance and just having the load balancer talk with the web server* By modifying the instance's security group so that it only allows traffic from the load balancer (for example, by referencing the load balancer's security group as the source), no one else can reach the web server directly. ----------------------------------- Removing the Internet gateway means web connections via the load balancer won't be able to reach the instance either. Adding the load balancer to the route table does not restrict who can reach the instance. An NACL can allow or block certain traffic by CIDR and port, but in this scenario you won't be able to use an NACL to single out the load balancer.
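A sketch of locking the web server's security group down to the load balancer's security group (both group IDs are placeholders; any existing 0.0.0.0/0 rule on port 80 would also have to be revoked):

```python
import boto3

ec2 = boto3.client("ec2")
WEB_SG = "sg-0aaa111"  # security group of the web server (placeholder)
ELB_SG = "sg-0bbb222"  # security group of the load balancer (placeholder)

# Allow HTTP only when the source is the load balancer's security group
ec2.authorize_security_group_ingress(
    GroupId=WEB_SG,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 80,
        "ToPort": 80,
        "UserIdGroupPairs": [{"GroupId": ELB_SG}],
    }],
)
```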

*You have created a VPC with two subnets. The web servers are running in a public subnet, and the database server is running in a private subnet. You need to download an operating system patch to update the database server. How you are going to download the patch?* * By attaching the Internet Gateway to the private subnet temporarily * By using a NAT gateway * By using peering to another VPC * By changing the security group of the database server and allowing Internet access

*By using a NAT gateway* The database server is running in a private subnet, and anything running in a private subnet should never face the Internet directly. A NAT gateway lets the database server initiate outbound connections (for example, to download the patch) without being reachable from the Internet. ----------------------------------- Even if you peer to another VPC, you can't really connect to the Internet without using a NAT instance or a NAT gateway. Even if you change the security group of the database server and allow all incoming traffic, it still won't be able to connect to the Internet because the database server is running in the private subnet and the private subnet is not attached to the Internet gateway.
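A sketch of giving the private subnet outbound access through a NAT gateway (the subnet, Elastic IP allocation, and route table IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# The NAT gateway lives in the PUBLIC subnet and uses an Elastic IP
nat = ec2.create_nat_gateway(
    SubnetId="subnet-public-111",        # placeholder public subnet
    AllocationId="eipalloc-0123456789",  # placeholder Elastic IP allocation
)
nat_id = nat["NatGateway"]["NatGatewayId"]
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

# The PRIVATE subnet's route table sends Internet-bound traffic to the NAT gateway
ec2.create_route(
    RouteTableId="rtb-private-222",      # placeholder private route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_id,
)
```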

*How can your VPC talk with DynamoDB directly?* * By using a direct connection * By using a VPN connection * By using a VPC endpoint * By using an instance in the public subnet

*By using a VPC endpoint* Direct Connect and VPN are used to connect your corporate data center to AWS, whereas DynamoDB is a service running within AWS. Even if you use an instance in a public subnet to connect with DynamoDB, the traffic still goes over the Internet, so you won't be able to reach DynamoDB while bypassing the Internet. A VPC gateway endpoint lets the VPC talk to DynamoDB directly over the AWS network.
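A sketch of creating a gateway endpoint for DynamoDB (the VPC ID, route table ID, and region in the service name are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Gateway endpoint: DynamoDB traffic from the VPC stays on the AWS network
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",                   # placeholder VPC ID
    ServiceName="com.amazonaws.us-east-1.dynamodb",  # adjust the region as needed
    RouteTableIds=["rtb-0123456789abcdef0"],         # placeholder route table
)
```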

*Your large scientific organization needs to use a fleet of EC2 instances to perform high-performance, CPU-intensive calculations. Your boss asks you to choose an instance type that would best suit the needs of your organization. Which of the following instance types should you recommend?* * C4 * D2 * R3 * M3

*C4* C-class instances are recommended for high-performance front-end fleets, web servers, batch processing, distributed analytics, high performance science and engineering applications, ad serving, MMO gaming, and video-encoding. The best answer would be to use a C4 instance.

*You are running your company's web site on AWS. You are using a fleet of EC2 servers along with ELB and Auto Scaling and S3 to host the web site. Your web site hosts a lot of photos and videos as well as PDF files. Every time a user looks at a video or photo or a PDF file, it is served from S3 or from EC2 servers, and sometimes this causes performance problems. What can you do to improve the user experience for the end users?* * Cache all the static content using CloudFront. * Put all the content in S3 One Zone. * Use multiple S3 buckets. Don't put more than 1,000 objects in one S3 bucket. * Add more EC2 instances for better performance.

*Cache all the static content using CloudFront* In this case, caching the static content with CloudFront is going to provide the best performance and a better user experience. ------------------------- S3 One Zone provides less durability and does not address performance. Using multiple buckets is not going to solve the performance problem. Even if you use more EC2 servers, the content will still be served from either S3 or the EC2 servers, which is not going to solve the performance problem.

*A website runs on EC2 Instances behind an Application Load Balancer. The instances run in an Auto Scaling Group across multiple Availability Zones and deliver large files that are stored on a shared Amazon EFS file system. The company needs to avoid serving the files from EC2 Instances every time a user requests these digital assets. What should the company do to improve the user experience of their website?* * Move the digital assets to Amazon Glacier. * Cache static content using CloudFront. * Resize the images so that they are smaller. * Use reserved EC2 Instances.

*Cache static content using CloudFront.* Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations. When a user requests content that you're serving with CloudFront, the user is routed to the edge location that provides the lowest latency (time delay), so that content is delivered with the best possible performance. If the content is already in the edge location with the lowest latency, CloudFront delivers it immediately. ----------------------------------- Glacier is not used for frequent retrievals, so Option A is not a good solution. Options C & D will also not help in this situation.

*You have a fleet of EC2 instances hosting your application and running on an on-demand model that is billed hourly. For running your application, you have enabled Auto Scaling for scaling up and down per the workload, and while reviewing the Auto Scaling events, you notice that the application is scaling up and down multiple times in the same hour. What would you do to optimize the cost while preserving elasticity at the same time?* (Choose two.) * Use a reserved instance instead of an on-demand instance to optimize cost. * Terminate the longest-running EC2 instance by modifying the Auto Scaling group termination policy. * Change the Auto Scaling group cool-down timers. * Terminate the newest running EC2 instance by modifying the Auto Scaling group termination policy. * Change the CloudWatch alarm period that triggers an Auto Scaling scale-down policy.

*Change the Auto Scaling group cool-down timers.* *Change the CloudWatch alarm period that triggers an Auto Scaling scale-down policy.* Since the EC2 instances are billed on an hourly basis, if instances are terminated and relaunched a couple of times an hour, you have to pay for a full hour every time one is launched. So, if an instance is launched four times during an hour, you will have to pay for four hours; therefore, you should change the Auto Scaling cool-down time so that the EC2 instance remains up for at least an hour. Similarly, you should change the CloudWatch alarm period so that it does not trigger the scale-down policy before an hour is up. ----------------------------------- A, B, and D are incorrect. You can't use a reserved instance since you don't know the optimal number of EC2 instances that you will always be running. B is incorrect because even if you terminate the longest-running EC2 instance, it won't help you to optimize cost. D is incorrect because by terminating the newest running EC2 instance, you won't save additional cost.
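A sketch of both adjustments (the group name, alarm name, threshold, and policy ARN are illustrative placeholders):

```python
import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

SCALE_DOWN_POLICY_ARN = "arn:aws:autoscaling:us-east-1:123456789012:scalingPolicy:example"  # placeholder

# Lengthen the cool-down so instances are not terminated shortly after launch
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",  # placeholder group name
    DefaultCooldown=3600,            # seconds
)

# Make the scale-down alarm evaluate CPU over a longer window before firing
cloudwatch.put_metric_alarm(
    AlarmName="web-asg-scale-down",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=12,            # 12 x 5 min = 1 hour below threshold
    Threshold=20.0,
    ComparisonOperator="LessThanThreshold",
    AlarmActions=[SCALE_DOWN_POLICY_ARN],
)
```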


*Currently, you're helping design and architect a highly available application. After building the initial environment, you discover that a part of your application does not work correctly until port 443 is added to the security group. After adding port 443 to the appropriate security group, how much time will it take before the changes are applied and the application begins working correctly?* * Generally, it takes 2-5 minutes in order for the rules to propagate. * Immediately after a reboot of the EC2 Instances belonging to that security group. * Changes apply instantly to the security group, and the application should be able to respond to 443 requests. * It will take 60 seconds for the rules to apply to all Availability Zones within the region.

*Changes apply instantly to the security group, and the application should be able to respond to 443 requests.* Some systems for setting up firewalls let you filter on source ports, but security groups let you filter only on destination ports. When you add or remove rules, they are automatically applied to all instances associated with the security group.

*Which two automation platforms are supported by AWS OpsWorks?* * Chef * Puppet * Slack * Ansible

*Chef* *Puppet* AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. AWS OpsWorks for Chef Automate is a fully managed configuration management service that hosts Chef Automate, a suite of automation tools from Chef for configuration management, compliance and security, and continuous deployment. AWS OpsWorks for Puppet Enterprise is a fully managed configuration management service that hosts Puppet Enterprise, a set of automation tools from Puppet for infrastructure and application management. ------------------------- OpsWorks does not support Slack or Ansible.

*You know that you need 24 CPUs for your production server. You also know that your compute capacity is going to remain fixed until next year, so you need to keep the production server up and running during that time. What pricing option would you go with?* * Choose the spot instance * Choose the on-demand instance * Choose the three-year reserved instance * Choose the one-year reserved instance

*Choose the one-year reserved instance* You won't choose a spot instance because the spot instance can be taken away at any time by giving notice. On-demand won't give you the best pricing since you know you will be running the server all the time for next year. Since you know the computation requirement is only for one year, you should not go with a three-year reserved instance. Rather, you should go for a one-year reserved instance to get the maximum benefit.

*You want to host an application on EC2 and want to use the EBS volume for storing the data. One of the requirements is to encrypt the data at rest. What is the cheapest way to achieve this?* * Make sure when the data is written to the application that it is encrypted. * Configure encryption when creating the EBS volume. That way, data will be encrypted at rest. * Copy the data from EBS to S3 and encrypt the S3 buckets. * Use a third-party tool to encrypt the data.

*Configure encryption when creating the EBS volume. That way, data will be encrypted at rest.* You can configure the encryption option for EBS volumes while creating them. When data is written to an encrypted volume, it will be encrypted. ------------------------------ The requirement is that the data be encrypted at rest, and when the application is writing the data, it is not yet at rest. If you copy the data from EBS to S3, then only the data residing in S3 will be encrypted, whereas the use case is to encrypt the data residing in the EBS volume. Since EBS provides the option to encrypt the data, why spend money on a third-party tool?
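A sketch of creating such a volume (the Availability Zone, size, and volume type are placeholders; omitting KmsKeyId falls back to the account's default EBS KMS key):

```python
import boto3

ec2 = boto3.client("ec2")

# Data written to this volume (and snapshots taken from it) is encrypted at rest
ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,
    VolumeType="gp2",
    Encrypted=True,  # uses the default aws/ebs KMS key unless KmsKeyId is given
)
```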

*A consulting firm repeatedly builds large architectures for their customers using AWS resources from several AWS services, including IAM, Amazon EC2, Amazon RDS, DynamoDB and Amazon VPC. The consultants have architecture diagrams for each of their architectures, and they are frustrated that they cannot use them to automatically create their resources. Which service would provide immediate benefits to the organization?* * AWS Beanstalk * AWS CloudFormation * AWS CodeBuild * AWS CodeDeploy

*AWS CloudFormation* AWS CloudFormation meets the requirement in the question and enables the consultants to turn their architecture diagrams into CloudFormation templates. AWS CloudFormation is a service that helps you model and set up your Amazon Web Services resources so that you can spend less time managing those resources and more time focusing on your applications that run in AWS. You create a template that describes all the AWS resources that you want (like Amazon EC2 instances or Amazon RDS DB instances), and AWS CloudFormation takes care of provisioning and configuring those resources for you. When you are building a large architecture repeatedly, you can create or modify an existing AWS CloudFormation template. A template describes all of your resources and their properties. When you use that template to create an AWS CloudFormation stack, AWS CloudFormation provisions the Auto Scaling group, load balancer, and database for you. After the stack has been successfully created, your AWS resources are up and running. You can delete the stack just as easily, which deletes all the resources in the stack. By using AWS CloudFormation, you can easily manage a collection of resources as a single unit; whenever you work with a larger number of AWS resources together, CloudFormation is the best option. ----------------------------------- AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, etc. You simply upload your code, and Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, and auto-scaling to application health monitoring. The question, however, states that *a consulting firm repeatedly builds large architectures for their customers using AWS resources from several AWS services including IAM, Amazon EC2, Amazon RDS, DynamoDB and Amazon VPC*, which is a repeatable infrastructure-provisioning scenario that CloudFormation, not Elastic Beanstalk, addresses.

*You have deployed all your applications in the California region, and you are planning to use the Virginia region for DR purposes. You have heard about infrastructure as code and are planning to use that for deploying all the applications in the DR region. Which AWS service will help you achieve this?* * CodeDeploy * Elastic Beanstalk * OpsWorks * CloudFormation

*CloudFormation* AWS CloudFormation provides a common language for you to describe and provision all the infrastructure resources in your cloud environment. CloudFormation allows you to use a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all regions and accounts. This text file, where all the information is stored, is what is meant by infrastructure as code. ----------------------------------- AWS CodeDeploy is a service that automates software deployments to a variety of compute services including Amazon EC2, AWS Lambda, and instances running on-premises. AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS. AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet.

*One of your developers has made an API call that changed the state of a resource that was running fine earlier. Your security team wants to do an audit now. How do you track who made the change?* * CloudWatch * CloudTrail * SNS * STS

*CloudTrail* CloudTrail allows you to monitor and record all the API calls made in your account, so you can identify who made a call and when. ------------------------- CloudWatch does not record API calls, SNS is a notification service, and STS is used for gaining temporary credentials.
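A hedged sketch of how such an audit might start with boto3 is shown below; the region, time window, and the event name used as a filter are assumptions for illustration, not details given in the question.

```python
from datetime import datetime, timedelta
import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

# Look up management events for the last 24 hours filtered by event name.
# 'ModifyInstanceAttribute' is just an example of a state-changing call.
response = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventName", "AttributeValue": "ModifyInstanceAttribute"}
    ],
    StartTime=datetime.utcnow() - timedelta(days=1),
    EndTime=datetime.utcnow(),
)
for event in response["Events"]:
    print(event["EventTime"], event.get("Username"), event["EventName"])
```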

*You have configured a rule that whenever the CPU utilization of your EC2 goes up, Auto Scaling is going to start a new server for you. Which tool is Auto Scaling using to monitor the CPU utilization?* * CloudWatch metrics. * Output of the top command. * The ELB health check metric. * It depends on the operating system. Auto Scaling uses the OS-native tool to capture the CPU utilization.

*CloudWatch metrics.* Auto Scaling relies on the CloudWatch metrics to find the CPU utilization. Using the top command or the native OS tools, you should be able to identify the CPU utilization, but Auto Scaling does not use that.

*You have implemented AWS Cognito services to require users to sign in and sign up to your app through social identity providers like Facebook, Google, etc. Your marketing department wants users to try out the app anonymously and thinks the current log-in requirement is excessive and will reduce demand for products and services offered through the app. What can you offer the marketing department in this regard?* * It's too much of a security risk to allow unauthenticated users access to the app. * Cognito Identity supports guest users for the ability to enter the app and have limited access. * A second version of the app will need to be offered for unauthenticated users. * This is possible only if we remove the authentication from everyone.

*Cognito Identity supports guest users for the ability to enter the app and have limited access.* Amazon Cognito Identity Pools can support unauthenticated identities by providing a unique identifier and AWS credentials for users who do not authenticate with an identity provider. Unauthenticated users can be associated with a role that has limited access to resources as compared to a role for authenticated users. ----------------------------------- Option A is incorrect; Cognito can allow unauthenticated users without this being a security risk. Option C is incorrect; this is not necessary as unauthenticated users are allowed using Cognito. Option D is incorrect; Cognito supports both authenticated and unauthenticated users.
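A minimal sketch of enabling guest access with boto3 might look like the following; the pool name, region, and role ARNs are placeholders and would come from your own account.

```python
import boto3

cognito_identity = boto3.client("cognito-identity", region_name="us-east-1")

# Create an identity pool that allows guest (unauthenticated) identities.
pool = cognito_identity.create_identity_pool(
    IdentityPoolName="marketing_demo_pool",       # placeholder name
    AllowUnauthenticatedIdentities=True,
)

# Map authenticated and unauthenticated identities to different IAM roles,
# so guests only get the limited permissions attached to the guest role.
cognito_identity.set_identity_pool_roles(
    IdentityPoolId=pool["IdentityPoolId"],
    Roles={
        "authenticated": "arn:aws:iam::123456789012:role/AppAuthenticatedRole",  # placeholder ARN
        "unauthenticated": "arn:aws:iam::123456789012:role/AppGuestRole",        # placeholder ARN
    },
)
```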

*You want to run a mapreduce job (a part of the big data workload) for a noncritical task. Your main goal is to process it in the most cost-effective way. The task is throughput sensitive but not at all mission critical and can take a longer time. Which type of storage would you choose?* * Throughput Optimized HDD (st1) * Cold HDD (sc1) * General-Purpose SSD (gp2) * Provisioned IOPS (io1)

*Cold HDD (sc1)* Since the workload is not critical and you want to process it in the most cost-effective way, you should choose Cold HDD. Though the workload is throughput sensitive, it is not critical and is low priority; therefore, you should not choose st1. gp2 and io1 are more expensive than other options like st1.

*An application hosted on EC2 Instances has its promotional campaign due to start in 2 weeks. There is a mandate from the management to ensure that no performance problems are encountered due to traffic growth during this time. Which of the following must be done to the Auto Scaling Group to ensure this requirement can be fulfilled?* * Configure Step scaling for the Auto Scaling Group. * Configure Dynamic Scaling and use Target tracking scaling Policy. * Configure Scheduled scaling for the Auto Scaling Group. * Configure Static scaling for the Auto Scaling Group.

*Configure Dynamic Scaling and use Target tracking scaling Policy.* If you are scaling based on a utilization metric that increases or decreases proportionally to the number of instances in the Auto Scaling group, we recommend that you use a target tracking scaling policy. In target tracking scaling policies you select a predefined metric or configure a customized metric, and set a target value. EC2 Auto Scaling creates and manages the CloudWatch alarms that trigger the scaling policy and calculates the scaling adjustment based on the metric and the target value. The scaling policy adds or removes capacity as required to keep the metric at, or close to, the specified target value. Scheduled scaling works better when you can predict the load changes and when you know how long you need to run. Here in our scenario we only know that there will be heavy traffic during the campaign period (the period is not specified), but we are not sure about the actual traffic and have no history to predict it either.

*An application is hosted on EC2 Instances. There is a promotional campaign due to start in two weeks for the application. There is a mandate from the management to ensure that no performance problems are encountered due to traffic growth during this time. What action must be taken on the Auto Scaling Group to ensure this requirement can be fulfilled?* * Configure step scaling for the Auto Scaling Group. * Configure Dynamic scaling for the Auto Scaling Group. * Configure Scheduled scaling for the Auto Scaling Group. * Configure Static scaling for the Auto Scaling Group.

*Configure Dynamic scaling for the Auto Scaling Group.* If you are scaling based on a utilization metric that increases or decreases proportionally to the number of instances in the Auto Scaling group, we recommend that you use a target tracking scaling policy. In target tracking scaling policies you select a predefined metric or configure a customized metric, and set a target value. EC2 Auto Scaling creates and manages the CloudWatch alarms that trigger the scaling policy and calculates the scaling adjustment based on the metric and the target value. The scaling policy adds or removes capacity as required to keep the metric at, or close to, the specified target value. Scheduled scaling works better when you can predict the load changes and when you know how long you need to run. Here in our scenario we only know that there will be heavy traffic during the campaign period (the period is not specified), but we are not sure about the actual traffic and have no history to predict it either. ----------------------------------- *NOTE:* In this particular question, *Dynamic Scaling* is a more appropriate solution than *Scheduled Scaling*. The question mentions that a marketing campaign will start in 2 weeks but not how long it will run, so with Scheduled scaling we cannot specify the start time or end time. Moreover, scheduled scaling works better when you can predict the load changes; here we only know that there will be heavy traffic during the campaign period, we are not sure about the actual traffic, and we have no history to predict it either. *If we go for Dynamic Scaling and use the Target tracking scaling policy type*, it increases or decreases the current capacity of the group based on a target value for a specific metric. This is similar to the way that your thermostat maintains the temperature of your home - you select a temperature and the thermostat does the rest.

*An architecture consists of the following:* *a) An active/passive infrastructure hosted in AWS* *b) Both infrastructures comprise ELB, Auto Scaling, and EC2 resources* *How should Route 53 be configured to ensure proper failover in case the primary infrastructure were to go down?* * Configure a primary routing policy. * Configure a weighted routing policy. * Configure a Multi-Answer routing policy. * Configure a failover routing policy.

*Configure a failover routing policy.* You can create an active-passive failover configuration by using failover records. Create a primary and a secondary failover record that have the same name and type, and associate a health check with each. The various Route 53 routing policies are as follows: *Simple routing policy* - Use for a single resource that performs a given function for the domain, for example a web server that serves content for the example.com website. *Failover routing policy* - Use when you want to configure active-passive failover. *Geolocation routing policy* - Use when you want to route traffic based on the location of your users. *Geoproximity routing policy* - Use when you want to route traffic based on the location of your resources and, optionally, shift traffic from resources in one location to resources in another. *Latency routing policy* - Use when you have resources in multiple locations and you want to route traffic to the resource that provides the best latency. *Multivalue answer routing policy* - Use when you want Route 53 to respond to DNS queries with up to eight healthy records selected at random. *Weighted routing policy* - Use to route traffic to multiple resources in proportions that you specify.
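A hedged sketch of creating such a primary/secondary pair with boto3 follows; the hosted zone ID, record name, IP addresses, and health check ID are placeholders, and in a real active/passive setup you would typically use alias records pointing at each ELB instead of plain A records.

```python
import boto3

route53 = boto3.client("route53")

# Simplified active/passive pair using plain A records; all IDs and IPs
# below are placeholders.
route53.change_resource_record_sets(
    HostedZoneId="Z111111QQQQQQQ",          # placeholder hosted zone ID
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "A",
                    "SetIdentifier": "primary",
                    "Failover": "PRIMARY",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "203.0.113.10"}],
                    "HealthCheckId": "11111111-2222-3333-4444-555555555555",
                },
            },
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "A",
                    "SetIdentifier": "secondary",
                    "Failover": "SECONDARY",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "203.0.113.20"}],
                },
            },
        ]
    },
)
```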

*You are running your application on a fleet of EC2 instances across three AZs in the California region. You are also using elastic load balancing along with Auto Scaling to scale up and down your fleet of EC2 servers. While doing the benchmarks before deploying the application, you have found out that if the CPU utilization is around 60 percent, your application performs the best. How can you make sure that your EC2 servers are always running with 60 percent CPU utilization?* * Configure simple scaling with a step policy in Auto Scaling. * Limit the CPU utilization to 60 percent at the EC2 server level. * Configure a target tracking scaling policy. * Since the CPU load is dynamically generated by the application, you can't control it.

*Configure a target tracking scaling policy.* A target tracking scaling policy can keep the average aggregate CPU utilization of your Auto Scaling group at 60 percent. ----------------------------------- With simple or step scaling policies you have to define the CloudWatch alarms and scaling adjustments yourself; here you only need to hold a single metric at a target value, which is exactly what target tracking does. If you limit the CPU utilization to 60 percent at the EC2 server level, you are adding additional maintenance overhead.
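As a minimal sketch (assuming a hypothetical Auto Scaling group name and region), a target tracking policy pinned to 60 percent CPU could be created with boto3 like this:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-west-1")

# Keep the group's average CPU utilization at (roughly) 60 percent.
# The Auto Scaling group name is a placeholder.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-fleet-asg",
    PolicyName="keep-cpu-at-60",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,
    },
)
```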

*You are developing a document management application, and the business has two key requirements. The application should be able to maintain multiple versions of the document, and after one year the documents should be seamlessly archived. Which AWS service can serve this purpose?* * EFS * EBS * RDS * S3

*S3* S3 provides the versioning capability, and using a lifecycle policy, you can archive files to Glacier. ------------------------- If you use EFS or EBS, it is going to cost a lot, and you have to write code to get the versioning capability. You are still going to use Glacier for archiving. RDS is a relational database service and can't be used to store documents.

*A company is planning to use AWS Simple Storage Service for hosting their project documents. At the end of the project, the documents need to be moved to archival storage. Which of the following implementation steps would ensure the documents are managed accordingly?* * Adding a bucket policy on the S3 bucket * Configuring lifecycle configuration rules on the S3 bucket * Creating an IAM policy for the S3 bucket * Enabling CORS on the S3 bucket

*Configuring lifecycle configuration rules on the S3 bucket* Lifecycle configuration enables you to specify the lifecycle management of objects in a bucket. The configuration is a set of one or more rules, where each rule defines an action for Amazon S3 to apply to a group of objects. These actions can be classified as follows: Transition actions - In which you define when objects transition to another storage class. For example, you may choose to transition objects to the STANDARD_IA (IA, for infrequent access) storage class 30 days after creation, or archive objects to the GLACIER storage class one year after creation. Expiration actions - In which you specify when the objects expire. Then, Amazon S3 deletes the expired objects on your behalf.

*Development teams in your organization use S3 buckets to store log files for various applications hosted in AWS development environments. The developers intend to keep the logs for a month for troubleshooting purposes, and subsequently purge them. What feature will enable this requirement?* * Adding a bucket policy on the S3 bucket. * Configuring lifecycle configuration rules on the S3 bucket. * Creating an IAM policy for the S3 bucket. * Enabling CORS on the S3 bucket.

*Configuring lifecycle configuration rules on the S3 bucket.* Lifecycle configuration enables you to specify the lifecycle management of objects in a bucket. The configuration is a set of one or more rules, where each rule defines an action for Amazon S3 to apply to a group of objects. These actions can be classified as follows: *Transition actions* - In which you define when objects transition to another storage class. For example, you may choose to transition objects to the STANDARD_IA (IA, for infrequent access) storage class 30 days after creation, or archive objects to the GLACIER storage class one year after creation. *Expiration actions* - In which you specify when the objects expire. Then, Amazon S3 deletes the expired objects on your behalf. ----------------------------------- Option D is incorrect: CORS (Cross-Origin Resource Sharing) allows client web applications in one domain to interact with resources in a different domain; it has nothing to do with expiring objects.
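A small boto3 sketch of such a rule is shown below; the bucket name and the logs/ prefix are assumptions for illustration.

```python
import boto3

s3 = boto3.client("s3")

# Expire (delete) objects under the logs/ prefix 30 days after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="dev-application-logs",          # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "purge-logs-after-30-days",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Expiration": {"Days": 30},
            }
        ]
    },
)
```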

*A company is running three production server reserved EC2 Instances with EBS-backed root volumes. These instances have a consistent CPU load of 80%. Traffic is being distributed to these instances by an Elastic Load Balancer. They also have production and development Multi-AZ RDS MySQL databases. What recommendation would you make to reduce cost in this environment without affecting availability of mission-critical systems?* * Consider using On-demand instances instead of Reserved EC2 instances. * Consider not using a Multi-AZ RDS deployment database. * Consider using Spot instances instead of Reserved EC2 instances. * Consider removing the Elastic Load Balancer.

*Consider not using a Multi-AZ RDS deployment for the development database.* Multi-AZ databases are better suited for production environments than for development environments, so you can reduce costs by not using Multi-AZ for development. Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. In case of infrastructure failure, Amazon RDS performs an automatic failover to the standby (or to a read replica in the case of Amazon Aurora), so that you can resume database operations as soon as the failover is complete. Since the endpoint for your DB Instance remains the same after a failover, your application can resume database operations without the need for manual administrative intervention. *Note:* Mission-critical systems here refer to the production instances and databases. However, the company also runs a Multi-AZ RDS deployment in the development environment, which is not necessary; only the production environment needs that level of availability. To reduce cost, disable Multi-AZ for the development database and keep it only for production.

*A company is planning on using the AWS Redshift service. The Redshift service and data on it would be used continuously for the next 3 years as per the current business plan. Which of the following would be the most cost-effective solution in this scenario?* * Consider using On-demand instances for the Redshift Cluster * Enable Automated backup * Consider using Reserved Instances for the Redshift Cluster * Consider not using a cluster for the Redshift nodes

*Consider using Reserved Instances for the Redshift Cluster.* If you intend to keep your Amazon Redshift cluster running continuously for a prolonged period, you should consider purchasing a reserved node offering. These offerings provide significant savings over on-demand pricing, but they require you to reserve compute nodes and commit to paying for those nodes for either a one-year or three-year duration.

*Instances in your private subnet hosted in AWS, need access to important documents in S3. Due to the confidential nature of these documents, you have to ensure that this traffic does not traverse through the internet. As an architect, how would you implement this solution?* * Consider using a VPC Endpoint. * Consider using an EC2 Endpoint. * Move the instance to a public subnet. * Create a VPN connection and access the S3 resources from the EC2 Instance.

*Consider using a VPC Endpoint.* A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network.
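A minimal boto3 sketch of creating a gateway endpoint for S3 follows; the region, VPC ID, and route table ID are placeholder assumptions.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a gateway endpoint for S3 and attach it to the route table used by
# the private subnet; IDs below are placeholders.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)
```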

*You have a requirement to host a web based application. You need to enable high availability for the application, so you create an Elastic Load Balancer and place the EC2 Instances behind the Elastic Load Balancer. You need to ensure that users only access the application via the DNS name of the load balancer. How would you design the network part of the application?* (Choose 2) * Create 2 public subnets for the Elastic Load Balancer * Create 2 private subnets for the Elastic Load Balancer * Create 2 public subnets for the backend instances * Create 2 private subnets for the backend instances

*Create 2 public subnets for the Elastic Load Balancer* *Create 2 private subnets for the backend instances* You must create public subnets in the same Availability Zones as the private subnets that are used by your private instances. Then associate these public subnets with the internet-facing load balancer. ----------------------------------- Option B is incorrect since the ELB needs to be placed in the public subnet to allow access from the Internet. Option C is incorrect for security reasons; a private subnet gives the backend instances better protection from attacks.

*A CloudFront distribution is being used to distribute content from an S3 bucket. It is required that only a particular set of users get access to certain content. How can this be accomplished?* * Create IAM Users for each user and then provide access to the S3 bucket content. * Create IAM Groups for each set of users and then provide access to the S3 bucket content. * Create CloudFront signed URLs and then distribute these URLs to the users. * Use IAM Policies for the underlying S3 buckets to restrict content.

*Create CloudFront signed URLs and then distribute these URLs to the users.* Many companies that distribute content via the internet want to restrict access to documents, business data, media streams, or content that is intended for selected users, for example users who have paid a fee. To securely serve this private content using CloudFront, you can do the following... - Require that your users access your private content by using special CloudFront signed URLs or signed cookies. - Require that your users access your Amazon S3 content using CloudFront URLs, not Amazon S3 URLs. Requiring CloudFront URLs isn't mandatory, but we recommend it to prevent users from bypassing the restrictions that you specify in signed URLs or signed cookies.

*Your company has a set of EC2 Instances hosted on the AWS Cloud. As an architect you have been told to ensure that if any instance's status check indicates a failure, the instance is automatically restarted. How can you achieve this in the MOST efficient way possible?* * Create CloudWatch alarms that stop and start the instance based off the status check alarms. * Write a script that queries the EC2 API for each instance status check. * Write a script that periodically shuts down and starts instances based on certain stats. * Implement a third-party monitoring tool.

*Create CloudWatch alarms that stop and start the instance based off the status check alarms.* Using Amazon CloudWatch alarm actions, you can create alarms that automatically stop, terminate, reboot, or recover your EC2 instances. You can use the stop or terminate actions to help you save money when you no longer need an instance to be running. You can use the reboot and recover actions to automatically reboot those instances or recover them onto new hardware if a system impairment occurs. All other options are possible, but would just add extra maintenance overhead.
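As a hedged illustration, the alarm below uses the StatusCheckFailed_System metric with an EC2 recover action; the instance ID and region are placeholders, and the recover action ARN format is the one AWS documents for alarm actions.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Recover the instance onto new hardware if the system status check fails
# for two consecutive minutes. Instance ID and region are placeholders.
cloudwatch.put_metric_alarm(
    AlarmName="recover-i-0123456789abcdef0",
    Namespace="AWS/EC2",
    MetricName="StatusCheckFailed_System",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=2,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:automate:us-east-1:ec2:recover"],
)
```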

*You have a set of EC2 Instances that support an application. They are currently hosted in the US Region. In the event of a disaster, you need a way to ensure that you can quickly provision the resources in another region. How could this be accomplished?* (Choose 2) * Copy the underlying EBS Volumes to the destination region. * Create EBS Snapshots and then copy them to the destination region. * Create AMIs for the underlying instances. * Copy the metadata for the EC2 Instances to S3.

*Create EBS Snapshots and then copy them to the destination region.* *Create AMIs for the underlying instances.* AMIs can be used to create a snapshot or template of the underlying instance. You can copy the AMI to another region. You can also make snapshots of the volumes and then copy them to the destination region.
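A short boto3 sketch of both copy operations is shown below; the source/destination regions and the snapshot and AMI IDs are placeholder assumptions.

```python
import boto3

# Copy operations are issued from the destination (DR) region.
ec2_dr = boto3.client("ec2", region_name="eu-west-1")   # placeholder DR region

# Copy an EBS snapshot from the source region.
ec2_dr.copy_snapshot(
    SourceRegion="us-east-1",
    SourceSnapshotId="snap-0123456789abcdef0",           # placeholder ID
    Description="DR copy of app data volume",
)

# Copy an AMI from the source region.
ec2_dr.copy_image(
    Name="app-server-dr-copy",
    SourceRegion="us-east-1",
    SourceImageId="ami-0123456789abcdef0",               # placeholder ID
)
```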

*You are working as an architect in your organization. You have peered VPC A as a requestor and VPC B as accepter and both VPCs can communicate with each other. Now you want resources in both the VPCs to reach out to the internet but anyone on the internet should not be able to reach resources within both the VPCs. Which of the following statements is true?* * Create a NAT Gateway on Requestor VPC (VPC A) and configure a route in Route table with NAT Gateway. VPC B can route to internet through VPC A NAT Gateway. * Create an Internet Gateway on Requestor VPC (VPC A) and configure a route in Route table with Internet Gateway. VPC B can route to internet through VPC A Internet Gateway. * Create NAT Gateways on both VPCs and configure routes in respective route tables with NAT Gateways. * Create a NAT instance on Requestor VPC (VPC A). VPC B can route to internet through VPC A NAT Instance.

*Create NAT Gateways on both VPCs and configure routes in respective route tables with NAT Gateways.* ----------------------------------- For Option A, when a NAT Gateway is created and configured for VPC A, the resources within VPC A can reach out to the internet. But VPC B resources cannot reach the internet through the NAT Gateway created in VPC A even though the two VPCs are peered; this would require transitive routing, which is not supported in AWS. For Option B, an Internet Gateway allows two-way traffic. But the requirement is only for resources to reach out to the internet; inbound traffic from the internet should not be allowed, so an Internet Gateway is not the correct choice. For Option D, similar to Option A, this would require transitive peering and hence is not supported. NOTE: AWS recommends using a NAT Gateway over a NAT instance.

*Currently a company makes use of the EBS snapshots to back up their EBS Volumes. As a part of the business continuity requirement, these snapshots need to be made available in another region. How can this be achieved?* * Directly create the snapshot in the other region. * Create Snapshot and copy the snapshot to a new region. * Copy the snapshot to an S3 bucket and then enable Cross-Region Replication for the bucket. * Copy the EBS Snapshot to an EC2 instance in another region.

*Create Snapshot and copy the snapshot to a new region.* A snapshot is constrained to the region where it was created. After you create a snapshot of an EBS volume, you can use it to create new volumes in the same region. For more information, follow the link on Restoring an Amazon EBS Volume from a Snapshot below. You can also copy snapshots across regions, making it possible to use multiple regions for geographical expansion, data center migration, and disaster recovery. ----------------------------------- Option C is incorrect because the snapshots taken from EBS are stored in AWS-managed S3; you do not have the option to access the snapshots as S3 objects, so this approach is not possible.

*You are developing a new mobile application which is expected to be used by thousands of customers. You are considering storing user preferences in AWS, and need a data store to save them. Each data item is expected to be 20KB in size. The solution needs to be cost-effective, highly available, scalable and secure. How would you design the data layer?* * Create a new AWS MySQL RDS instance and store the user data there. * Create a DynamoDB table with the required Read and Write capacity and use it as the data layer. * Use Amazon Glacier to store the data. * Use an Amazon Redshift Cluster for managing the user preferences.

*Create a DynamoDB table with the required Read and Write capacity and use it as the data layer.* In this case, since each data item is 20KB and DynamoDB is an ideal data layer for storing user preferences, this would be the right choice. DynamoDB is also a highly scalable and available service.
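For illustration, a minimal boto3 sketch of such a table is shown below; the table name, key name, capacity units, and sample item are assumptions, not requirements from the question.

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# User preferences keyed by user_id; capacity figures are placeholders and
# would be sized from the expected request rate.
dynamodb.create_table(
    TableName="UserPreferences",
    AttributeDefinitions=[{"AttributeName": "user_id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "user_id", "KeyType": "HASH"}],
    ProvisionedThroughput={"ReadCapacityUnits": 10, "WriteCapacityUnits": 10},
)

# Storing a small preferences document for a user.
dynamodb.put_item(
    TableName="UserPreferences",
    Item={
        "user_id": {"S": "user-123"},
        "preferences": {"S": '{"theme": "dark", "language": "en"}'},
    },
)
```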

*A company currently hosts its architecture in the US region. They now need to duplicate this architecture to the Europe region and extend the application hosted on this architecture to the new region. In order to ensure that users across the globe get the same seamless experience from either setup, what among the following needs to be done?* * Create a Classic Elastic Load Balancer setup to route traffic to both locations. * Create a weighted Route 53 policy to route the policy based on the weightage for each location. * Create an Application Elastic Load Balancer setup to route traffic to both locations. * Create a Geolocation Route 53 Policy to route the traffic based on the location.

*Create a Geolocation Route 53 Policy to route the traffic based on the location.* Geolocation routing lets you choose the resources that serve your traffic based on the geographic location of your users, meaning the location that DNS queries originate from.

*Currently, you have a NAT Gateway defined for your private instances. You need to make the NAT Gateway highly available. How can this be accomplished?* * Create another NAT Gateway and place it behind an ELB. * Create a NAT Gateway in another Availability Zone. * Create a NAT Gateway in another region. * Use Auto Scaling groups to scale the NAT Gateway.

*Create a NAT Gateway in another Availability Zone.* If you have resources in multiple Availability Zones and they share one NAT Gateway, in the event that the NAT Gateway's Availability Zone is down, resources in other Availability Zones lose internet access. To create an Availability Zone-independent architecture, create a NAT Gateway in each Availability Zone and configure your routing to ensure that resources use the NAT Gateway in the same Availability Zone.

*An EC2 instance in a private subnet needs access to an S3 bucket placed in the same region as the EC2 instance. The EC2 instance needs to upload and download large files to the S3 bucket frequently. As an AWS solutions architect, what quick and cost-effective solution would you suggest to your customers? You need to consider the fact that the EC2 instances are present in a private subnet, and the customers do not want their data to be exposed over the internet.* * Place the S3 bucket in another public subnet of the same region and create a VPC peering connection to the private subnet where the EC2 instance is placed. The traffic to upload and download files will go through Amazon's secure private network. * The quick and cost-effective solution would be to create an IAM role having access over the S3 service and assign it to the EC2 instance. * Create a VPC endpoint for S3, and use your route tables to control which instances can access resources in Amazon S3 via the endpoint. The traffic to upload and download files will go through the Amazon private network. * A private subnet can always access the S3 bucket/service through a NAT Gateway or NAT instance, so there is no need for additional setup.

*Create a VPC endpoint for S3, and use your route tables to control which instances can access resources in Amazon S3 via the endpoint. The traffic to upload and download files will go through the Amazon private network.* This is the correct option. To access the S3 service in the same region as the VPC whose private subnet contains the EC2 instance, you can create a VPC endpoint and update the route entry of the route table associated with the private subnet. This is a quick solution as well as cost effective, as it uses Amazon's own private network and hence does not expose the data over the internet. ----------------------------------- Option A is incorrect because the S3 service is region-specific, not AZ-specific, and S3 buckets are not placed in subnets, so the statement about placing the S3 bucket in a public subnet does not apply. Option B is incorrect; it is indeed a quick solution but more expensive, since the EC2 instance would communicate with S3 over its public endpoint. When the public endpoint is used, uploads and downloads traverse the internet, exposing the data over the internet, and the requests have a cost associated with them. Option D is incorrect, as this is certainly not a default setup unless we create a NAT Gateway or NAT instance; even if they exist, that approach is more expensive and exposes the data over the internet.

*You work as an architect for a company. An application is going to be deployed on a set of EC2 instances in a private subnet of VPC. You need to ensure that IT administrators can securely administer the instances in the private subnet. How can you accomplish this?* * Create a NAT gateway, ensure SSH access is provided to the NAT gateway. Access the Instances via the NAT gateway. * Create a NAT instance in a public subnet, ensure SSH access is provided to the NAT instance. Access the Instances via the NAT instance. * Create a bastion host in the private subnet. Make IT admin staff use this as a jump server to the backend instances. * Create a bastion host in the public subnet. Make IT admin staff use this as a jump server to the backend instances.

*Create a bastion host in the public subnet. Make IT admin staff use this as a jump server to the backend instances.* A bastion host is a server whose purpose is to provide access to a private network from an external network, such as the Internet. Because of its exposure to potential attack, a bastion host must minimize the chances of penetration. For example, you can use a bastion host to mitigate the risk of allowing SSH connections from an external network to the Linux instances launched in a private subnet of your Amazon Virtual Private Cloud (VPC). ----------------------------------- Options A and B are invalid because you would not route administrative access via a NAT instance or NAT gateway. Option C is incorrect since the bastion host needs to be in the public subnet.

*A company plans on deploying a batch processing application in AWS. Which of the following is an ideal way to host this application?* (Choose 2) * Copy the batch processing application to an ECS Container. * Create a docker image of your batch processing application. * Deploy the image as an Amazon ECS task. * Deploy the container behind the ELB.

*Create a docker image of your batch processing application.* *Deploy the image as an Amazon ECS task.* Docker containers are particularly suited for batch job workloads. Batch jobs are often short lived and embarrassingly parallel. You can package your batch processing application into a Docker image so that you can deploy it anywhere, such as an Amazon ECS task.

*You are creating a new website which will both be public facing and private to users who have signed in. Due to the sensitive nature of the website you need to enable two-factor authentication for when users sign in. Users will type in their user name and password and then the website will need to text the users a 4 digit code to their mobile devices using SMS. The user will then enter this code and will be able to sign in successfully. You need to do this as quickly as possible. Which AWS service will allow you to build this as quickly as possible?* * Create a new pool in Cognito and set the MFA to required when creating the pool. * Create a new group in Identity and Access Management and assign the users of the website to the new group. Edit the group settings to require MFA so that the users in the group will inherit the permissions automatically. * Enable multi-factor authentication in Identity and Access Management for your root account. Set up an SNS trigger to SMS you the code for when the users sign in. * Design an MFA solution using a combination of API Gateway, Lambda, DynamoDB and SNS.

*Create a new pool in Cognito and set the MFA to required when creating the pool.* When creating the pool you can set MFA as Required. This will also enable use of Advanced security features which require MFA to function.
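A hedged sketch of creating such a pool with boto3 follows; the pool name and the SNS caller role ARN (which Cognito needs in order to send SMS messages) are placeholder assumptions.

```python
import boto3

cognito_idp = boto3.client("cognito-idp", region_name="us-east-1")

# Create a user pool with MFA required; the SNS caller role ARN is a
# placeholder and must allow Cognito to send SMS messages on your behalf.
cognito_idp.create_user_pool(
    PoolName="website-users",
    MfaConfiguration="ON",
    AutoVerifiedAttributes=["phone_number"],
    SmsConfiguration={
        "SnsCallerArn": "arn:aws:iam::123456789012:role/CognitoSNSRole",  # placeholder ARN
        "ExternalId": "website-users-external-id",                        # placeholder
    },
)
```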

*You are designing a web page for event registration. Whenever a user signs up for an event, you need to be notified immediately. Which AWS service are you going to choose for this?* * Lambda * SNS * SQS * Elastic Beanstalk

*SNS* SNS allows you to send a notification, such as a text message or email, as soon as a user signs up. ------------------------- Lambda lets you run code without provisioning or managing servers, SQS is a managed message queuing service, and Elastic Beanstalk helps in deploying and managing web applications in the cloud.

*An application hosted in AWS allows users to upload video to an S3 bucket. A user is required to be given access to upload some video for a week based on the profile. How can this be accomplished in the best way possible?* * Create an IAM bucket policy to provide access for a week's duration. * Create a pre-signed URL for each profile which will last for a week's duration. * Create an S3 bucket policy to provide access for a week's duration. * Create an IAM role to provide access for a week's duration.

*Create a pre-signed URL for each profile which will last for a week's duration.* Pre-signed URLs are the perfect solution when you want to give temporary access to users for S3 buckets. So, whenever a new profile is created, you can create a pre-signed URL that lasts for a week and allows the user to upload the required objects.
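A minimal boto3 sketch is shown below; the bucket name and object key are placeholders, and one week (604800 seconds) happens to be the maximum expiry allowed for SigV4 pre-signed URLs.

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# Generate an upload URL that is valid for exactly one week.
# Bucket and key are placeholders derived from the user's profile.
url = s3.generate_presigned_url(
    ClientMethod="put_object",
    Params={"Bucket": "video-uploads-bucket", "Key": "profiles/user-123/video.mp4"},
    ExpiresIn=7 * 24 * 3600,
)
print(url)  # hand this URL to the user; it stops working after a week
```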

*Your current setup in AWS consists of the following architecture: 2 public subnets, one subnet which has web servers accessed by users across the Internet and another subnet for the database server. Which of the following changes to the architecture adds a better security boundary to the resources hosted in this setup?* * Consider moving the web server to a private subnet. * Create a private subnet and move the database server to a private subnet. * Consider moving both the web and database servers to a private subnet. * Consider creating a private subnet and adding a NAT Instance to that subnet.

*Create a private subnet and move the database server to a private subnet.* The ideal setup is to host the web server in the public subnet so that it can be accessed by users on the Internet. The database server can be hosted in the private subnet.

*Your team has developed an application and now needs to deploy that application onto an EC2 Instance. This application interacts with a DynamoDB table. Which of the following is the correct and MOST SECURE way to ensure that the application interacts with the DynamoDB table?* * Create a role which has the necessary permissions and can be assumed by the EC2 instance. * Use the API credentials from an EC2 instance. Ensure the environment variables are updated with the API access keys. * Use the API credentials from a bastion host. Make the application on the EC2 Instance send requests via the bastion host. * Use the API credentials from a NAT Instance. Make the application on the EC2 Instance send requests via the NAT Instance.

*Create a role which has the necessary permissions and can be assumed by the EC2 instance.* IAM roles are designed so that your applications can securely make API requests from your instances, without requiring you to manage the security credentials that the applications use. ----------------------------------- Options B, C, and D are invalid because it is not secure to use long-lived API credentials from any EC2 instance. The API credentials can be tampered with and hence this is not the ideal, secure way to make API calls.

*You are hosting a website on AWS. The website has two components: the web server that is hosted on an EC2 instance in a public subnet and the database server that is hosted on RDS in a public subnet. Your users must use SSL to connect to the web servers, and the web server should be able to connect to the database server. What should you do in this scenario?* (Choose two.) * Create a security group for the web server. The security group should allow HTTPS traffic on port 443 for inbound traffic from 0.0.0.0/0 (anywhere). * Create a security group for the database server. The security group should allow HTTPS traffic from port 443 for inbound traffic from the web server security group. * Create an ACL for the web server's subnet, which should allow HTTPS on port 443 from 0.0.0.0/0 for inbound traffic and deny all outgoing traffic. * Create an ACL for the database server's subnet, which should allow 1521 inbound from a web server as inbound traffic and deny all outgoing traffic. * Create a security group for the database server. The security group should allow traffic on the 1521 TCP port from the web server security group.

*Create a security group for the web server. The security group should allow HTTPS traffic on port 443 for inbound traffic from 0.0.0.0/0 (anywhere).* *Create a security group for the database server. The security group should allow traffic on the 1521 TCP port from the web server security group.* By using a security group, you can define the inbound and outbound traffic for a web server. Anyone should be able to connect to your web server from anywhere; thus, you need to allow 0.0.0.0/0. The web server is serving HTTPS on port 443; thus, you need to allow that port. On the database server, only the web server should be given access, and the port the database listens on is 1521. ------------------------- This question has two aspects. First, all users should be able to connect to the web server via SSL, and second, the web server should be able to connect to the database server. Since security groups are used to allow traffic for both the EC2 instance and RDS, you should discard the options that use ACLs, which operate at the subnet level and not at the instance level. Therefore, C and D are incorrect. In addition, the web server should be able to connect to the database server, which means the database server's port should be open to the web server. Therefore, B is incorrect since it opens port 443 (the web server's port) rather than the database port that needs to be open.
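As an illustration, the two ingress rules could be created with boto3 roughly as follows; the security group IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

WEB_SG = "sg-0aaaaaaaaaaaaaaaa"   # placeholder web server security group
DB_SG = "sg-0bbbbbbbbbbbbbbbb"    # placeholder database security group

# Allow HTTPS (443) to the web server from anywhere.
ec2.authorize_security_group_ingress(
    GroupId=WEB_SG,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# Allow the database port (1521) only from the web server security group.
ec2.authorize_security_group_ingress(
    GroupId=DB_SG,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 1521, "ToPort": 1521,
        "UserIdGroupPairs": [{"GroupId": WEB_SG}],
    }],
)
```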

*You work for a company that has a set of EC2 Instances. There is an internal requirement to create another instance in another availability zone. One of the EBS volumes from the current instance needs to be moved from one of the older instances to the new instance. How can you achieve this?* * Detach the volume and attach to an EC2 instance in another AZ. * Create a new volume in the other AZ and specify the current volumes as the source. * Create a snapshot of the volume and then create a volume from the snapshot in the other AZ. * Create a new volume in the AZ and do a disk copy of contents from one volume to another.

*Create a snapshot of the volume and then create a volume from the snapshot in the other AZ.* In order for a volume to be available in another Availability Zone, you need to first create a snapshot from the volume. Then, when creating a volume from that snapshot, you can specify the new Availability Zone accordingly. ----------------------------------- Option A is invalid, because the instance and volume have to be in the same AZ in order for the volume to be attached to the instance. Option B is invalid, because there is no way to specify a volume as a source. Option D is invalid, because a disk copy would just be a tedious process.

*You are running your EC2 instance in us-west-1. You are using an EBS volume along with the EC2 server to store the data. You need to move this instance to us-west-2. What is the best way to move all the data?* * Unmount or detach the EBS volume from us-west-1 and mount it to the new EC2 instance in us-west-2. * Create a snapshot of the volume in us-west-1 and create a volume from the snapshot in us-west-2 and then mount it to the EC2 server. * Copy all the data from EBS running on us-west-1 to S3 and then create a new EBS volume in us-west-2 and restore the data from S3 into it. * Create a new volume in us-west-2 and copy everything from us-west-1 to us-west-2 using the disk copy.

*Create a snapshot of the volume in us-west-1 and create a volume from the snapshot in us-west-2 and then mount it to the EC2 server.* The EBS volumes are AZ specific. Therefore, you need to create a snapshot of the volume and then use the snapshot to create a volume in a different AZ. ------------------------- The EBS volume can't be mounted across different AZs. Taking a snapshot is going to copy all the data, so you don't have to copy all the data manually or via disk copy.

*A company is planning on hosting an application in AWS. The application will consist of a web layer and a database layer. Both will be hosted in a default VPC. The web server is created in a public subnet and the MySQL database in a private subnet. All subnets are created with the default ACL settings. Following are the key requirements: a) The web server must be accessible only to customers on an SSL connection. b) The database should only be accessible to web servers in the public subnet. Which solution meets these requirements without impacting other running applications?* (Choose 2) * Create a network ACL on the web server's subnets, allow HTTPS port 443 inbound and specify the source as 0.0.0.0/0. * Create a web server security group that allows HTTPS port 443 inbound traffic from anywhere (0.0.0.0/0) and apply it to the web servers. * Create a DB server security group that allows MySQL port 3306 inbound and specify the source as the web server security group. * Create a network ACL on the DB subnet, allow MYSQL port 3306 inbound for web servers and deny all outbound traffic.

*Create a web server security group that allows HTTPS port 443 inbound traffic from anywhere (0.0.0.0/0) and apply it to the web servers.* *Create a DB server security group that allows MySQL port 3306 inbound and specify the source as the web server security group.* 1) Option B: To ensure that secure traffic can flow into your web server from anywhere, you need to allow inbound traffic on port 443. 2) Option C: This ensures that traffic can flow from the web servers to the database server via the database security group.

*Your company is planning on using Route 53 as the DNS provider. There is a need to ensure that the company's domain name points to an existing CloudFront distribution. How can this be achieved?* * Create an Alias record which points to the CloudFront distribution. * Create a host record which points to the CloudFront distribution. * Create a CNAME record which points to the CloudFront distribution. * Create a Non-Alias Record which points to the CloudFront distribution.

*Create an Alias record which points to the CloudFront distribution.* While ordinary Amazon Route 53 records are standard DNS records, alias records provide a Route 53-specific extension to the DNS functionality. Instead of an IP address or a domain name, an alias record contains a pointer to a CloudFront distribution, an Elastic Beanstalk environment, an ELB Classic, Application, or Network Load Balancer, an Amazon S3 bucket that is configured as a static website, or another Route 53 record in the same hosted zone. When Route 53 receives a DNS query that matches the name and type in an alias record, Route 53 follows the pointer and responds with the applicable value. *Note:* Route 53 uses an alias record to connect to CloudFront because the alias record is a Route 53 extension to DNS. An alias record is similar to a CNAME record, but the main difference is that you can create an alias record for both the root domain and sub-domains, whereas a CNAME record can be created only for a sub-domain.
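A hedged boto3 sketch of creating such an alias record follows; the hosted zone ID of your domain and the distribution domain name are placeholders, while Z2FDTNDATAQYW2 is the fixed alias-target hosted zone ID that AWS documents for CloudFront distributions.

```python
import boto3

route53 = boto3.client("route53")

# Alias the apex domain to a CloudFront distribution.
route53.change_resource_record_sets(
    HostedZoneId="Z111111QQQQQQQ",          # placeholder: your domain's hosted zone
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "example.com",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "Z2FDTNDATAQYW2",               # CloudFront alias target zone
                    "DNSName": "d111111abcdef8.cloudfront.net",     # placeholder distribution
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)
```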

*Your company has a set of applications that make use of Docker containers used by the Development team. There is a need to move these containers to AWS. Which of the following methods could be used to setup these Docker containers in a separate environment in AWS?* * Create EC2 Instances, install Docker and then upload the containers. * Create EC2 Container registries, install Docker and then upload the containers. * Create an Elastic Beanstalk environment with the necessary Docker containers. * Create EBS Optimized EC2 Instances, install Docker and then upload the containers.

*Create an Elastic Beanstalk environment with the necessary Docker containers.* Elastic Beanstalk supports the deployment of web applications from Docker containers. With Docker containers, you can define your own runtime environment. You can choose your own platform, programming language, and any application dependencies (such as package managers or tools) that aren't supported by other platforms. Docker containers are self-contained and include all the configuration information and software your web application requires to run. ----------------------------------- Option A could be partially correct, as we could install Docker on an EC2 instance. In addition to this, you would need to create an ECS task definition which details the Docker image to use for containers, how many containers to run, and the resource allocation for each container. With Option C we have the added advantage that, if a Docker container running in an Elastic Beanstalk environment crashes or is killed for any reason, Elastic Beanstalk restarts it automatically. Since the question asks for the best method to set up these Docker containers in a separate environment, Option C is the most appropriate.

*A customer is hosting their company website on a cluster of web servers that are behind a public-facing load balancer. The customer also uses Amazon Route 53 to manage their public DNS. How should Route 53 be configured to ensure the custom domain is made to point to the load balancer?* (Choose 2) * Create an A record pointing to the IP address of the load balancer. * Create a CNAME record pointing to the load balancer DNS name. * Create an alias record for a CNAME record to the load balancer DNS name. * Ensure that a hosted zone is in place.

*Create an alias record for a CNAME record to the load balancer DNS name.* *Ensure that a hosted zone is in place.* While ordinary Amazon Route 53 records are standard DNS records, alias records provide a Route 53-specific extension to DNS functionality. Instead of an IP address or a domain name, an alias record contains a pointer to an AWS resource such as a CloudFront distribution or an Amazon S3 bucket. When Route 53 receives a DNS query that matches the name and type in an alias record, Route 53 follows the pointer and responds with the applicable value. - An alternate domain name for a CloudFront distribution - Route 53 responds as if the query had asked for the CloudFront distribution by using the CloudFront domain name, such as d111111abcdef8.cloudfront.net. - An Elastic Beanstalk environment - Route 53 responds to each query with one or more IP addresses for the environment. - An ELB load balancer - Route 53 responds to each query with one or more IP addresses for the load balancer. - An Amazon S3 bucket that is configured as a static website - Route 53 responds to each query with one IP address for the Amazon S3 bucket. Option D is correct. A hosted zone is a container for records, and records contain information about how you want to route traffic for a specific domain, such as example.com, and its subdomains (vpc.example.com, elb.example.com). A hosted zone and the corresponding domain have the same name, and there are 2 types of hosted zones: Public Hosted Zones contain records that specify how you want to route traffic on the internet, and Private Hosted Zones contain records that specify how you want to route traffic in an Amazon VPC. ----------------------------------- Options A and B are incorrect since you need to use alias records for this.

*You are building an internal training system to train your employees. You are planning to host all the training videos on S3. You are also planning to use CloudFront to serve the videos to employees. Using CloudFront, how do you make sure that videos can be accessed only via CloudFront and not publicly directly from S3?* * Create an origin access identity (OAI) for CloudFront and grant access to the object in your S3 bucket to that OAI. * Create an IAM user for CloudFront and grant access to the objects in your S3 bucket to that IAM user. * Create a security group for CloudFront and provide that security group to S3. * Encrypt the object in S3.

*Create an origin access identity (OAI) for CloudFront and grant access to the object in your S3 bucket to that OAI.* By creating an origin access identity and granting only that identity access to your bucket, the objects in S3 can be reached only via CloudFront and not directly from S3. ------------------------- Creating an IAM user won't guarantee that the object is accessed only via CloudFront. You can use a security group for EC2 and ELB along with CloudFront but not for S3. Encryption does not change the way an object can be accessed.
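A rough sketch of the two steps with boto3 is shown below; the caller reference, comment, and bucket name are placeholder assumptions.

```python
import json
import boto3

cloudfront = boto3.client("cloudfront")
s3 = boto3.client("s3")

# Step 1: create the origin access identity (OAI).
oai = cloudfront.create_cloud_front_origin_access_identity(
    CloudFrontOriginAccessIdentityConfig={
        "CallerReference": "training-videos-oai",          # placeholder
        "Comment": "OAI for training video bucket",
    }
)
oai_id = oai["CloudFrontOriginAccessIdentity"]["Id"]

# Step 2: grant only the OAI read access to the bucket. There is no
# public-read grant, so direct S3 URLs are denied. Bucket name is a placeholder.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {
            "AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {oai_id}"
        },
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::training-videos-bucket/*",
    }],
}
s3.put_bucket_policy(Bucket="training-videos-bucket", Policy=json.dumps(policy))
```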

*What are the different ways of making an EC2 server available to the public?* * Create it inside a public subnet * Create it inside a private subnet and assign a NAT device * Attach an IPv6 IP address * Allocate that with a load balancer and expose the load balancer to the public

*Create it inside a public subnet* If you create an EC2 instance in a public subnet, it is reachable from the Internet. Creating an instance inside a private subnet and attaching a NAT device won't give access from the Internet. Attaching an IPv6 address can provide Internet accessibility provided it is a public IPv6 address and not a private one. Exposing a load balancer to the public won't make the EC2 instance itself publicly accessible.

*You are creating a number of EBS Volumes for the EC2 Instances hosted in your company's AWS account. The company has asked you to ensure that the EBS volumes are available even in the event of a disaster. How would you accomplish this?* * Configure Amazon Storage Gateway with EBS volumes as the data source and store the backups on premise through the storage gateway. * Create snapshots of the EBS Volumes. * Ensure the snapshots are made available in another availability zone. * Ensure the snapshots are made available in another region.

*Create snapshots of the EBS Volumes.* *Ensure the snapshots are made available in another region.* You can back up the data on Amazon EBS volumes to Amazon S3 by taking point-in-time snapshots. Snapshots are *incremental backups*, which means that only the blocks on the device that have changed after your most recent snapshot are saved. This minimizes the time required to create the snapshot and saves on storage costs by not duplicating data. When you delete a snapshot, only the data unique to that snapshot is removed. Each snapshot contains all the information needed to restore your data (from the moment when the snapshot was taken) to a new EBS volume. ----------------------------------- Option A is incorrect since you have to make use of EBS snapshots. Option C is incorrect since the snapshots need to be made available in another region for disaster recovery purposes.

*You are running your MySQL database in RDS. The database is critical for you, and you can't afford to lose any data in the case of any kind of failure. What kind of architecture will you go with for RDS?* * Create the RDS across multiple regions using a cross-regional read replica * Create the RDS across multiple AZs in master standby mode * Create the RDS and create multiple read replicas in multiple AZs with the same region * Create a multimaster RDS database across multiple AZs

*Create the RDS across multiple AZs in master standby mode* If you use a cross-regional replica and a read replica within the same region, the data replication happens asynchronously, so there is a chance of data loss. Multimaster is not supported in RDS. By creating the master and standby architecture, the data replication happens synchronously, so there is zero data loss.

*What are the various ways of securing a database running in RDS?* (Choose two.) * Create the database in a private subnet * Encrypt the entire database * Create the database in multiple AZs * Change the IP address of the database every week

*Create the database in a private subnet* *Encrypt the entire database* Creating the database in multiple AZs is going to provide high availability and has nothing to do with security. Changing the IP address every week will be a painful activity and still won't secure the database if you don't encrypt it.

*You are deploying a three-tier application and want to make sure the application is secured at all layers. What should you be doing to make sure it is taken care of?* * Create the web tier in a public subnet, and create the application and database tiers in the private subnet. Use HTTPS for all the communication to the web tier and encrypt the data at rest and in transit. * Create the web tier and application tier in the public subnet, and create the database tier in the private subnet. Use HTTPS for all the communication to the web tier and encrypt the data at rest and in transit. * Create the web tier in the public subnet, and create the application and database tiers in the private subnet. Use HTTP for all the communication to the web tier and encrypt the data at rest and in transit. * Create the web tier in the public subnet, and create the application and database tiers in a private subnet. Use HTTP for all the communication to the web tier. There is no need to encrypt the data since it is already running in AWS.

*Create the web tier in a public subnet, and create the application and database tiers in the private subnet.* In this scenario, you need to put the database and application servers in the private subnet and only the web tier in the public subnet. HTTPS will make sure the data comes to the web server securely. Encryption of the data at rest and in transit will make sure that you have end-to-end security in the overall system. ------------------------- B is incorrect because you are putting the application server in the public subnet, and C is incorrect because you are using HTTP instead of HTTPS. D is incorrect because you are not encrypting the data.

*A team is planning to host data on the AWS Cloud. Following are the key requirements: a) Ability to store JSON documents b) High availability and durability. Select the ideal storage mechanism that should be employed to fit this requirement.* * Amazon EFS * Amazon Redshift * DynamoDB * AWS CloudFormation

*DynamoDB* Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. The data in DynamoDB is stored in JSON format, and hence it is the perfect data store for the requirement in question. DynamoDBMapper has a feature that allows you to save an object as a JSON document in a DynamoDB attribute. The mapper does the heavy lifting of converting the object into a JSON document and storing it in DynamoDB. DynamoDBMapper also takes care of loading the Java object from the JSON document when requested by the user.
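As a simple illustration of storing a JSON-style document as a DynamoDB item (the table name, key schema, and attributes below are assumptions for the example):

```python
import boto3

table = boto3.resource("dynamodb", region_name="us-east-1").Table("Documents")  # placeholder table

# A JSON document maps naturally onto a DynamoDB item (nested maps and lists).
table.put_item(
    Item={
        "DocumentId": "invoice-1001",           # partition key (assumed schema)
        "Customer": {"Name": "Acme Corp", "Country": "US"},
        "LineItems": [
            {"Sku": "A-1", "Quantity": 2},
            {"Sku": "B-7", "Quantity": 1},
        ],
        "Status": "PAID",
    }
)
```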

*Your company currently has data hosted in an Amazon Aurora MySQL DB. Since this data is critical, there is a need to ensure that it can be made available to another region in case of a disaster. How can this be achieved?* * Make a copy of the underlying EBS Volumes in the Amazon Cluster in another region. * Enable Multi-AZ for the Aurora database. * Creating a read replica of Amazon Aurora in another region. * Create an EBS Snapshot of the underlying EBS Volumes in the Amazon Cluster and then copy them to another region.

*Creating a read replica of Amazon Aurora in another region.* Read replicas in Amazon RDS for MySQL, MariaDB, PostgreSQL, and Oracle provide a complementary availability mechanism to Amazon RDS Multi-AZ Deployments. You can promote a read replica if the source DB instance fails. You can also replicate DB instances across AWS Regions as part of your disaster recovery strategy. This functionality complements the synchronous replication, automatic failure detection, and failover provided with Multi-AZ deployments. You can create an Amazon Aurora MySQL DB cluster as a Read Replica in a different AWS Region than the source DB cluster. Taking this approach can improve your disaster recovery capabilities, let you scale read operations into the region that is closer to your users, and make it easier to migrate from one region to another. *Note:* The question clearly states that *there is a need to ensure that it can be made available in another region in case of a disaster,* so Multi-AZ is not the solution here, since it is not region-wide.

*Your recent security review revealed a large spike in attempted logins to your AWS account with respect to sensitive data stored in S3. The data has not been encrypted and is susceptible to fraud if it were to be stolen. You've recommended AWS Key Management Service as a solution. Which of the following is true regarding how KMS operates?* * Only KMS generated keys can be used to encrypt or decrypt data. * Data is encrypted at rest. * KMS allows all users and roles use of the keys by default. * Data is decrypted in transit.

*Data is encrypted at rest.* Data is encrypted at rest, meaning it is encrypted once uploaded to S3. Encryption while in transit is handled by SSL or by using client-side encryption. ----------------------------------- Option A is incorrect, data can be encrypted/decrypted using AWS keys or keys provided by your company. Option C is incorrect, users are granted permissions explicitly, not by default by KMS. Option D is incorrect, data is not decrypted in transit (while moving to and from S3). Data is encrypted or decrypted within S3, and while in transit it can be protected using SSL.
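A minimal boto3 sketch of uploading an object encrypted at rest with a KMS key (the bucket, key name, and key alias are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# Encrypt the object at rest with a KMS key (SSE-KMS).
s3.put_object(
    Bucket="sensitive-reports",                  # placeholder bucket
    Key="q3/financials.pdf",
    Body=b"example report contents",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="alias/reports-key",             # placeholder key alias
)
```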

*You have both production and development based instances running on your VPC. It is required to ensure that the people responsible for the development instances do not have access to work on production instances for better security. Which of the following would be the best way to accomplish this using policies?* * Launch the development and production instances in separate VPCs and use VPC Peering. * Create an IAM Policy with a condition that allows access to only instances which are used for production or development. * Launch the development and production instances in different Availability Zones and use the Multi-Factor Authentication. * Define the tags on the Development and production servers and add a condition to the IAM Policy which allows access to specific tags.

*Define the tags on the Development and production servers and add a condition to the IAM Policy which allows access to specific tags.* You can easily add tags to define which instances are production and which ones are development instances. These tags can then be used while controlling access via an IAM Policy. *Note:* It can be done with the help of option B as well. However, the question is looking for the 'best way to accomplish this using policies'. By using option D, you reduce the number of different IAM policies needed for each instance.
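An illustrative sketch of such a tag-conditioned policy created with boto3 (the tag key/value, actions, and policy name are assumptions for the example):

```python
import json
import boto3

# Allow the development team to act only on instances tagged Environment=Development.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["ec2:StartInstances", "ec2:StopInstances", "ec2:RebootInstances"],
        "Resource": "arn:aws:ec2:*:*:instance/*",
        "Condition": {
            "StringEquals": {"ec2:ResourceTag/Environment": "Development"}
        },
    }],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="DevInstancesOnly",               # placeholder policy name
    PolicyDocument=json.dumps(policy_document),
)
```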

*You are working as an AWS Architect for a start-up company. It has a production website on AWS which is two-tier, with web servers in the front end & database servers in the back end. A third-party firm has been looking after the operations of these database servers. They need to access these database servers in private subnets on the SSH port. As per the standard operating procedure provided by the Security team, all access to these servers should be over a secure layer & should be logged. What will be the best solution to meet this requirement?* * Deploy Bastion hosts in Private Subnet * Deploy NAT Instance in Private Subnet * Deploy NAT Instance in Public Subnet * Deploy Bastion hosts in Public Subnet.

*Deploy Bastion hosts in Public Subnet* External users are unable to access instances in private subnets directly. To provide such access, we need to deploy Bastion hosts in public subnets. For the above requirement, third-party users will initiate a connection to the Bastion hosts in the public subnets & from there they will initiate SSH connections to the database servers in the private subnets. ----------------------------------- Option A is incorrect as Bastion hosts need to be in public subnets, not private subnets, since third-party users will be accessing these servers from the internet. Option B is incorrect as NAT instances are used to provide internet traffic to hosts in private subnets. Users from the internet will not be able to make SSH connections to hosts in private subnets using a NAT instance; NAT instances are always in public subnets. Option C is incorrect for the same reason: NAT instances provide outbound internet access for hosts in private subnets, not inbound SSH access from the internet.

*A global media firm is using AWS CodePipeline as an automation service for releasing new features to customers. All the code is uploaded to an Amazon S3 bucket. Changes in files stored in the Amazon S3 bucket should trigger AWS CodePipeline, which will further initiate AWS Elastic Beanstalk for deploying additional resources. Which of the following is an additional requirement that needs to be configured to trigger CodePipeline in a faster way?* * Enable periodic checks & create a Webhook which triggers pipeline once S3 bucket is updated. * Disable periodic checks & create an Amazon CloudWatch Events rule & AWS CloudTrail. * Enable periodic checks & create an Amazon CloudWatch Events rule & AWS CloudTrail trail. * Disable periodic checks & create a Webhook which triggers pipeline once S3 bucket is updated.

*Disable periodic checks & create an Amazon CloudWatch Events rule & AWS CloudTrail trail.* To automatically trigger the pipeline on changes in the source S3 bucket, an Amazon CloudWatch Events rule & an AWS CloudTrail trail must be applied. When there is a change in the S3 bucket, the event is captured by AWS CloudTrail & an Amazon CloudWatch Events rule is then used to start the pipeline. This is the recommended method & is faster; periodic checks should be disabled to have event-based triggering of CodePipeline. ----------------------------------- Option A is incorrect as Webhooks are used to trigger a pipeline when the source is a GitHub repository; also, periodic checks are a slower way to trigger CodePipeline. Option C is incorrect as periodic checks are not a faster way to trigger CodePipeline. Option D is incorrect as Webhooks are used to trigger a pipeline when the source is a GitHub repository.

*A company is developing a web application to be hosted in AWS. This application needs a data store for session data. As an AWS Solution Architect, which of the following would you recommend as an ideal option to store session data?* (Choose 2) * CloudWatch * DynamoDB * Elastic Load Balancing * ElastiCache * Storage Gateway

*DynamoDB* *ElastiCache* Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale. It is a fully managed cloud database and supports both document and key-value store models. Its flexible data model, reliable performance, and automatic scaling of the throughput capacity, makes it a great fit for mobile, web, gaming, ad tech, IoT, and many other applications. ElastiCache is a web service that makes it easy to set up, manage, and scale a distributed in-memory data store or cache environment in the cloud. It provides a high-performance, scalable, and cost-effective caching solution, while removing the complexity associated with deploying and managing a distributed cache environment. ----------------------------------- AWS CloudWatch offers cloud monitoring services for customers of AWS resources. AWS Storage Gateway is a hybrid storage service that enables your on-premise applications to seamlessly use AWS cloud storage. AWS Elastic Load Balancing automatically distributes incoming application traffic across multiple targets.

*You are deploying an order management application in AWS, and for that you want to leverage Auto Scaling along with EC2 servers. You don't want to store the user session information on the EC2 servers because if even one user is connected to the application, you won't be able to scale down. What services can you use to store the session information?* (Choose 2.) * S3 * Lambda * DynamoDB * ElastiCache

*DynamoDB* *ElastiCache* You can store the session information in either DynamoDB or ElastiCache. DynamoDB is a NoSQL database that provides single-digit millisecond latency, and using ElastiCache you can deploy, operate, and scale popular open source in-memory data stores. Both are ideal candidates for storing user session information. ----------------------------------- S3 is an object store, and Lambda lets you run code without provisioning or managing servers.
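A rough sketch of the DynamoDB variant, using a TTL attribute so stale sessions expire on their own (the table name, key schema, and attribute names are assumptions for the example):

```python
import time
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Expire stale sessions automatically so the table does not grow unbounded.
dynamodb.update_time_to_live(
    TableName="Sessions",                        # placeholder table
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "ExpiresAt"},
)

table = boto3.resource("dynamodb", region_name="us-east-1").Table("Sessions")
table.put_item(
    Item={
        "SessionId": "example-session-token",    # placeholder session ID
        "UserId": "user-42",
        "Cart": ["sku-1", "sku-9"],
        "ExpiresAt": int(time.time()) + 3600,    # epoch seconds, 1 hour from now
    }
)
```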

*An application requires an EC2 Instance for continuous batch processing activities requiring a maximum data throughput of 500MiB/s. Which of the following is the best storage option for this?* * EBS IOPS * EBS SSD * EBS Throughput Optimized HDD * EBS Cold Storage

*EBS Throughput Optimized HDD* Throughput Optimized HDD (st1) volumes deliver up to 500 MiB/s of throughput per volume at a low cost, which makes them well suited to large, sequential batch-processing workloads like this one.

*Which of the following statements are true?* * EBS Volumes cannot be attached to an EC2 instance in another AZ. * EBS Volumes can be attached to an EC2 instance in another AZ. * EBS Volumes can be attached to multiple instances simultaneously. * EBS Volumes are ephemeral.

*EBS Volumes cannot be attached to an EC2 instance in another AZ.* The only true statement is that EBS Volumes cannot be attached to an EC2 instance in another AZ. The rest are false: EBS volumes are persistent rather than ephemeral, and a volume can only be attached to one instance at a time, in the same AZ.

*Which of the following statements are true for an EBS volume?* (Choose two.) * EBS replicates within its availability zone to protect your applications from component failure. * EBS replicates across different availability zones to protect your applications from component failure. * EBS replicates across different regions to protect your applications from component failure. * Amazon EBS volumes provide 99.999 percent availability.

*EBS replicates within its availability zone to protect your applications from component failure.* *Amazon EBS volumes provide 99.999 percent availability.* EBS volumes replicate within their availability zone to protect your applications from component failure, and they provide 99.999 percent availability. ------------------------- EBS cannot replicate to a different AZ or region.

*You are planning to run a database on an EC2 instance. You know that the database is pretty heavy on I/O. The DBA told you that you would need a minimum of 8,000 IOPS. What is the storage option you should choose?* * EBS volume with magnetic hard drive * Store all the data files in the ephemeral storage of the server * EBS volume with provisioned IOPS * EBS volume with general-purpose SSD

*EBS volume with provisioned IOPS* The magnetic hard drive won't give you the IOPS number you are looking for. You should not put the data files in the ephemeral drives because as soon as the server goes down, you will lose all the data. For a database, data is the most critical component, and you can't afford to lose that. The provisioned IOPS will give you the desired IOPS that your database needs. You can also run the database with general-purpose SSD, but there is no guarantee that you will always get the 8,000 IOPS number that you are looking for. Only PIOPS will provide you with that capacity.
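A minimal boto3 sketch of creating such a volume (the AZ and volume size below are placeholders; the IOPS figure comes from the question):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Provisioned IOPS (io1) lets you specify a guaranteed IOPS figure,
# subject to the size-to-IOPS ratio limits for the volume type.
ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=500,                 # GiB (placeholder size)
    VolumeType="io1",
    Iops=8000,                # the DBA's stated requirement
)
```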

*Which of the following statements are true about containers on AWS?* (Choose 5) * ECR can be used to store Docker images. * You must use ECS to manage running Docker containers in AWS. * To use private images in ECS, you must refer to Docker images from ECR. * You can use the ECS Agent without needing to use ECS. * To be able to use ECS, you must use the ECS Agent. * You can install and manage Kubernetes on AWS, yourself. * ECS allows you to control the scheduling and placement of your containers and tasks. * You can have AWS manage Kubernetes for you.

*ECR can be used to store Docker images.* *To be able to use ECS, you must use the ECS Agent.* *You can install and manage Kubernetes on AWS, yourself.* *ECS allows you to control the scheduling and placement of your containers and tasks.* *You can have AWS manage Kubernetes for you.* You definitely can install Kubernetes yourself. ECR is the Elastic Container Registry (formerly EC2 Container Registry) and is used to store Docker images. ECS doesn't work without the ECS Agent, and you can customize a lot of things in ECS, including the scheduling and placement of containers and tasks.

*Which of the following are true statements?* (Choose two.) * ELB can distribute traffic across multiple regions. * ELB can distribute across multiple AZs but not across multiple regions. * ELB can distribute across multiple AZs. * ELB can distribute traffic across multiple regions but not across multiple AZs.

*ELB can distribute across multiple AZs but not across multiple regions.* *ELB can distribute across multiple AZs.* ELB can span multiple AZs within a region. It cannot span multiple regions.

*You are planning to deploy a web application via Nginx. You don't want the overhead of managing the infrastructure beneath. What AWS service should you choose for this?* * EC2 Server * RDS * Elastic Beanstalk * AWS OpsWork

*Elastic Beanstalk* Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS. ----------------------------------- If you use an EC2 server, you have to manage the server yourself. RDS is the relational database offering of Amazon. AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet.

*Your company uses KMS to fully manage the master keys and perform encryption and decryption operations on your data and in your applications. As an additional level of security, you now recommend AWS rotate your keys. What is your company's responsibility after enabling this additional feature?* * Enable AWS KMS to rotate keys and KMS will manage all encrypt/decrypt actions using the appropriate keys. * Your company must instruct KMS to re-encrypt all data in all services each time a new key is created. * You have 30 days to delete old keys after a new one is rotated in. * Your company must create its own keys and import them into KMS to enable key rotation.

*Enable AWS KMS to rotate keys and KMS will manage all encrypt/decrypt actions using the appropriate keys.* KMS will rotate keys annually and use the appropriate keys to perform cryptographic operations. ----------------------------------- Option B is incorrect, this is not necessary as KMS is a managed service and will keep old keys and perform operations based on the appropriate key. Options C & D are incorrect, and are not requirements of KMS.

*A retailer exports data daily from its transactional databases into an S3 bucket in the Sydney region. The retailer's Data Warehousing team wants to import this data into an existing Amazon Redshift cluster in their VPC at Sydney. Corporate security policy mandates that data can only be transported within a VPC. What combination of the following steps will satisfy the security policy?* (Choose 2) * Enable Amazon Redshift Enhanced VPC Routing. * Create a Cluster Security Group to allow the Amazon Redshift cluster to access Amazon S3. * Create a NAT gateway in a public subnet to allow the Amazon Redshift cluster to access Amazon S3. * Create and configure an Amazon S3 VPC endpoint.

*Enable Amazon Redshift Enhanced VPC Routing.* *Create and configure an Amazon S3 VPC endpoint.* Amazon Redshift Enhanced VPC Routing forces the cluster's traffic through VPC resources. Redshift will not use the S3 VPC endpoint without Enhanced VPC Routing enabled, so neither option satisfies the scenario on its own; both must be selected. A NAT gateway would still route traffic outside the VPC, so it does not meet the security policy. VPC Endpoints enable you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an IGW. An S3 VPC endpoint is a feature that lets you make even better use of VPC and S3 together.
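A rough boto3 sketch of the two steps (the cluster name, VPC ID, and route table ID are placeholders, not values from the question):

```python
import boto3

# Force COPY/UNLOAD traffic through the VPC instead of the public network.
redshift = boto3.client("redshift", region_name="ap-southeast-2")
redshift.modify_cluster(
    ClusterIdentifier="dw-cluster",              # placeholder cluster name
    EnhancedVpcRouting=True,
)

# Gateway endpoint so S3 traffic stays on the AWS network inside the VPC.
ec2 = boto3.client("ec2", region_name="ap-southeast-2")
ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",               # placeholder VPC ID
    ServiceName="com.amazonaws.ap-southeast-2.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],     # placeholder route table
)
```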

*A company currently hosts a Redshift cluster in AWS. For security reasons, it should be ensured that all traffic from and to the Redshift cluster does not go through the Internet. Which of the following features can be used to fulfill this requirement in an efficient manner?* * Enable Amazon Redshift Enhanced VPC Routing. * Create a NAT Gateway to route the traffic. * Create a NAT Instance to route the traffic. * Create the VPN Connection to ensure traffic does not flow through the Internet.

*Enable Amazon Redshift Enhanced VPC Routing.* When you use Amazon Redshift Enhanced VPC Routing, Amazon Redshift forces all COPY and UNLOAD traffic between your cluster and your data repositories through your Amazon VPC. If Enhanced VPC Routing is not enabled, Amazon Redshift routes traffic through the Internet, including traffic to other services within the AWS network.

*Your company currently has a web distribution hosted using the AWS CloudFront service. The IT Security department has confirmed that the application using this web distribution now falls under the scope of the PCI compliance. What are the possible ways to meet the requirement?* (Choose 2) * Enable CloudFront access logs. * Enable Cache in CloudFront. * Capture requests that are sent to the CloudFront API. * Enable VPC Flow Logs

*Enable CloudFront access logs.* *Capture requests that are sent to the CloudFront API.* If you run PCI or HIPAA-compliant workloads based on the AWS Shared Responsibility Model, we recommend that you log your CloudFront usage data for the last 365 days for future auditing purposes. ----------------------------------- Option B helps to reduce latency. Option D is incorrect, VPC flow logs capture information about the IP traffic going to and from network interfaces in the VPC but not for CloudFront.

*A redshift cluster currently contains 60 TB of data. There is a requirement that a disaster recovery site is put in place in a region located 600km away. Which of the following solutions would help ensure that this requirement is fulfilled?* * Take a copy of the underlying EBS volumes to S3, and then do Cross-Region Replication. * Enable Cross-Region snapshots for the Redshift Cluster. * Create a CloudFormation template to restore the Cluster in another region. * Enable Cross Availability Zone snapshots for the Redshift Cluster.

*Enable Cross-Region snapshots for the Redshift Cluster.* You can configure cross-regional snapshots when you want Amazon Redshift to automatically copy snapshots (automated or manual) to another region for backup purposes.
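A minimal boto3 sketch of turning this on (the cluster name, destination region, and retention period are illustrative placeholders):

```python
import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

# Automatically copy automated and manual snapshots to the DR region.
redshift.enable_snapshot_copy(
    ClusterIdentifier="analytics-cluster",       # placeholder cluster name
    DestinationRegion="us-west-2",
    RetentionPeriod=7,                           # days to keep copied snapshots
)
```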

*Your company has a set of resources defined in AWS. These resources consist of applications hosted on EC2 Instances. Data is stored on EBS volumes and S3. The company mandates that all data should be encrypted at rest. How can you achieve this?* (Choose 2) * Enable SSL with the underlying EBS volumes * Enable EBS Encryption * Make sure that data is transmitted from S3 via HTTPS * Enable S3 server-side encryption

*Enable EBS Encryption* *Enable S3 server-side encryption* Amazon EBS encryption offers a simple encryption solution for your EBS volumes without the need to build, maintain, and secure your own key management infrastructure. Server-side encryption protects data at rest. Server-side encryption with Amazon S3-managed encryption keys (SSE-S3) uses strong multi-factor encryption. ----------------------------------- Options A and C are incorrect since these have to do with encryption of data in transit and not encryption of data at rest.

*The local route table in the VPC allows which of the following?* * So that all the instances running in different subnet within a VPC can communicate to each other * So that only the traffic to the Internet can be routed * So that multiple VPCs can talk with each other * So that an instance can use the local route and talk to the Internet

*So that all the instances running in different subnet within a VPC can communicate to each other* The traffic to the Internet is routed via the Internet gateway. Multiple VPCs can talk to each other via VPC peering.

*A company currently storing a set of documents in the AWS Simple Storage Service is worried about the potential loss if these documents are ever deleted. Which of the following can be used to ensure protection from loss of the underlying documents in S3?* * Enable Versioning for the underlying S3 bucket. * Copy the bucket data to an EBS Volume as a backup. * Create a Snapshot of the S3 bucket. * Enable an IAM Policy which does not allow deletion of any document from the S3 bucket.

*Enable Versioning for the underlying S3 bucket.* *Enable an IAM Policy which does not allow deletion of any document from the S3 bucket.* Versioning is enabled at the bucket level and can be used to recover prior versions of an object. We can also prevent deletion of objects from the S3 bucket by writing an IAM policy.
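A rough boto3 sketch of both protections, here using a bucket policy as the deny mechanism (the bucket name and the blanket deny statement are assumptions for the example):

```python
import json
import boto3

s3 = boto3.client("s3")
bucket = "corporate-documents"                   # placeholder bucket name

# Keep every version of every object so an accidental delete is recoverable.
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# Additionally deny DeleteObject on the bucket contents.
deny_delete = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:DeleteObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
    }],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(deny_delete))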

*You want to launch a copy of a Redshift cluster to a different region. What is the easiest way to do this?* * Create a cluster manually in a different region and load all the data * Extend the existing cluster to a different region * Use third-party software like Golden Gate to replicate the data * Enable a cross-region snapshot and restore the database from the snapshot to a different region

*Enable a cross-region snapshot and restore the database from the snapshot to a different region.* Loading the data manually will be too much work. You can't extend the cluster to a different region. A Redshift cluster is specific to a particular AZ. It can't go beyond an AZ as of writing this book. Using Golden Gate is going to cost a lot, and there is no need for it when there is an easy solution available.

*You run a security company which stores highly sensitive PDFs on S3 with versioning enabled. To ensure MAXIMUM protection of your objects to protect against accidental deletion, what further security measure should you consider using?* * Setup a CloudWatch alarm so that if an object in S3 is deleted, an alarm will send an SNS notification to your phone. * Use server-side encryption with S3 - KMS. * Enable versioning with MFA Delete on your S3 bucket. * Configure the application to use SSL endpoints using the HTTPS protocol.

*Enable versioning with MFA Delete on your S3 bucket.* If you enable Versioning with MFA Delete on your Amazon S3 bucket, two forms of authentication are required to permanently delete an object version: your AWS account credentials and a valid six-digit code and serial number from an authentication device in your physical possession.

*A customer is hosting their company website on a cluster of web servers that are behind a public-facing load balancer. The web application interfaces with an AWS RDS database. The management has specified that the database needs to be available in case of a hardware failure on the primary database. The secondary needs to be made available in the least amount of time. Which of the following would you opt for?* * Made a snapshot of the database * Enabled Multi-AZ failover * Increased the database instance size * Create a read replica

*Enabled Multi-AZ failover.* Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby (or to a read replica in the case of Amazon Aurora), so that you can resume database operations as soon as the failover is complete. ----------------------------------- Options A and D are incorrect since even though they can be used to recover a database, it would just take more time than just enabling Multi-AZ. Option C is incorrect since this will not help the cause.

*A company has setup some EC2 Instances in a VPC with the default Security group and NACL settings. They want to ensure that IT admin staff can connect to the EC2 Instance via SSH. As an architect, what would you ask the IT admin team to do to ensure that they can connect to the EC2 Instance from the Internet?* (Choose 2) * Ensure that the Instance has a Public or Elastic IP * Ensure that the Instance has a Private IP * Ensure to modify the Security groups * Ensure to modify the NACL rule

*Ensure that the Instance has a Public or Elastic IP* *Ensure to modify the Security groups* To enable access to or from the internet for instances in a VPC subnet, you must do the following. - Attach an internet gateway to your VPC - Ensure that your subnet's route table points to the internet gateway - Ensure that instances in your subnet have a globally unique IP address (public IPv4 address, Elastic IP address, or IPv6 address) - Ensure that your network access control and security group rules allow the relevant traffic to flow to and from your instance. ----------------------------------- Option B is incorrect since the Private IP will always be created, and would not be used to connect from the internet. Option D is incorrect since the default NACL rules will allow all traffic.
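A minimal boto3 sketch of the two pieces the admins would configure (the instance ID, security group ID, and admin IP are placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Give the instance a public, static address.
allocation = ec2.allocate_address(Domain="vpc")
ec2.associate_address(
    InstanceId="i-0123456789abcdef0",            # placeholder instance ID
    AllocationId=allocation["AllocationId"],
)

# Open SSH only from the IT admin workstation, not from 0.0.0.0/0.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",              # placeholder security group
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
        "IpRanges": [{"CidrIp": "203.0.113.10/32"}],   # example admin IP
    }],
)
```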

*A Solutions Architect designing a solution to store and archive corporate documents, has determined Amazon Glacier as the right choice of solution. An important requirement is that the data must be delivered within 10 minutes of a retrieval request. Which feature in Amazon Glacier can help meet this requirement?* * Vault Lock * Expedited retrieval * Bulk retrieval * Standard retrieval

*Expedited retrieval* Expedited retrievals allow you to access data in 1-5 minutes for a flat rate of $0.03 per GB retrieved. Expedited retrievals let you quickly access your data when occasional urgent requests for a subset of archives are required. ----------------------------------- The other two options are Standard retrievals (3-5 hours retrieval time) and Bulk retrievals, which are the cheapest option (5-12 hours retrieval time).
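A rough boto3 sketch of requesting an expedited retrieval job against a Glacier vault (the vault name and archive ID are placeholders):

```python
import boto3

glacier = boto3.client("glacier", region_name="us-east-1")

# Request an expedited archive retrieval (typically 1-5 minutes).
glacier.initiate_job(
    accountId="-",                               # '-' means the account owning the credentials
    vaultName="corporate-archive",               # placeholder vault
    jobParameters={
        "Type": "archive-retrieval",
        "ArchiveId": "EXAMPLE-ARCHIVE-ID",       # placeholder archive ID
        "Tier": "Expedited",
    },
)
```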

*Your company is planning on hosting a set of EC2 Instances in AWS. The Instances would be divided into subnets, one for the web tier and the other for the database tier. Which of the following would be needed to ensure that traffic can flow between the Instances in each subnet?* * Ensure that the route tables have the desired routing between the subnets. * Ensure that the Security Groups have the required rules defined to allow traffic. * Ensure that all instances have a public IP for communication. * Ensure that all subnets are defined as public subnets.

*Ensure that the Security Groups have the required rules defined to allow traffic.* A security group acts as a virtual firewall for your instance in a VPC. You can assign up to five security groups to the instance. Security groups act at the instance level, not the subnet level. Therefore, each instance in a subnet in your VPC could be assigned to a different set of security groups. If you don't specify a particular group at launch time, the instance is automatically assigned to the default security group for the VPC. ----------------------------------- Option A is invalid since the route tables would already have the required rules to route traffic between subnets in a VPC. Option C is invalid since the instances would communicate with each other on the private IP. Option D is invalid since the database should be in the private subnet and not the public subnet.

*A customer planning on hosting an AWS RDS instance needs to ensure that the underlying data is encrypted. How can this be achieved?* (Choose 2) * Ensure that the right instance class is chosen for the underlying instances. * Choose only General Purpose SSD, because only this type supports encryption of data. * Encrypt the database during creation. * Enable encryption of the underlying EBS Volume.

*Ensure that the right instance class is chosen for the underlying instances.* *Encrypt the database during creation.* Encryption for the database can be done during the creation of the database. Also, you need to ensure that the underlying instance type supports DB encryption. Encryption at rest is not available for DB instances running SQL Server Express Edition.
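An illustrative boto3 sketch showing the relevant creation-time parameters (all identifiers, sizing, and the KMS alias are placeholders, not values from the question):

```python
import boto3

rds = boto3.client("rds")

# Encryption at rest is selected at creation time and requires an instance
# class that supports it; it cannot be enabled on an existing unencrypted instance.
rds.create_db_instance(
    DBInstanceIdentifier="customers-db",   # placeholder identifier
    Engine="mysql",
    DBInstanceClass="db.m5.large",         # class that supports encryption
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="change-me-please",
    StorageEncrypted=True,                 # encrypt the underlying storage
    KmsKeyId="alias/rds-key",              # optional: customer-managed key
)
```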

*You have an application hosted on AWS consisting of EC2 Instances launched via an Auto Scaling Group. You notice that the EC2 Instances are not scaling out on demand. What checks can be done to ensure that the scaling occurs as expected?* * Ensure the right metrics are being used to trigger the scale out. * Ensure that ELB health checks are being used. * Ensure that the instances are placed across multiple Availability Zones. * Ensure that the instances are placed across multiple regions.

*Ensure that the right metrics are being used to trigger the scale out.* If your scaling events are not based on the right metrics and do not have the right threshold defined, then the scaling will not occur as you want it to happen.
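For example, a target tracking policy tied to a demand-reflecting metric can be sketched with boto3 as follows (the ASG name, policy name, and 60% target are assumptions for the example):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Scale out/in on a metric that actually reflects demand, e.g. average CPU.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",              # placeholder ASG name
    PolicyName="cpu-target-60",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,
    },
)
```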

*A customer has an instance hosted in the AWS Public Cloud. The VPC and subnet used to host the instance have been created with the default settings for the Network Access Control Lists. An IT Administrator needs to be provided secure access to the underlying instance. How can this be accomplished?* * Ensure the Network Access Control Lists allow Inbound SSH traffic from the IT Administrator's Workstation. * Ensure the Network Access Control Lists allow Outbound SSH traffic from the IT Administrator's Workstation. * Ensure that the security group allows Inbound SSH traffic from the IT Administrator's Workstation. * Ensure that the security group allows Outbound SSH traffic from the IT Administrator's Workstation.

*Ensure that the security group allows Inbound SSH traffic from the IT Administrator's Workstation.* Since Security Groups are stateful, we do not have to configure outbound traffic; what is allowed in is allowed back out as the response. *Note:* The default network ACL is configured to allow all traffic to flow in and out of the subnets to which it is associated. Since the question does not mention that it is a custom VPC, we assume it to be the default one. Since the IT administrator needs to be provided SSH access to the instance, the traffic would be inbound to the instance. Security Groups being stateful means that the return response to an allowed inbound request is automatically allowed, and vice versa. Allowing only outbound traffic would mean the instance could SSH into the IT admin's workstation and receive the response, but it does not mean the IT admin would be able to SSH into the instance; SSH does not work like that. To allow SSH, you need to allow inbound SSH access over port 22.

*A company currently uses Redshift in AWS. The Redshift cluster is required to be used in a cost-effective manner. As an architect, which of the following would you consider to ensure cost-effectiveness?* * Use Spot Instances for the underlying nodes in the cluster. * Ensure that unnecessary manual snapshots of the cluster are deleted. * Ensure VPC Enhanced Routing is enabled. * Ensure that CloudWatch metrics are disabled.

*Ensure that unnecessary manual snapshots of the cluster are deleted.* Amazon Redshift provides free storage for snapshots that is equal to the storage capacity of your cluster until you delete the cluster. After you reach the free snapshot storage limit, you are charged for any additional storage at the normal rate. Because of this, you should evaluate how many days you need to keep automated snapshots and configure their retention period accordingly, and delete any manual snapshots that you no longer need. *Note:* Redshift pricing is based on the following elements: compute node hours, backup storage, data transfer, and data scanned. There is no data transfer charge for data transferred to or from Amazon Redshift and Amazon S3 within the same AWS Region; for all other data transfers into and out of Amazon Redshift, you will be billed at standard AWS data transfer rates. ----------------------------------- There is no additional charge for using Enhanced VPC Routing. *You might incur additional data transfer charges for certain operations, such as UNLOAD to Amazon S3 in a different region or COPY from Amazon EMR or SSH with public IP addresses.* Enhanced VPC Routing itself does not incur any cost, but any UNLOAD operation to a different region will; with or without Enhanced VPC Routing, data transfer to a different region incurs that cost. With storage, increasing your backup retention period or taking additional snapshots increases the backup storage consumed by your data warehouse. There is no additional charge for backup storage up to 100% of your provisioned storage for an active data warehouse cluster; any amount of storage exceeding this limit does incur a cost. *Spot Instances are not an option for Redshift.* Amazon Redshift pricing options include: On-Demand pricing: no upfront costs - you simply pay an hourly rate based on the type and number of nodes in your cluster. Amazon Redshift Spectrum pricing: enables you to run SQL queries directly against all of your data, out to exabytes, in Amazon S3 - you simply pay for the number of bytes scanned. Reserved Instance pricing: enables you to save up to 75% over On-Demand rates by committing to using Redshift for a 1 or 3-year term.
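A rough boto3 sketch of cleaning up unneeded manual snapshots (the `adhoc-` naming convention used to decide what to delete is an assumption for the example):

```python
import boto3

redshift = boto3.client("redshift")

# List manual snapshots and delete the ones that are no longer needed.
snapshots = redshift.describe_cluster_snapshots(SnapshotType="manual")["Snapshots"]
for snap in snapshots:
    if snap["SnapshotIdentifier"].startswith("adhoc-"):   # assumed naming convention
        redshift.delete_cluster_snapshot(
            SnapshotIdentifier=snap["SnapshotIdentifier"]
        )
```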

*You are in the process of designing an archive document solution for your company. The solution must be cost-effective; therefore, you have selected Glacier. The business wants to have the ability to get a document within 15 minutes of a request. Which feature of Amazon Glacier will you choose?* * Expedited retrieval. * Standard retrieval. * Glacier is not the correct solution; you need Amazon S3. * Bulk retrieval.

*Expedited retrieval* Since you are looking for an archival and cost-effective solution, Amazon Glacier is the right choice. By using the expedited retrieval option, you should be able to get a document within about five minutes, which meets the 15-minute business objective. ------------------------- Standard and bulk retrievals both take much longer; therefore, you won't be able to meet the business objective with them. If you choose Amazon S3, the cost will go up.

*A company has a set of EC2 instances hosted on the AWS Cloud. These instances form a web server farm which services a web application accessed by users on the Internet. Which of the following would help make this architecture more fault tolerant?* (Choose 2) * Ensure the instances are placed in separate Availability Zones. * Ensure the instances are placed in separate regions. * Use an AWS Load Balancer to distribute the traffic. * Use Auto Scaling to distribute the traffic.

*Ensure the instances are placed in separate Availability Zones.* *Use an AWS Load Balancer to distribute the traffic.* A load balancer distributes incoming application traffic across multiple EC2 Instances in multiple Availability Zones. This increases the fault tolerance of your applications. Elastic Load Balancing detects unhealthy instances and routes traffic only to healthy instances. *Note:* Autoscaling will not create an ELB automatically, you need to manually create it in the same region as the AutoScaling group. Once you create an ELB, and attach the load balancer to the autoscaling group, it automatically registers the instances in the group and distributes incoming traffic across the instances. You can automatically increase the size of your Auto Scaling group when demand goes up and decrease it when demand goes down. As the Auto Scaling group adds and removes EC2 instances, you must ensure that the traffic for your application is distributed across all of your EC2 instances. *The Elastic Load Balancing service automatically routes incoming web traffic across such a dynamically changing number of EC2 instances.* Your load balancer acts as a single point of contact for all incoming traffic to the instances in your Auto Scaling group. *To use a load balancer with your Auto Scaling group, create the load balancer and then attach it to the group.*
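A minimal boto3 sketch of attaching a load balancer to an existing Auto Scaling group (the group and load balancer names are placeholders; for an ALB/NLB you would attach target groups instead):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Attach a Classic Load Balancer to the Auto Scaling group so instances
# launched across Availability Zones are registered automatically.
autoscaling.attach_load_balancers(
    AutoScalingGroupName="web-asg",              # placeholder ASG name
    LoadBalancerNames=["web-clb"],               # placeholder load balancer
)
```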

*A VPC has been setup with a subnet and an internet gateway. The EC2 instance is setup with a public IP but you are still not able to connect to it via the Internet. The right security groups are also in place. What should you do to connect to the EC2 Instance from the Internet?* * Set an Elastic IP Address to the EC2 Instance. * Set a Secondary Private IP Address to the EC2 Instance. * Ensure the right route entry is there in the route table. * There must be some issue in the EC2 Instance. Check the system logs.

*Ensure the right route entry is there in the Route table.* You have to ensure that the Route table has an entry to the Internet Gateway because this is required for instances to communicate over the Internet. ----------------------------------- Option A is incorrect, Since you already have a public IP assigned to the instance, this should have been enough to connect to the Internet. Option B is incorrect, Private IPs cannot be accessed from the Internet. Option D is incorrect, the route table is causing the issue and not the system.
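As a small boto3 illustration, the missing route entry would look roughly like this (the route table and Internet gateway IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Send Internet-bound traffic from the subnet's route table to the IGW.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",        # placeholder route table
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId="igw-0123456789abcdef0",           # placeholder Internet gateway
)
```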

*A company hosts a popular web application that connects to an Amazon RDS MySQL DB instance running in a private VPC subnet created with default ACL settings. The web servers must be accessible only to customers on an SSL connection and the database should only be accessible to the web servers in a public subnet. As an architect, which of the following would you not recommend for such an architecture?* * Create a separate web server and database server security group. * Ensure the web server security group allows HTTPS port 443 inbound traffic from anywhere (0.0.0.0/0) and apply it to the web servers. * Ensure the web server security group allows MySQL port 3306 inbound traffic from anywhere (0.0.0.0/0) and apply it to the web servers. * Ensure the DB server security group allows MySQL port 3306 inbound and specify the source as the web server security group.

*Ensure the web server security group allows MySQL port 3306 inbound traffic from anywhere (0.0.0.0/0) and apply it to the web servers.* The question describes a scenario where the database servers should only be accessible to web servers in the public subnet, and asks which of the options is *not* a recommended architecture for that scenario. Option C is the answer, as it allows all incoming traffic from the internet to the database port, which is not acceptable as per the architecture. A similar setup is given in the AWS documentation. 1) To ensure that traffic can flow into your web server from anywhere over a secure connection, you need to allow inbound traffic on port 443. 2) You then need to ensure that traffic can flow from the web servers to the database server via the database security group, since the requirement states that the database servers should only be accessible to the web servers in the public subnet. ----------------------------------- In Option D, the database server's security group allows inbound traffic on port 3306 with the source set to the web server security group, which means request traffic from the web servers is allowed to the DB server; since security groups are stateful, the response is allowed from the DB server back to the web servers. This permits the required communication, so Option D is a correct design, but it is the wrong choice for this question, as you have to choose the incorrect option.
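A rough boto3 sketch of the recommended Option D style rule, where the DB security group references the web tier's security group rather than 0.0.0.0/0 (both group IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Allow MySQL only from the web tier's security group, not from the internet.
ec2.authorize_security_group_ingress(
    GroupId="sg-0db0123456789abcd",              # placeholder DB tier security group
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": "sg-0web123456789abcd"}],  # web tier SG
    }],
)
```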

*Your company has a set of VPC's. There is now a requirement to establish communication across the Instances in the VPC's. Your supervisor has asked you to implement the VPC peering connection. Which of the following considerations would you keep in mind for VPC peering?* (Choose 2) * Ensuring that the VPC's don't have overlapping CIDR blocks. * Ensuring that no on-premises communication is required via transitive routing * Ensuring that the VPC's only have public subnets for communication * Ensuring that the VPC's are created in the same region

*Ensuring that the VPC's don't have overlapping CIDR blocks.* *Ensuring that no on-premises communication is required via transitive routing* You cannot create a VPC peering connection between VPCs with matching or overlapping IPv4 CIDR blocks. ----------------------------------- Option C is incorrect since it is not necessary that VPC's only contain public subnets. Option D is incorrect since it is not necessary that VPC's are created in the same region. *Note:* AWS now supports VPC Peering across different regions.
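A rough boto3 sketch of requesting and accepting a peering connection, including the cross-region case (all VPC IDs and regions are placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Request a peering connection; the two VPCs' CIDR blocks must not overlap.
peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-0aaa1111bbbb2222c",               # placeholder requester VPC
    PeerVpcId="vpc-0ddd3333eeee4444f",           # placeholder accepter VPC
    PeerRegion="us-west-2",                      # omit for same-region peering
)

# The owner of the accepter VPC then accepts it from the peer region/account.
peer_ec2 = boto3.client("ec2", region_name="us-west-2")
peer_ec2.accept_vpc_peering_connection(
    VpcPeeringConnectionId=peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]
)
```

Routes to the peer CIDR still have to be added to each VPC's route tables; peering does not provide transitive routing.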

*You are developing an application, and you have associated an EIP with the application tier, which is an EC2 instance. Since you are in the development cycle, you have to frequently stop and start the application server. What is going to happen to the EIP when you start/stop the application server?* * Every time the EC2 instance is stopped, the EIP is de-associated when you start it. * Every time the EC2 instance is stopped, the EIP is de-associated, and you must manually attach it whenever it is started again. * Even after the shutdown, the EIP remains associated with the instance, so no action is needed. * After shutting down the EC2 instance, the EIP is released from your account, and you have to re-request it before you can use it.

*Even after the shutdown, the EIP remains associated with the instance, so no action is needed.* Even if you stop the instance, the EIP remains associated with it. ----------------------------------- A, B, and D are incorrect. When you terminate an instance, the EIP goes back to the pool.

*A company has decided to use Amazon Glacier to store all of their archived documents. The management has now issued an update that documents stored in Glacier need to be accessed within a time span of 20 minutes for an IT audit requirement. Which of the following would allow for documents stored in Amazon Glacier to be accessed within the required time frame after the retrieval request?* * Vault Lock * Expedited retrieval * Bulk retrieval * Standard retrieval

*Expedited retrieval* Expedited retrievals allow you to quickly access your data when occasional urgent requests for a subset of archives are required.

*Which of the following Route 53 policies allow you to* *a) route to a second resource if the first is unhealthy* *b) route data to resources that have better performance?* * Geoproximity Routing and Geolocation Routing * Geolocation Routing and Latency-based Routing * Failover Routing and Simple Routing * Failover Routing and Latency-based Routing

*Failover Routing and Latency-based Routing* Failover Routing and Latency-based Routing are the only two correct options, as they consider routing data based on whether the resource is unhealthy or whether one set of resources is more performant than another. Any answer containing location-based routing (Geoproximity and Geolocation) cannot be correct in this case, as these types only consider where the client or resources are located before routing the data. They do not take into account whether a resource is online or slow. Simple Routing can also be discounted as it does not take into account the state of the resources.

*Which of the following are based on temporary security tokens?* (Choose two.) * Amazon EC2 roles * Federation * Username password * Using AWS STS

*Federation* *Using AWS STS* The username password is not a temporary security token.
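A minimal boto3 sketch of obtaining temporary credentials from STS by assuming a role (the role ARN and session name are placeholders):

```python
import boto3

sts = boto3.client("sts")

# Exchange long-term credentials for short-lived ones scoped to a role.
response = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/ReadOnlyAnalyst",   # placeholder role
    RoleSessionName="analyst-session",
    DurationSeconds=3600,
)
creds = response["Credentials"]          # AccessKeyId, SecretAccessKey, SessionToken

# Use the temporary credentials for subsequent calls.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```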

*For what workload should you consider Elastic Beanstalk?* (Choose two.) * For hosting a relational database * For hosting a NoSQL database * For hosting an application * For creating a website

*For hosting an application* *For creating a website* Elastic Beanstalk is used for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS. You can also host your web site using Elastic Beanstalk. ------------------------- Hosting a relational or NoSQL database is not the right use case for Elastic Beanstalk.

*You are in the process of designing a three-tier architecture for a company. The company wants all the components to be redundant, which means all the EC2 servers need to be redundant in different AZs. You are planning to put the web server in a public subnet and the application and database servers in a private subnet. The database will be hosted on EC2 servers, and you are planning to use two AZs for this. To accomplish this, what is the minimum number of subnets you need?* * Six: one for the web server, one for the application server, and one for the database server in each AZ. * Three: one for the web server, one for the application server, and one for the database server. * Two: One for the web server, one for the application server, and one for the database server in each AZ. * Four: One for the web server and another for the application and database server in each AZ.

*Four: One for the web server and another for the application and database server in each AZ.* The minimum number of subnets you need is four since you will put the web server in a public subnet in each AZ and put the application and database server in a private subnet in each AZ. There is no need to create a separate subnet for the database and application server since that is going to have manageability overhead. ------------------------- If you create three subnets, then there is no redundancy. If you create two subnets, then you have to put all the servers (web, application, and database servers) in the same subnet, which you can't do since the web server needs to go to a public subnet and the application and database server needs to go to a private subnet. Technically, you can create six subnets, but the question is asking for the minimum number of subnets.

*A customer has a single 3-TB volume on-premises that is used to hold a large repository of images and print layout files. This repository is growing at 500GB a year and must be presented as a single logical volume. The customer is becoming increasingly constrained with their local storage capacity and wants an offsite partial backup of this data while maintaining low-latency access to their frequently accessed data. Which AWS Storage Gateway configuration meets the customer requirement?* * Gateway-Cached Volumes with snapshots scheduled to Amazon S3. * Gateway-Stored Volumes with snapshots scheduled to Amazon S3. * Gateway-Virtual Tape Library with snapshots to Amazon S3. * Gateway-Virtual Tape Library with snapshots to Amazon Glacier.

*Gateway-Cached Volumes with snapshots scheduled to Amazon S3.* Gateway-cached volumes let you use Amazon Simple Storage Service (Amazon S3) as your primary data storage while retaining frequently accessed data locally in your storage gateway. Gateway-cached volumes minimize the need to scale your on-premises storage infrastructure, while still providing your applications with low-latency access to their frequently accessed data. You can create storage volumes up to 32 TiB in size and attach to them as iSCSI devices from your on-premises application servers. Your gateway stores data that you write to these volumes in Amazon S3 and retains recently read data in your on-premises storage gateway's cache and upload buffer storage. Option A is correct, as your primary data is written to S3 while your frequently accessed data is retained locally in a cache for low-latency access. *Note:* The two requirements of the question are low-latency access to frequently accessed data and an offsite backup of the data. ----------------------------------- Option B is incorrect, because it stores the primary data locally (and we have a storage constraint, so this is not a viable solution), with the entire dataset available for low-latency access while asynchronously backed up to AWS. Options C & D are incorrect, as they cannot provide low-latency access to frequently accessed data.

*You need to retain all the data for seven years for compliance purposes. You have more than 300TB of data. You are storing everything in the tape drive for compliance, and over the years you have found out it is costing you a lot of money to maintain the tape infrastructure. Moreover, the restore from the tape takes more than 24 hours. You want to reduce it to 12 hours. You have heard that if you move your tape infrastructure to the cloud, it will be much cheaper, and you can meet the SLA you are looking for. What service would you choose?* * Storage Gateway with VTL * S3 * S3 Infrequent Access * Glacier

*Glacier* Glacier is going to provide the best cost benefit. Restoring from Glacier takes about five hours, so you can even meet your SLA. ------------------------- Storing the data in any other place is going to cost you more.

*What are the two different in-memory key-value engines Amazon ElastiCache currently supports?* (Choose two.) * Oracle in-memory database * SAP HANA * Memcached * Redis

*Memcached* *Redis* Amazon ElastiCache currently supports Memcached and Redis. ----------------------------------- Amazon ElastiCache does not support Oracle in-memory databases or SAP HANA.

*What are the different types of virtualization available on AWS?* (Choose two.) * Cloud virtual machine (CVM) * Physical virtual machine (PVM) * Hardware virtual machine (HVM) * Paravirtual machine (PV)

*Hardware virtual machine (HVM)* *Paravirtual machine (PV)* The two different types of virtualization are hardware virtual machine (HVM) and paravirtual (PV). HVM virtualization provides the ability to run an operating system directly on top of a virtual machine without any modification, as if it were running on bare-metal hardware. Paravirtual guests can run on host hardware that does not have explicit support for virtualization, but they cannot take advantage of special hardware extensions such as enhanced networking or GPU processing. ------------------------- CVM and PVM are not supported in AWS.

*An application consists of a couple of EC2 Instances. One EC2 Instance hosts a web application and the other Instance hosts the database server. Which of the following changes would be made to ensure high availability of the database layer?* * Enable Read Replicas for the database. * Enable Multi-AZ for the database. * Have another EC2 Instance in the same Availability Zone with replication configured. * Have another EC2 Instance in another Availability Zone with replication configured.

*Have another EC2 Instance in another Availability Zone with replication configured.* Since this is a self-managed database and not an AWS RDS instance, options A and B are incorrect. The database server here is hosted on an EC2 instance (self-managed), and when you host a database server on an EC2 instance there are no direct options to enable read replicas or Multi-AZ. To ensure high availability, have another EC2 Instance in a different Availability Zone, so even if one goes down, the other one will still be available.

*You are running a Cassandra database that requires access to tens of thousands of low-latency IOPS. Which of the following EC2 instance families would best suit your needs?* * High I/O instances * Dense Storage Instances * Memory Optimized Instances * Cluster GPU Instances

*High I/O instances* High I/O instances use SSD-based local instance storage to deliver very high, low latency, I/O capacity to applications, and are optimized for applications that require tens of thousands of IOPS.

*Third-party sign-in (Federation) has been implemented in your web application to allow users who need access to AWS resources. Users have been successfully logging in using Google, Facebook, and other third-party credentials. Suddenly, their access to some AWS resources has been restricted. What is the likely cause of the restricted use of AWS resources?* * IAM policies for resources were changed, thereby restricting access to AWS resources. * Federation protocols are used to authorize services and need to be updated. * AWS changed the services allowed to be accessed via federated login. * The identity providers no longer allow access to AWS services.

*IAM policies for resources were changed, thereby restricting access to AWS resources.* When IAM policies are changed, they can impact the user experience and the services users can connect to. ----------------------------------- Option B is incorrect, Federation is used to authenticate, not to authorize services. Option C is incorrect, Federation allows for authenticating users, but does not authorize services. Option D is incorrect, the identity providers don't have the capability to authorize services; they authenticate users.

*A Solutions Architect is designing a shared service for hosting containers from several customers on Amazon ECS. These containers will use several AWS services. A container from one customer must not be able to access data from another customer. Which solution should the architect use to meet the above requirements?* * IAM roles for tasks. * IAM roles for EC2 Instances. * IAM Instance profile for EC2 Instances. * Security Group rules.

*IAM roles for tasks* With IAM roles for Amazon ECS tasks, you can specify an IAM role that can be used by the containers in a task. Applications must sign their AWS API requests with AWS credentials, and this feature provides a strategy for managing credentials for your applications to use, similar to the way that Amazon EC2 instance profiles provide credentials to EC2 instances.
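A rough boto3 sketch of registering a task definition with its own task role, so each customer's containers receive only that role's permissions (the family name, role ARN, and image URI are placeholders):

```python
import boto3

ecs = boto3.client("ecs")

# Each customer's task definition gets its own task role, so containers only
# receive the permissions of that role, not those of the underlying host.
ecs.register_task_definition(
    family="customer-a-app",                                         # placeholder family
    taskRoleArn="arn:aws:iam::123456789012:role/CustomerATaskRole",  # placeholder role
    containerDefinitions=[{
        "name": "app",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/customer-a:latest",
        "memory": 512,
    }],
)
```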

*Which of the following databases is not supported using Amazon RDS?* * IBM DB2 * Microsoft SQL Server * PostgreSQL * Oracle

*IBM DB2* IBM DB2 is not supported in Amazon RDS. ------------------------- Microsoft SQL Server, PostgreSQL, and Oracle databases are supported in Amazon RDS.

*You are speaking with a former colleague who asks you about cloud migrations. The place she is working for runs a large fleet of on-premises Microsoft servers and they are concerned about licensing costs. Which of the following statements is invalid?* * License Mobility allows customers to move eligible Microsoft software to third-party cloud providers such as AWS for use on EC2 instances with default tenancy. * If I bring my own licenses into EC2 Dedicated Hosts or EC2 Dedicated Instances, then - subject to Microsoft's terms - Software Assurance is required. * EC2 Bare Metal instances give the customer full control of the configuration of instances just as they have on-premises: The customer has the ability to install a hypervisor directly on the hardware and therefore define and configure their own instance configurations of RAM, disk and vCPU, which can minimize additional licensing costs. * AWS License Manager includes features to help your organization manage licenses across AWS and on-premises. With AWS License Manager, you can define licensing rules, track license usage, and enforce controls on license use to reduce the risk of license overages. You can also set usage limits to control licensing costs. There is no additional charge for AWS License Manager.

*If I bring my own licenses into EC2 Dedicated Hosts or EC2 Dedicated Instances, then - subject to Microsoft's terms - Software Assurance is required.* If you are bringing your own licenses into EC2 Dedicated Hosts or EC2 Dedicated Instances, then, subject to Microsoft's terms, Software Assurance is not required.

*Individual instances are provisioned ________.* * In Regions * Globally * In Availability Zones

*In Availability Zones*

*You are working as an AWS Solutions Architect for a large banking organization. The requirement is that under normal business hours, there would always be 24 web servers up and running in a region (for example, US West (Oregon)). It will be a three-tiered architecture connecting to the databases. The solution offered should be highly available, secure, cost-effective, and should be able to respond to the heavy requests during peak hours and tolerate up to one AZ failure.* * In a given region, use ELB behind two different AZ's, with minimum or desired 24 web servers hosted in a public subnet. And Multi-AZ database architecture in a private subnet. * In a given region, use ELB behind three different AZ's, each AZ having an ASG, with a minimum or desired 12 web servers hosted in a public subnet. And Multi-AZ database architecture in a private subnet. * In a given region, use ELB behind two different AZ's, each AZ having an ASG, with a minimum or desired 12 web servers hosted in a public subnet. And Multi-AZ database architecture in a private subnet. * In a given region, use ELB behind three different AZ's, each AZ having an ASG, with minimum or desired 8 web servers hosted in a public subnet. And Multi-AZ database architecture in a different public subnet.

*In a given region, use ELB behind three different AZ's, each AZ having an ASG, with a minimum or desired 12 web servers hosted in a public subnet. And Multi-AZ database architecture in a private subnet.* As the solution needs to tolerate up to one AZ failure, there are always 36 web servers available to cater to the service requests. If one AZ fails, there will still be 24 servers running at all times, and if two AZs fail there will always be 12 servers running, and the ASG can be utilised to scale out the required number of servers. ----------------------------------- Option A is incorrect; everything looks good, but the designed architecture is not cost-effective since 48 servers will be running at all times and it does not have an ASG to cater to additional load on the servers, although it is fault tolerant to one AZ failure. Besides, it's always a good practice to use multiple AZ's to make the application highly available. Option C is incorrect, as it will not be a suitable solution: when one AZ fails, the other AZ will have only 12 servers running. One might think the ASG is always there to take care of the load when the second AZ fails, but consider the scenario where the other AZ fails while traffic is at its peak; the application will not be further scalable and users might face slow responses. Option D is incorrect; remember the design principle of keeping the databases in a private subnet. As this solution places the databases in another public subnet, the data can be exposed over the internet and hence it's an insecure application.

*You have a data warehouse on AWS utilizing Amazon Redshift of 50 TB. Your data warehouse is located in us-west-1, however you are opening a new office in London where you will be employing some data scientists. You will need a copy of this Redshift cluster in eu-west-2 for performance and latency considerations. What is the easiest way to manage this migration?* * Create a new Redshift cluster in eu-west-2. Once provisioned, use AWS Data Pipeline to export the data from us-west-1 to eu-west-2. * Order an AWS Snowball. Export the Redshift data to Snowball and then ship the Snowball from us-west-1 to eu-west-2. Load the data into Redshift in London. * Export the data to S3 using Data Pipeline and configure Cross Region Replication to an S3 bucket based in London. Use AWS Lambda to import the data back to Redshift. * In the AWS console go into Redshift and choose Backup, and then choose Configure Cross-Region Snapshots. Select Copy Snapshot and then choose the eu-west-2 region. Once successfully copied, use the snapshot in the new region to create a new Redshift cluster from the snapshot.

*In the AWS console go into Redshift and choose Backup, and then choose Configure Cross-Region Snapshots. Select Copy Snapshot and then choose the eu-west-2 region. Once successfully copied, use the snapshot in the new region to create a new Redshift cluster from the snapshot.* Where AWS provides a service, it is wise to use it rather than trying to create a bespoke service. The AWS service will have been designed and tested to ensure robust and secure transfer, taking into account key management and validation.
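The same workflow can also be driven from the API. A minimal sketch, assuming hypothetical cluster and snapshot names:

```python
import boto3

# Enable automatic copying of snapshots from the source region to eu-west-2.
redshift_us = boto3.client("redshift", region_name="us-west-1")
redshift_us.enable_snapshot_copy(
    ClusterIdentifier="prod-dw",        # hypothetical source cluster
    DestinationRegion="eu-west-2",
    RetentionPeriod=7,                  # keep copied snapshots for 7 days
)

# Once a snapshot has been copied, restore it into a new cluster in London.
redshift_eu = boto3.client("redshift", region_name="eu-west-2")
redshift_eu.restore_from_cluster_snapshot(
    ClusterIdentifier="prod-dw-london",
    SnapshotIdentifier="copied-snapshot-id",  # hypothetical copied snapshot identifier
)
```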

*As an AWS solution architect, you are building a new image processing application with a queuing service. There is a fleet of m4.large EC2 instances which would poll SQS as images are uploaded by users. The image processing takes around 55 seconds for completion, and users are notified via emails on completion. During the trial period, you find duplicate messages being generated, due to which users are getting multiple mails for the same image. Which of the following is the best option to eliminate duplicate messages before going to production?* * Create a delay queue for 60 seconds. * Increase visibility timeout to 60 seconds. * Create a delay queue for greater than 60 seconds. * Decrease visibility timeout below 60 seconds.

*Increase visibility timeout to 60 seconds.* The default visibility timeout is 30 seconds. Since the application needs around 60 seconds to complete the processing, the visibility timeout should be increased to 60 seconds. This will hide the message from other consumers for 60 seconds, so they will not process the same file which is already being processed by the original consumer. ----------------------------------- Options A & C are incorrect, as delay queues let you postpone the delivery of new messages to a queue for a number of seconds. Creating a delay queue of 60 seconds or more will delay the delivery of new messages by that many seconds and will not eliminate duplicate messages. Option D is incorrect, as the visibility timeout should be set to the maximum time it takes to process and delete a message from the queue. If the visibility timeout is set below 60 seconds, the message will become visible to other consumers again while the original consumer is still working on it.
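As a minimal sketch (the queue URL is hypothetical), the queue's default visibility timeout can be raised with boto3:

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/111122223333/image-jobs"  # hypothetical queue

# Raise the visibility timeout above the ~55-second processing time, so a message
# being worked on stays hidden from the other polling instances.
sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={"VisibilityTimeout": "60"},
)
```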

*EBS Snapshots are backed up to S3 in what manner?* * Exponentially * Decreasingly * EBS snapshots are not stored on S3. * Incrementally

*Incrementally*

*Your application is I/O bound, and your application needs around 36,000 IOPS. The application you are running is critical for the business. How can you make sure the application always gets all the IOPS it requests and the database is highly available?* * Install the database in EC2 using an EBS-optimized instance, and choose an I/O-optimized instance class with an SSD-based hard drive * Install the database in RDS using SSD * Install the database in RDS in multi-AZ using Provisioned IOPS and select 36,000 IOPS * Install multiple copies of read replicas in RDS so all the workload gets distributed across multiple read replicas and you can cater to the I/O requirement

*Install the database in RDS in multi-AZ using Provisioned IOPS and select 36,000 IOPS* You can choose to install the database in EC2, but if you can get all the same benefits by installing the database in RDS, then why not? If you install the database in RDS on general-purpose SSD storage, you cannot guarantee that the 36,000 IOPS requirement is always met. A read replica only takes care of the read-only workload, and the requirement does not specify how the 36,000 IOPS are divided between reads and writes.
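A rough sketch of provisioning such an instance with boto3 (identifier, engine, class, and storage size are placeholders chosen for illustration):

```python
import boto3

rds = boto3.client("rds")

# Multi-AZ MySQL instance with io1 storage provisioned at 36,000 IOPS.
rds.create_db_instance(
    DBInstanceIdentifier="critical-db",      # hypothetical identifier
    Engine="mysql",
    DBInstanceClass="db.m5.4xlarge",
    AllocatedStorage=1000,                   # io1 needs enough storage for the IOPS:GiB ratio
    StorageType="io1",
    Iops=36000,
    MultiAZ=True,                            # synchronous standby in another AZ
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",         # placeholder only
)
```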

*You have a legacy application that needs a file system in the database server to write application files. Where should you install the database?* * You can achieve this using RDS because RDS has a file system in the database server * Install the database on an EC2 server to get full control * Install the database in RDS, mount an EFS from the RDS server, and give the EFS mount point to the application for writing the application files * Create the database using a multi-AZ architecture in RDS

*Install the database on an EC2 server to get full control* In this example, you need access to the operating system, and RDS does not give you access to the OS. You must install the database in an EC2 server to get complete control.

*Your organization is in the process of migrating to AWS. Your company has more than 10,000 employees, and it uses Microsoft Active Directory to authenticate. Creating an additional 10,000 users in AWS is going to be a painful activity for you, but all the users need to use the AWS services. What is the best way of providing them with access?* * Since all the employees have an account with Facebook, they can use Facebook to authenticate with AWS. * Tell each employee to create a separate account by using their own credit card; this way you don't have to create 10,000 users. * Write a script that can provision 10,000 users quickly. * Integrate AWS with Microsoft Active Directory.

*Integrate AWS with Microsoft Active Directory* Since AWS can be integrated with many identity providers, you should always see whether AWS can be integrated with your existing identity providers. ----------------------------------- You should never encourage employees to create a personal account for official purposes, since it can become a management nightmare. You can automate the provisioning of 10,000 users, but then you have to manage the 10,000 users in both places.

*An IAM policy takes which form?* * Python script * Written in C language * JSON code * XML code

*JSON code* It is written in JSON.
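A minimal illustrative policy document (the bucket name and policy name are hypothetical), created here via boto3 just to show the JSON shape:

```python
import json
import boto3

# A minimal read-only policy: one Version field plus a list of Statements.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-bucket/*",
        }
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="ExampleBucketReadOnly",
    PolicyDocument=json.dumps(policy_document),  # IAM expects the JSON as a string
)
```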

*What are the languages that AWS Lambda supports?* (Choose two.) * Perl * Ruby * Java * Python

*Java* *Python* Perl and Ruby are not supported by Lambda.

*You've implemented AWS Key Management Service to protect your data in your application and other AWS services. Your global headquarters is in Northern Virginia (US East (N. Virginia)) where you created your keys and have provided the appropriate permissions to designated users and specific roles within your organization. While the North American users are not having issues, German and Japanese users are unable to get KMS to function. What is the most likely cause?* * KMS is only offered in North America. * AWS CloudTrail has not been enabled to log events. * KMS master keys are region-specific and the applications are hitting the wrong API endpoints. * The master keys have been disabled.

*KMS master keys are region-specific and the applications are hitting the wrong API endpoints.* This is the most likely cause. The application must call the KMS endpoint in the region where the keys were created. ----------------------------------- Option A is incorrect, KMS is offered in several regions, but keys are not transferable out of the region they were created in. Option B is incorrect, CloudTrail is recommended for auditing but is not required. Option D is incorrect, the keys are working as expected where they were created; keys are region-specific.
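As a small sketch (the key alias is hypothetical), the client must be created for the region that holds the key; a client pointed at eu-central-1 or ap-northeast-1 would not find a key created in N. Virginia:

```python
import boto3

# Explicitly target the region where the CMK was created.
kms = boto3.client("kms", region_name="us-east-1")

ciphertext = kms.encrypt(
    KeyId="alias/app-data-key",        # hypothetical key alias that exists in us-east-1
    Plaintext=b"sensitive payload",
)["CiphertextBlob"]
```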

*Which of the following does Amazon DynamoDB support?* (Choose two.) * Graph database * Key-value database * Document database * Relational database

*Key-value database* *Document database* Amazon DynamoDB supports key-value and document structures. It is not a relational database. It does not support graph databases.

*What product should you use if you want to process a lot of streaming data?* * Kinesis Firehose * Kinesis Data Stream * Kinesis Data Analytics * API Gateway

*Kinesis Data Stream* Kinesis Data Firehose is mainly used to load streaming data into destinations rather than for custom processing of the stream, Kinesis Data Analytics is used for transforming data, and API Gateway is used for managing APIs.

*You are creating a data lake in AWS. In the data lake you are going to ingest the data in real time and would like to perform fraud processing. Since you are going to analyze fraud, you need a response within a minute. What service should you use to ingest the data in real time?* * S3 * Kinesis Data Firehose * Kinesis Data Analytics * Kinesis Data Streams

*Kinesis Data Streams* Kinesis Data Streams allows for real-time data processing. ------------------------- S3 is used for storing data. Kinesis Data Firehose is used for loading batch data. Kinesis Data Analytics is used for transforming data during ingestion.
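A minimal sketch of real-time ingestion with boto3 (the stream name and event fields are hypothetical); a downstream consumer would read and score these records within seconds:

```python
import json
import boto3

kinesis = boto3.client("kinesis")

# Each transaction event is pushed onto the stream as it happens.
event = {"user_id": "u-123", "amount": 250.0, "merchant": "store-42"}
kinesis.put_record(
    StreamName="transactions",                 # hypothetical stream name
    Data=json.dumps(event).encode("utf-8"),
    PartitionKey=event["user_id"],             # controls which shard receives the record
)
```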

*What are the two in-memory key-value engines that Amazon ElastiCache supports?* (Choose two.) * Memcached * Redis * MySQL * SQL Server

*Memcached* *Redis* MySQL and SQL Server are relational databases and not in-memory engines.

*You are creating a data lake in AWS, and one of the use cases for a data lake is a batch job. Which AWS service should you be using to ingest the data for batch jobs?* * Kinesis Streams * Kinesis Analytics * Kinesis Firehose * AWS Lambda

*Kinesis Firehose* Since the use case is a batch job, Kinesis Firehose should be used for data ingestion. ----------------------------------- Kinesis Streams is used to ingest real-time data and streams of data, whereas Kinesis Analytics is used to transform the data at the time of ingestion. AWS Lambda is used to run your code and can't be used to ingest data. However, you can write an AWS Lambda function to trigger the next step once the data ingestion is complete.

*Which product is not a good fit if you want to run a job for ten hours?* * AWS Batch * EC2 * Elastic Beanstalk * Lambda

*Lambda* Lambda is not a good fit because the maximum execution time for a Lambda function is measured in minutes (the limit is currently 15 minutes), far short of ten hours. Using Batch you can run your code for as long as you want. Similarly, you can run your code for as long as you want on EC2 servers or by using Elastic Beanstalk.

*You want to subscribe to a topic. What protocol endpoints are available in SNS for you?* (Choose three.) * Lambda * E-mail * HTTP * Python

*Lambda* *E-mail* *HTTP* SNS subscription endpoints include Lambda, e-mail, and HTTP/HTTPS (as well as SQS, SMS, and mobile push). ----------------------------------- Python is not an endpoint, it is a programming language.

*You are running a fleet of EC2 instances for a web server, and you have integrated them with Auto Scaling. Whenever a new server is added to the fleet as part of Auto Scaling, your security team wants it to have the latest OS security fixes. What is the best way of achieving this objective?* * Once Auto Scaling launches a new EC2 instance, log into it and apply the security updates. * Run a cron job on a weekly basis to schedule the security updates. * Launch the instance with a bootstrapping script that is going to install the latest update. * No action is needed. Since Auto Scaling is going to launch the new instance, it will already have all the security fixes pre-installed in it.

*Launch the instance with a bootstrapping script that is going to install the latest updates.* Whenever Auto Scaling creates a new instance, it picks all the configuration details from the Auto Scaling group, and therefore you don't have to do anything manually. ------------------------- If you log in to the new EC2 instance manually and apply the security updates manually, this model may not scale well: what if your business wants to launch thousands of servers at the same time? Is it possible to log in to all those servers manually and apply the security fixes? And what if the instances are launched during the night by Auto Scaling; who logs in and applies the updates then? Similarly, if you run a cron job, you will be scheduling the security fix for a particular time. What if the instances are launched at a different time? A bootstrapping script with an update action will make sure the instance has all the security fixes before it is released for use. Even when Auto Scaling launches a new instance, it is not guaranteed that the instance will have all the security fixes in it; the instance only has the security fixes from when the AMI was last updated.
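A rough sketch of bootstrapping with user data (the AMI ID and instance type are placeholders); in an Auto Scaling setup the same user data would normally live in the launch template or launch configuration so every new instance runs it at boot:

```python
import boto3

ec2 = boto3.client("ec2")

# Bootstrapping script: applies the latest security patches before the instance serves traffic.
user_data = """#!/bin/bash
yum update -y --security
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,
)
```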

*On which layer of the Open Systems Interconnection model does the application load balancer perform?* * Layer 4 * Layer 7 * Layer 3 * Layer 5

*Layer 7* An application load balancer functions at the application layer, the seventh layer of the Open Systems Interconnection (OSI) model. ------------------------- A network load balancer functions at the fourth layer of the OSI model, whereas a classic load balancer operates at both layer 4 and layer 7 of the OSI model.

*You are deploying an application in multiple EC2 instances in different AZs and will be using ELB and Auto Scaling to scale up and scale down as per the demand. You are planning to store the session information in DynamoDB. Since DynamoDB has a public endpoint and you don't want to give Internet access to the application server, what is the most secure way the application server can talk to DynamoDB?* * Create a NAT instance in a public subnet and reach DynamoDB via the NAT instance. * Create a NAT gateway and reach DynamoDB via the NAT gateway. * Create a NAT instance in each AZ in the public subnet and reach DynamoDB via the NAT instance. * Leverage the VPC endpoint for DynamoDB.

*Leverage the VPC endpoint for DynamoDB* Amazon DynamoDB also offers VPC endpoints using which you can secure the access to DynamoDB. The Amazon VPC endpoint for DynamoDB enables Amazon EC2 instances in your VPC to use their private IP addresses to access DynamoDB with no exposure to the public Internet. ------------------------- NAT instances and NAT gateways can't be leveraged for the communication between the EC2 server and DynamoDB.
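A minimal sketch of creating such a gateway endpoint with boto3 (the VPC and route table IDs are hypothetical); it adds a route so instances in the associated route tables reach DynamoDB privately, without a NAT or internet gateway:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway endpoint for DynamoDB, associated with the private subnets' route table.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0abc1234567890def",                     # hypothetical VPC
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    RouteTableIds=["rtb-0abc1234567890def"],           # hypothetical private route table
)
```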

*Which of the following AWS Services were introduced at re:Invent 2016?* (Choose 2) * Lex * Dax * Molly * Polly

*Lex* *Polly* Amazon Lex is a service for building conversational interfaces using voice and text. Polly is a service that turns text into lifelike speech. AWS exams will test you on your ability to identify real vs. imaginary services. This question can be answered based on your knowledge of common services and requires no knowledge of re:Invent announcements.

*You need to automatically migrate objects from one S3 storage class to another based on the age of the data. Which S3 service can you use to achieve this?* * Glacier * Infrequent Access * Reduced Redundancy * Lifecycle Management

*Lifecycle Management* S3 Lifecycle management provides the ability to define the lifecycle of your objects with a predefined policy and reduce your cost of storage. You can set a lifecycle transition policy to automatically migrate Amazon S3 objects to Standard - Infrequent Access (Standard - IA) and/or Amazon Glacier based on the age of the data.
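A minimal sketch of such a lifecycle rule with boto3 (the bucket name and day thresholds are illustrative assumptions):

```python
import boto3

s3 = boto3.client("s3")

# Transition objects to Standard-IA after 30 days and to Glacier after 90 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-bucket",   # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tiering-by-age",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to all objects
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
            }
        ]
    },
)
```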

*Your company has a set of EBS volumes and a set of adjoining EBS snapshots. They want to minimize the costs of the underlying EBS snapshots. Which of the following approaches provides the lowest cost for Amazon Elastic Block Store snapshots while giving you the ability to fully restore data?* * Maintain two snapshots: the original snapshots and the latest incremental snapshot. * Maintain a volume snapshot; subsequent snapshots will overwrite one another * Maintain a single snapshot: the latest snapshot is both incremental and complete * Maintain the most current snapshot, archive the original incremental to Amazon Glacier

*Maintain a single snapshot: the latest snapshot is both incremental and complete* You can back up the data on your Amazon EBS volumes to Amazon S3 by taking point-in-time snapshots. Snapshots are incremental backups, which means that only the blocks on the device that have changed after your most recent snapshot are saved. This minimizes the time required to create the snapshot and saves on storage costs by not duplicating data. When you delete a snapshot, only the data unique to that snapshot is removed. Each snapshot contains all of the information needed to restore your data (from the moment when the snapshot was taken) to a new EBS volume.

*You create a standard SQS queue and test it by creating a simple application that polls the queue for messages. After a message is retrieved, the application should delete it. You create three test messages in your SQS queue and discover that messages 1 and 3 are quickly deleted, but message 2 has remained in the queue. Which of the following could account for your findings?* (Choose 2) * Message 2 is invalid * Standard SQS queues cannot guarantee that messages are retrieved in first-in, first-out (FIFO) order. * The permissions on message 2 were incorrectly written. * Your application uses short-polling.

*Message 2 is invalid* *Your application uses short-polling.* With short-polling, multiple polls of the queue may be necessary to find all messages on the various nodes in the queue. The queue not being FIFO may impact the order, but not the eventual successful processing. SQS has options to control access to create messages and retrieve them; however, these are not per-message controls. That just leaves the possibility that it is a malformed message.

*An application currently using a NAT Instance is required to use a NAT Gateway. Which of the following can be used to accomplish this?* * Use NAT Instances along with the NAT Gateway. * Host the NAT Instance in the private subnet. * Migrate from NAT Instance to a NAT Gateway and host the NAT Gateway in the public subnet. * Convert the NAT Instance to a NAT Gateway.

*Migrate from a NAT Instance to a NAT Gateway and host the NAT Gateway in the public subnet.* You can stop using the deployed NAT Instance and start using the NAT Gateway service instead, but you need to ensure that the NAT Gateway is deployed in the public subnet.

*While reviewing the Auto Scaling events for your application, you notice that your application is scaling up and down multiple times in the same hour. What design choice could you make to optimize costs while preserving elasticity?* (Choose 2) * Modify the Auto Scaling group termination policy to terminate the older instance first. * Modify the Auto Scaling group termination policy to terminate the newest instance first. * Modify the Auto Scaling group cool down timers. * Modify the Auto Scaling group to use Scheduled Scaling actions. * Modify the CloudWatch alarm period that triggers your Auto Scaling scale down policy.

*Modify the Auto Scaling group cool down timers.* *Modify the CloudWatch alarm period that triggers your Auto Scaling scale down policy.* Here, not enough time is being given for the scaling activity to take effect and for the entire infrastructure to stabilize after the scaling activity. This can be taken care of by increasing the Auto Scaling group cooldown timers. You will also have to define the right threshold for the CloudWatch alarm that triggers the scale down policy.
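A rough sketch of both adjustments with boto3 (group name, alarm values, and the scaling policy ARN are hypothetical):

```python
import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

# Lengthen the cooldown so the group stabilizes before another scaling action fires.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",   # hypothetical group name
    DefaultCooldown=300,
)

# Widen the scale-down alarm so a brief dip in CPU does not terminate instances.
cloudwatch.put_metric_alarm(
    AlarmName="web-asg-scale-down",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=3,
    Threshold=25.0,
    ComparisonOperator="LessThanThreshold",
    AlarmActions=["arn:aws:autoscaling:us-east-1:111122223333:scalingPolicy:example"],  # hypothetical policy ARN
)
```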

*Which database engine is not supported in Amazon RDS?* * Oracle * SQL Server * PostgreSQL * MongoDB

*MongoDB* MongoDB is not supported in Amazon RDS. However, you can host it using EC2 servers. ----------------------------------- Oracle, SQL Server, and PostgreSQL are supported database engines in RDS.

*You have an I/O-intensive database in your production environment that requires regular backups. You need to configure it in such a way so that when an automated backup is taken, it does not impact your production environment. Which RDS option should you choose to help you accomplish this?* * Read Replicas * Cross Region Failover * Use Redshift for your backup environment. * Multi-AZ

*Multi-AZ* With Multi-AZ RDS instances and automated backups, I/O activity is no longer suspended on your primary during your preferred backup window, since backups are taken from the standby.

*When coding a routine to upload to S3, you have the option of using either single part upload or multipart upload. Identify all the possible reasons below to use Multipart upload.* * Multipart upload delivers quick recovery from network issues. * Multipart upload delivers the ability to pause and resume object uploads. * Multipart upload delivers the ability to append data into an open data file. * Multipart upload delivers improved security in transit. * Multipart upload delivers improved throughput. * Multipart upload delivers the ability to begin an upload before you know the final object size.

*Multipart upload delivers quick recovery from network issues.* *Multipart upload delivers the ability to pause and resume object uploads.* *Multipart upload delivers improved throughput.* *Multipart upload delivers the ability to begin an upload before you know the final object size.* Multipart upload provides options for more robust file upload in addition to handling larger files than single part upload.
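A minimal sketch using boto3's transfer manager (bucket, key, and file name are hypothetical); files above the threshold are split into parts, uploaded in parallel, and failed parts are retried individually instead of restarting the whole upload:

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Use multipart upload for anything larger than 8 MiB, with 4 parallel part uploads.
config = TransferConfig(multipart_threshold=8 * 1024 * 1024, max_concurrency=4)
s3.upload_file(
    Filename="document.pdf",
    Bucket="example-bucket",     # hypothetical bucket
    Key="uploads/document.pdf",
    Config=config,
)
```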

*You run an ad-supported photo sharing website using S3 to serve photos to visitors of your site. At some point you find out that other sites have been linking to the photos on your site, causing loss to your business. What is an effective method to mitigate this?* * Remove public read access and use signed URLs with expiry dates. * Use CloudFront distributions for static content. * Block the IPs of the offending websites in Security Groups. * Store photos on an EBS volume of the web server.

*Remove public read access and use signed URLs with expiry dates.* ----------------------------------- Option B is incorrect, because CloudFront is only used for the distribution of content across edge or regional locations, and not for restricting access to content. Option C is not feasible, because blocking IPs is challenging due to their dynamic nature, and you will not know which sites are accessing your main site. Option D is incorrect since storing photos on an EBS Volume is neither a good practice nor an ideal architectural approach for an AWS Solutions Architect.

*A company hosts 5 web servers in AWS. They want to ensure that Route 53 can be used to route user traffic to random healthy web servers when they request the underlying web application. Which routing policy should be used to fulfill this requirement?* * Simple * Weighted * Multivalue Answer * Latency

*Multivalue Answer* If you want to route traffic approximately randomly to multiple resources such as web servers, you can create one multivalue answer record for each resource and, optionally, associate an Amazon Route 53 health check with each record. For example, suppose you manage an HTTP web service with a dozen web servers that each have their own IP address. No one web server could handle all of the traffic, but if you create a dozen multivalue answer records, Amazon Route 53 responds to each DNS query with up to eight healthy records. Amazon Route 53 gives different answers to different DNS resolvers. If a web server becomes unavailable after a resolver caches a response, client software can try another IP address in the response. Multivalue answer routing policy - Use when you want Route 53 to respond to DNS queries with up to eight healthy records selected at random. ----------------------------------- Simple routing policy - Use for a single resource that performs a given function for your domain, for example, a web server that serves content for the example.com website. Latency routing policy - Use when you have resources in multiple locations and you want to route traffic to the resource that provides the best latency. Weighted routing policy - Use to route traffic to multiple resources in proportions that you specify.
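A rough sketch of one such record with boto3 (hosted zone ID, health check ID, domain, and IP are hypothetical); you would create one record per web server, each with its own SetIdentifier and health check:

```python
import boto3

route53 = boto3.client("route53")

# One multivalue answer A record for a single web server, tied to its health check.
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",      # hypothetical hosted zone
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "A",
                    "TTL": 60,
                    "SetIdentifier": "web-server-1",
                    "MultiValueAnswer": True,
                    "HealthCheckId": "11111111-2222-3333-4444-555555555555",  # hypothetical
                    "ResourceRecords": [{"Value": "203.0.113.10"}],
                },
            }
        ]
    },
)
```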

*Your company hosts 10 web servers all serving the same web content in AWS. They want Route 53 to serve traffic to random web servers. Which routing policy will meet this requirement, and provide the best resiliency?* * Weighted Routing * Simple Routing * Multivalue Routing * Latency Routing

*Multivalue Routing* Multivalue answer routing lets you configure Amazon Route 53 to return multiple values, such as IP addresses for your web servers, in response to DNS queries. Route 53 responds to DNS queries with up to eight healthy records and gives different answers to different DNS resolvers. The choice of which to use is left to the requesting service, effectively creating a form of randomization.

*Your company is planning on setting up a VPC with private and public subnets and then hosting EC2 Instances in the subnet. It has to be ensured that instances in the private subnet can download updates from the internet. Which of the following needs to be part of the architecture?* * WAF * Direct Connect * NAT Gateway * VPN

*NAT Gateway* You can use a network address translation (NAT) gateway to enable instances in a private subnet to connect to the internet or other AWS services, but prevent the internet from initiating a connection with those instances. ----------------------------------- Option A is invalid since this is a web application firewall. Options B and D are invalid since these are used to connect on-premises infrastructure to AWS VPC's.

*Can you attach an EBS volume to more than one EC2 instance at the same time?* * Yes * Depends on which region. * No. * If that EC2 volume is part of an AMI

*No.*

*What are the two languages AWS Lambda supports?* * C++ * Node.js * Ruby * Python

*Node.js* *Python* Lambda supports Node.js and Python. In addition, it supports Java and C#. ------------------------- Lambda does not support C++ and Ruby.

*Which load balancer is not capable of doing the health check?* * Application load balancer * Network load balancer * Classic load balancer * None of the above

*None of the above* All the load balancers are capable of doing a health check.

*You are using an EC2 instance to process a message that is retrieved from the SQS queue. While processing the message, the EC2 instance dies. What is going to happen to the message?* * The message keeps on waiting until the EC2 instance comes back online. Once the EC2 instance is back online, the processing restarts. * The message is deleted from the SQS queue. * Once the message visibility timeout expires, the message becomes available for processing by another EC2 instance. * SQS knows that the EC2 server has terminated. It re-creates another message automatically and resends the request.

*Once the message visibility timeout expires, the message becomes available for processing by another EC2 instance.* Using message visibility, you can define when the message is available for reprocessing. ------------------------- Amazon SQS doesn't automatically delete the message. Because Amazon SQS is a distributed system, there is no guarantee that the consumer actually receives the message. Thus, the consumer must delete the message from the queue after receiving and processing it. Amazon SQS sets a visibility timeout, which is a period of time during which Amazon SQS prevents other consumers from receiving and processing the message. If the consumer fails before deleting the message and your system doesn't call the DeleteMessage action for that message before the visibility timeout expires, the message becomes visible to other consumers, and the message is received again.
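A minimal sketch of the consumer pattern (the queue URL is hypothetical); the message is deleted only after processing succeeds, which is exactly why it reappears if the instance dies mid-way:

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/111122223333/work-queue"  # hypothetical queue

response = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=20)
for message in response.get("Messages", []):
    # ... process the message body here ...
    # If the instance dies before the delete below, the message becomes visible
    # again after the visibility timeout and another consumer can pick it up.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```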

*Which RDS engine does not support read replicas?* * MySQL * Aurora MySQL * PostgreSQL * Oracle

*Oracle* Only RDS Oracle does not support read replicas; the rest of the engines do support it.

*You have been designing a CloudFormation template that creates one elastic load balancer fronting two EC2 instances. Which section of the template should you edit so that the DNS of the load balancer is returned upon creation of the stack?* * Resources * Parameters * Outputs * Mappings

*Outputs* The Outputs section is used to return values from the stack, such as the DNS name of the load balancer, once the stack has been created. ----------------------------------- Option A is incorrect because this is used to define the main resources in the template. Option B is incorrect because this is used to define parameters which can be taken in during template deployment. Option D is incorrect because this is used to map key value pairs in a template.
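A minimal sketch of what such an Outputs entry could look like, written here as a Python dict for illustration (MyLoadBalancer is a hypothetical logical ID from the Resources section):

```python
import json

# Fragment of a CloudFormation template: an Outputs entry that returns
# the load balancer's DNS name when the stack is created.
outputs_section = {
    "Outputs": {
        "LoadBalancerDNS": {
            "Description": "Public DNS name of the load balancer",
            "Value": {"Fn::GetAtt": ["MyLoadBalancer", "DNSName"]},  # hypothetical logical ID
        }
    }
}

print(json.dumps(outputs_section, indent=2))
```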

*You've been tasked with migrating an on-premise application architecture to AWS. During the Design process, you give consideration to current on-premise security and identify the security attributes you are responsible for on AWS. Which of the following does AWS provide for you as part of the shared responsibility model?* (Choose 2) * User access to the AWS environment * Physical network infrastructure * Instance security * Virtualization Infrastructure

*Physical network infrastructure* *Virtualization Infrastructure* Understanding the AWS Shared Responsibility Model will help you answer quite a few exam questions by recognizing false answers quickly.

*A company is planning on setting up a web-based application. They need to ensure that users across the world have the ability to view the pages from the web site with the least amount of latency. How can you accomplish this?* * Use Route 53 with latency-based routing * Place a CloudFront distribution in front of the web application * Place an Elastic Load Balancer in front of the web application * Place ElastiCache in front of the web application

*Place a CloudFront distribution in front of the web application.* Amazon CloudFront is a global content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to your viewers with low latency and high transfer speeds. CloudFront is integrated with AWS - including physical locations that are directly connected to the AWS global infrastructure, as well as software that works seamlessly with services including AWS Shield for DDoS mitigation, Amazon S3, Elastic Load Balancing or Amazon EC2 as origins for your applications, and Lambda@Edge to run custom code close to your viewers. ----------------------------------- Option A is incorrect since this is used for multiple sites and latency-based routing between the sites. Option C is incorrect since this is used for fault tolerance for the web application. Option D is incorrect since this is used for caching requests in front of the database layer.

*You have a web application hosted on an EC2 Instance in AWS which is being accessed by users across the globe. The Operations team has been receiving support requests about extreme slowness from users in some regions. What can be done to the architecture to improve the response time for these users?* * Add more EC2 Instances to support the load. * Change the Instance type to a higher instance type. * Add Route 53 health checks to improve the performance. * Place the EC2 Instance behind CloudFront.

*Place the EC2 Instance behind CloudFront.* Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations. When a user requests content that you're serving with CloudFront, the user is routed to the edge location that provides the lowest latency (time delay), so that content is delivered with the best performance. ----------------------------------- Option A is incorrect, the latency issue is experienced by people from certain parts of the world only. So, increasing the number of EC2 Instances or increasing the instance size does not make much of a difference. Option C is incorrect, Route 53 health checks are meant to see whether the instance status is healthy or not. Since the case deals with responding to requests from users, we do not have to worry about this. However, for improving latency issues, CloudFront is a good solution.

*You plan on hosting a web application on AWS. You create an EC2 Instance in a public subnet which needs to connect to an EC2 Instance that will host an Oracle database. Which of the following steps should be taken to ensure that a secure setup is in place?* (Choose 2) * Place the EC2 Instance with the Oracle database in the same public subnet as Web server for faster communication. * Place the EC2 Instance with the Oracle database in a separate private subnet. * Create a database Security group which allows incoming traffic only from the Web server's security group. * Ensure that the database security group allows incoming traffic from 0.0.0.0/0

*Place the EC2 Instance with the Oracle database in a separate private subnet.* *Create a database Security group which allows incoming traffic only from the Web server's security group.* The best and most secure option is to place the database in a private subnet and to ensure that access is allowed not from all sources but only from the web servers. ----------------------------------- Option A is incorrect because, as per the best practice guidelines, DB instances are placed in private subnets and allowed to communicate with web servers in the public subnet. Option D is incorrect, because allowing all incoming traffic from the Internet to the DB instance is a security risk.
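A small sketch of the security group rule with boto3 (the security group IDs are hypothetical); the database group references the web tier's group instead of any CIDR range:

```python
import boto3

ec2 = boto3.client("ec2")

# Allow the Oracle listener port only from the web tier's security group,
# never from the internet at large.
ec2.authorize_security_group_ingress(
    GroupId="sg-0db00000000000000",         # hypothetical database security group
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 1521,
            "ToPort": 1521,
            "UserIdGroupPairs": [{"GroupId": "sg-0web0000000000000"}],  # hypothetical web server SG
        }
    ],
)
```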

*A company has an application that delivers objects from S3 to users. Of late, some users spread across the globe have been complaining of slow response times. Which of the following additional steps would help in building a cost-effective solution and also help ensure that the users get an optimal response time for objects from S3?* * Use S3 Replication to replicate the objects to regions closest to the users. * Ensure S3 Transfer Acceleration is enabled to ensure all users get the desired response times. * Place an ELB in front of S3 to distribute the load across S3. * Place the S3 bucket behind a CloudFront distribution.

*Place the S3 bucket behind a CloudFront distribution* If your workload is mainly sending GET requests, in addition to the preceding guidelines, you should consider using Amazon CloudFront for performance optimization. By integrating Amazon CloudFront with Amazon S3, you can distribute content to your users with low latency and a high data transfer rate. You will also send fewer direct requests to Amazon S3, which will reduce your costs. For example, suppose that you have a few objects that are very popular. Amazon CloudFront fetches those objects from Amazon S3 and caches them. Amazon CloudFront can then serve future requests for the objects from its cache, reducing the number of GET requests it sends to Amazon S3. ----------------------------------- Options A and B are incorrect, S3 Cross-Region Replication and Transfer Acceleration incur additional costs. Option C is incorrect, ELB is used to distribute traffic to EC2 Instances.

*The listener within a load balancer needs two details in order to listen to incoming traffic. What are they?* (Choose two.) * Type of the operating system * Port number * Protocol * IP address

*Port number* *Protocol* Listeners define the protocol and port on which the load balancer listens for incoming connections.

*An application reads and writes objects to an S3 bucket. When the application is fully deployed, the read/write traffic is expected to be 5,000 requests per second for the addition of data and 7,000 requests per second to retrieve data. How should the architect maximize the Amazon S3 performance?* * Use as many S3 prefixes as you need in parallel to achieve the required throughput. * Use the STANDARD_IA storage class. * Prefix each object name with a hex hash key along with the current date. * Enable versioning on the S3 bucket.

*Prefix each object name with a hex hash key along with the current date.* *NOTE:* Based on the new S3 performance announcement, "S3 request rate performance increase removes any previous guidance to randomize object prefixes to achieve faster performance." However, the Amazon exam questions and answers have not yet been updated, so Option C is the correct answer as per the AWS exam. This recommendation for increasing performance in the case of a high request rate in S3 is given in the documentation.
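A minimal sketch of the older key-naming guidance (the helper name is made up for illustration); a short hex hash in front of the key spreads objects across S3 partitions:

```python
import datetime
import hashlib

def prefixed_key(object_name: str) -> str:
    """Prepend a short hex hash plus the date to spread keys across S3 partitions."""
    digest = hashlib.md5(object_name.encode("utf-8")).hexdigest()[:4]
    return f"{digest}/{datetime.date.today():%Y-%m-%d}/{object_name}"

print(prefixed_key("report.csv"))  # prints something like 'a1b2/2024-01-15/report.csv'
```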

*Your application has a rapid upscale in usage, and usage peaks at 90% during the hours of 9 AM and 10 AM every day. All other hours require only 10% of the peak resources. What is the best way to scale your application so you're only paying for max resources during peak hours?* * Proactive cyclic scaling * Reactive cyclic scaling * Reactive event-based scaling * Proactive event-based scaling

*Proactive cyclic scaling* Proactive cyclic scaling is scaling that occurs at a fixed interval (daily, weekly, monthly, quarterly). The proactive approach can be very effective when the upscale is large and rapid and you cannot wait for the delays of a sequence of auto-scaling steps.

*A company is required to use the AWS RDS service to host a MySQL database. This database is going to be used for production purposes and is expected to experience a high number of read/write activities. Which of the below underlying EBS Volume types would be ideal for this database?* * General Purpose SSD * Provisioned IOPS SSD * Throughput Optimized HDD * Cold HDD

*Provisioned IOPS SSD* Provisioned IOPS SSD is the highest-performance SSD volume type for mission-critical, low-latency, or high-throughput workloads: critical business applications that require sustained IOPS performance (more than 10,000 IOPS or 160 MiB/s of throughput per volume) and large database workloads.

*A company wants to host a selection of MongoDB instances. They are expecting a high load and want latency to be as low as possible. As an architect, you need to ensure that the right storage is used to host the MongoDB database. Which of the following would you incorporate as the underlying storage layer?* * Provisioned IOPS * General Purpose SSD * Throughput Optimized HDD * Cold HDD

*Provisioned IOPS* Provisioned IOPS is a high-performance SSD volume type designed for latency-sensitive transactional workloads.

*You have a website that allows users in third world countries to store their important documents safely and securely online. Internet connectivity in these countries is unreliable, so you implement multipart uploads to improve the success rate of uploading files. Although this approach works well, you notice that when an object is not uploaded successfully, incomplete parts of that object are still being stored in S3 and you are still being charged for those objects. What S3 feature can you implement to delete incomplete multipart uploads?* * S3 Lifecycle Policies. * Have S3 trigger Data Pipeline auto-delete. * S3 Reduced Redundancy Storage. * Have CloudWatch trigger a Lambda function that deletes the S3 data.

*S3 Lifecycle Policies* You can create a lifecycle policy that expires incomplete multipart uploads, allowing you to save on costs by limiting the time non-completed multipart uploads are stored.
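A minimal sketch of such a rule with boto3 (the bucket name and the 7-day threshold are illustrative assumptions):

```python
import boto3

s3 = boto3.client("s3")

# Abort (and stop paying for) multipart uploads that were never completed within 7 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-bucket",   # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "abort-stale-multipart-uploads",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
            }
        ]
    },
)
```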

*A Solution Architect is designing an online shopping application running in a VPC on EC2 Instances behind an ELB Application Load Balancer. The instances run in an Auto Scaling group across multiple Availability Zones. The application tier must read and write data to a customer managed database cluster. There should be no access to the database from the Internet, but the cluster must be able to obtain software patches from the Internet. Which VPC design meets these requirements?* * Public subnets for both the application tier and the database cluster. * Public subnets for the application tier, and private subnets for the database cluster. * Public subnets for the application tier and NAT Gateway, and private subnets for the database cluster. * Public subnets for the application tier, and private subnets for the database cluster and NAT Gateway.

*Public subnets for the application tier and NAT Gateway, and private subnets for the database cluster.* We always need to keep the NAT Gateway in a public subnet, because it needs to communicate with the internet. AWS says that "to create a NAT gateway, you must specify the public subnet in which the NAT gateway should reside. You must also specify an Elastic IP address to associate with the NAT gateway when you create it. After you've created a NAT gateway, you must update the route table associated with one or more of your private subnets to point Internet-bound traffic to the NAT gateway. This enables instances in your private subnets to communicate with the internet." *Note:* Here the requirement is that *There should be no access to the database from the Internet, but the cluster must be able to obtain software patches from the Internet.* 1) *There should be no access to the database from the Internet.* To achieve this, we have to launch the database inside the private subnet. 2) *But the cluster must be able to obtain software patches from the Internet.* For this, we have to create the NAT Gateway inside the *public subnet*, because the subnet with an internet gateway attached is the public subnet. Through the NAT Gateway, a database inside the private subnet can access the internet. *Option D is incorrect because it places the NAT Gateway in a private subnet.* So, Option C covers both of the points discussed and is the correct answer.
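A rough sketch of those two steps with boto3 (subnet, route table, and Elastic IP allocation IDs are hypothetical):

```python
import boto3

ec2 = boto3.client("ec2")

# The NAT Gateway lives in the public subnet and uses an Elastic IP allocation.
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0public0000000000",       # hypothetical public subnet
    AllocationId="eipalloc-0abc000000000000",  # hypothetical Elastic IP allocation
)["NatGateway"]

# Once the gateway is available, point the private subnet's default route at it
# so the database cluster can reach the internet for patches.
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat["NatGatewayId"]])
ec2.create_route(
    RouteTableId="rtb-0private000000000",      # hypothetical private route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat["NatGatewayId"],
)
```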

*Your company has been running its core application on a fleet of r4.xlarge EC2 instances for a year. You are confident that you understand the application's steady-state performance and now you have been asked to purchase Reserved Instances (RIs) for a further 2 years to cover the existing EC2 instances, with the option of moving to other Memory or Compute optimised instance families when they are introduced. You also need to have the option of moving Regions in the future. Which of the following options meets the above criteria whilst offering the greatest flexibility and maintaining the best value for money?* * Purchase a Standard Zonal RI for 3 years, then sell the unused RI on the Reserved Instance Marketplace. * Purchase a 1 year Convertible RI for each EC2 instance, for 2 consecutive years running. * Purchase a Convertible RI for 3 years, then sell the unused RI on the Reserved Instance Marketplace. * Purchase a Scheduled RI for 3 years, then sell the unused RI on the Reserved Instance Marketplace.

*Purchase a 1 year Convertible RI for each EC2 instance, for 2 consecutive years running.* When answering this question, it's important to exclude the options which are not relevant first. The question states that the RI should allow for moving between instance families, and this immediately rules out Standard and Scheduled RIs, as only Convertible RIs can do this. Of the two Convertible RI options, the one that suggests selling unused RI capacity on the Reserved Instance Marketplace can be ruled out, because the Marketplace is not available for Convertible RIs. That leaves only one answer as being correct.

*You require the ability to analyze a customer's clickstream data on a website so they can do a behavioral analysis. Your customer needs to know what sequence of pages and ads their customers clicked on. This data will be used in real time to modify the page layouts as customers click through the site to increase stickiness and advertising click-through. Which option meets the requirements for capturing and analyzing this data?* * Log clicks in weblogs by URL, store to Amazon S3, and then analyze with Elastic MapReduce. * Push web clicks by session to Amazon Kinesis and analyze behavior using Kinesis workers. * Write click events directly to Amazon Redshift and then analyze with SQL. * Publish web clicks by session to an Amazon SQS queue. Then send the events to AWS RDS for further processing.

*Push web clicks by session to Amazon Kinesis and analyze behavior using Kinesis workers.* Amazon Kinesis Data Streams enables you to build custom applications that process or analyze streaming data for specialized needs. Kinesis Data Streams can continuously capture and store terabytes of data per hour from hundreds of thousands of sources such as website clickstreams, financial transactions, social media feeds, IT logs, and location-tracking events.

*Your organization already had a VPC (10.10.0.0/16) setup with one public (10.10.1.0/24) and two private subnets - private subnet 1 (10.10.2.0/24) and private subnet 2 (10.10.3.0/24). The public subnet has the main route table and the two private subnets have two different route tables respectively. The AWS sysops team reports a problem stating the EC2 instance in private subnet 1 cannot communicate to the RDS MySQL database which is in private subnet 2. What are the possible reasons?* (Choose 2) * One of the private subnet route table's local routes has been changed to restrict traffic only within the subnet IP range. * RDS security group inbound rule is incorrectly configured with 10.10.1.0/24 instead of 10.10.2.0/24 * 10.10.3.0/24 subnet's NACL is modified to deny inbound on port 3306 from subnet 10.10.2.0/24 * RDS Security group outbound does not contain a rule for ALL traffic or port 3306 for 10.10.2.0/24 IP range.

*RDS security group inbound rule is incorrectly configured with 10.10.1.0/24 instead of 10.10.2.0/24* *10.10.3.0/24 subnet's NACL is modified to deny inbound on port 3306 from subnet 10.10.2.0/24* Option B is possible because the security group is configured with the public subnet IP range instead of the private subnet 1 IP range, and the EC2 instance is in private subnet 1. So EC2 will not be able to communicate with RDS in private subnet 2. ----------------------------------- Option A is incorrect; for any route table, the local route cannot be edited or deleted. Every route table contains a local route for communication within the VPC over the IPv4 CIDR block. If you've associated an IPv6 CIDR block with your VPC, your route tables contain a local route for the IPv6 CIDR block. You cannot modify or delete these routes. Option D is not correct because Security Groups are stateful - if you send a request from your instance, the response traffic for that request is allowed to flow in regardless of inbound security group rules. Responses to allowed inbound traffic are allowed to flow out, regardless of outbound rules.

*Which product is not serverless?* * Redshift * DynamoDB * S3 * AWS Lambda

*Redshift* DynamoDB, S3, and AWS Lambda all are serverless.

*Your company is running a photo sharing website. Currently all the photos are stored in S3. At some point the company finds out that other sites have been linking to the photos on your site, causing loss to your business. You need to implement a solution for the company to mitigate this issue. Which of the following would you look at implementing?* * Remove public read access and use signed URLs with expiry dates. * Use CloudFront distributions for static content. * Block the IPs of the offending websites in Security Groups. * Store photos on an EBS volume of the web server.

*Remove public read access and use signed URLs with expiry dates.* A pre-signed URL gives you access to the object identified in the URL, provided that the creator of the pre-signed URL has permissions to access that object. That is, if you receive a pre-signed URL to upload an object, you can upload the object only if the creator of the pre-signed URL has the necessary permissions to upload that object. ----------------------------------- Option B is incorrect since CloudFront is only used for distribution of content across edge or regional locations; it is not used for restricting access to content. Option C is incorrect since blocking IPs is challenging because they are dynamic in nature and you will not know which sites are accessing your main site. Option D is incorrect since storing photos on an EBS volume is neither a good practice nor a good architectural approach for an AWS Solutions Architect.
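A minimal sketch of generating such a link with boto3 (the bucket and key are hypothetical); with public read access removed, the site hands out short-lived links instead:

```python
import boto3

s3 = boto3.client("s3")

# Pre-signed GET URL that expires after one hour.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "photos-bucket", "Key": "albums/2024/photo.jpg"},  # hypothetical object
    ExpiresIn=3600,
)
print(url)
```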

*You need to know both the private IP address and public IP address of your EC2 instance. You should ________.* * Run IPCONFIG (Windows) or IFCONFIG (Linux). * Retrieve the instance User Data from http://169.254.169.254/latest/meta-data/. * Retrieve the instance Metadata from http://169.254.169.254/latest/meta-data/. * Use the following command: AWS EC2 DisplayIP.

*Retrieve the instance Metadata from http://169.254.169.254/latest/meta-data/* Instance Metadata and User Data can be retrieved from within the instance via a special URL. Similar information can be extracted by using the API via the CLI or an SDK.
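A minimal sketch of reading those values from inside the instance (IMDSv1-style requests shown; IMDSv2 additionally requires a session token):

```python
import urllib.request

BASE = "http://169.254.169.254/latest/meta-data/"

# These paths only resolve from within an EC2 instance.
private_ip = urllib.request.urlopen(BASE + "local-ipv4").read().decode()
public_ip = urllib.request.urlopen(BASE + "public-ipv4").read().decode()
print(private_ip, public_ip)
```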

*You work for a genetics company that has extremely large datasets stored in S3. You need to minimize storage costs without introducing unnecessary risk or delay. Mandated restore times depend on the age of the data. Data 30-59 days old must be available immediately without delay, and data more than 60 days old must be available within 12 hours. Which of the following options below should you consider?* (Choose 2) * S3 - IA * CloudFront * S3 - RRS * S3 - OneZone-IA * Glacier

*S3 - IA* *Glacier* You should use S3 - IA for the data that needs to be accessed immediately, and you should use Glacier for the data that must be recovered within 12 hours. ----------------------------------- RRS and OneZone-IA would not be suitable solutions for irreplaceable data or data that requires immediate access (each introduces reduced durability or availability), and CloudFront is a CDN service, not a storage solution.

*A company has a workflow that sends video files from their on-premise system to AWS for transcoding. They use EC2 worker instances that pull transcoding jobs from SQS. Why is SQS an appropriate service for this scenario?* * SQS guarantees the order of the messages. * SQS synchronously provides transcoding output. * SQS checks the health of the worker instances. * SQS helps to facilitate horizontal scaling of encoding tasks.

*SQS helps to facilitate horizontal scaling of encoding tasks.* Even though SQS guarantees the order of messages for FIFO queues, the main reason for using it is that it helps facilitate horizontal scaling of AWS resources and is used for decoupling systems. ----------------------------------- SQS can neither be used for transcoding output nor for checking the health of worker instances. The health of worker instances can be checked via ELB or CloudWatch.

*A company has a workflow that sends video files from their on-premises system to AWS for transcoding. They use EC2 worker instances that pull transcoding jobs from SQS. As an architect you need to design how the SQS service would be used in this architecture. Which of the following is the ideal way in which the SQS service would be used?* * SQS should be used to guarantee the order of the messages. * SQS should be used to synchronously manage the transcoding output. * SQS should be used to check the health of the worker instances. * SQS should be used to facilitate horizontal scaling for encoding tasks.

*SQS should be used to facilitate horizontal scaling of encoding tasks.* Amazon Simple Queue Service (Amazon SQS) offers a secure, durable, and available hosted queue that lets you integrate and decouple distributed software systems and components. ----------------------------------- Option A is incorrect since there is no mention in the question of the order of the messages being guaranteed. Options B and C are incorrect since these are not the responsibility of the SQS queue.

*Your company has just acquired a new company, and the number of users who are going to use the database will double. The database is running on Aurora. What things can you do to handle the additional users?* (Choose two.) * Scale up the database vertically by choosing a bigger box * Use a combination of Aurora and EC2 to host the database * Create a few read replicas to handle the additional read-only traffic * Create the Aurora instance across multiple regions with a multimaster mode

*Scale up the database vertically by choosing a bigger box* *Create a few read replicas to handle the additional read-only traffic* You can't host Aurora on an EC2 server, and multimaster mode is not supported in Aurora.

*A client is concerned that someone other than approved administrators is trying to gain access to the Linux Web app instances in their VPC. She asks what sort of network access logging can be added. Which of the following might you recommend?* (Choose 2) * Set up a traffic logging rule on the VPC firewall appliance and direct the log to CloudWatch or S3. * Use Event Log filters to trigger alerts that are forwarded to CloudWatch. * Set up a Flow log for the group of instances and forward them to CloudWatch. * Make use of an OS level logging tools such as iptables and log events to CloudWatch or S3.

*Set up a Flow Log for the group of instances and forward them to CloudWatch.* *Make use of OS level logging tools such as iptables and log events to CloudWatch or S3.* Security and auditing in AWS need to be considered during the design phase.
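A rough sketch of enabling flow logs for the instances' network interfaces with boto3 (ENI IDs, log group, and role ARN are hypothetical):

```python
import boto3

ec2 = boto3.client("ec2")

# Capture accepted and rejected traffic for the web app instances' network interfaces
# and deliver it to a CloudWatch Logs group.
ec2.create_flow_logs(
    ResourceType="NetworkInterface",
    ResourceIds=["eni-0abc000000000000a", "eni-0abc000000000000b"],  # hypothetical ENIs
    TrafficType="ALL",
    LogGroupName="web-app-flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::111122223333:role/FlowLogsRole",  # hypothetical role
)
```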

*A database hosted using the AWS RDS service is getting a lot of database queries and has now become a bottleneck for the associated application. What will ensure that the database is not a performance bottleneck?* * Setup a CloudFront distribution in front of the database. * Setup an ELB in front of the database. * Setup ElastiCache in front of the database. * Setup SNS in front of the database.

*Setup ElastiCache in front of the database.* ElastiCache is an in-memory solution which can be used in front of a database to cache the common queries issued against the database. This can reduce the overall load on the database. ----------------------------------- Option A is incorrect because this is normally used for content distribution. Option B is at best partially correct; an ELB on its own does not help, since you would need additional database instances to distribute the load across. Option D is incorrect because SNS is a simple notification service.
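A minimal sketch of the cache-aside pattern against an ElastiCache for Redis endpoint (the endpoint, key naming, TTL, and the db_lookup callable are all hypothetical, and the redis-py client is assumed to be installed):

```python
import json

import redis  # assumes the redis-py client and an ElastiCache for Redis endpoint

cache = redis.Redis(host="my-cache.abc123.0001.use1.cache.amazonaws.com", port=6379)  # hypothetical endpoint

def get_product(product_id, db_lookup):
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)              # cache hit: the database is never touched
    row = db_lookup(product_id)                # cache miss: run the query once
    cache.setex(key, 300, json.dumps(row))     # keep the result for 5 minutes
    return row
```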

*There is a requirement to host a database server. This server should not be able to connect to the Internet except while downloading required database patches. Which of the following solutions would best satisfy all the above requirements?* (Choose 2) * Setup the database in a private subnet with a security group which only allows outbound traffic. * Setup the database in a public subnet with a security group which only allows inbound traffic. * Setup the database in a local data center and use a private gateway to connect the application to the database. * Setup the database in a private subnet which connects to the Internet via a NAT Instance.

*Setup the database in a private subnet which connects to the Internet via a NAT Instance.* The configuration for this scenario includes a virtual private cloud (VPC) with a public subnet and a private subnet. We recommend this scenario if you want to run a public-facing web application, while maintaining back-end servers that aren't publicly accessible. A common example is a multi-tier website, with the web servers in a public subnet and the database servers in a private subnet. You can set up security and routing so that the web servers can communicate with the database servers.

*You have designed an application that uses AWS resources, such as S3, to operate and store users' documents. You currently use Cognito identity pools and User pools. To increase the usage and ease of signing up you decide adding social identity federation is the best path forward. When asked what the difference is between the Cognito identity pool and the federated identity providers (e.g. Google), how do you respond?* * They are the same and just called different things. * First you sign-in via Cognito then through a federated site, like Google. * Federated identity providers and identity pools are used to authorize services. * Sign-in via Cognito user pools and sign-in via federated identity providers are independent of one another.

*Sign-in via Cognito user pools and sign-in via federated identity providers are independent of one another.* Sign-in through a third party (federation) is available in Amazon Cognito user pools. This feature is independent of federation through Amazon Cognito identity pools (federated identities). ----------------------------------- Option A is incorrect, as these are separate, independent authentication methods. Option B is incorrect, only one log-in event is needed, not two. Option C is incorrect, identity providers authenticate users, not authorize services.

*You have a small company that is only leveraging cloud resources like AWS Workspaces and AWS Workmail. You want a fully managed solution to provide user management and to set policies. Which AWS Directory Service would you recommend?* * AWS Managed Microsoft AD for its full-blown AD features and capabilities. * AD Connector for use with on-premises applications. * AWS Cognito for its scalability and customization. * Simple AD for limited functionality and compatibility with desired applications.

*Simple AD for limited functionality and compatibility with desired applications.* Simple AD is a Microsoft Active Directory-compatible directory from AWS Directory Service. You can use Simple AD as a standalone directory in the cloud to support Windows workloads that need basic AD features, compatible AWS applications, or to support Linux workloads that need LDAP services. ----------------------------------- Option A is incorrect, this offers more functionality and features than you need given the desired applications. Option B is incorrect, you don't have on-premises applications, so AD Connector is not needed. Option C is incorrect, this offers more functionality and features than you need given the desired applications.

*You are running your MySQL database using the Amazon RDS service. You have encrypted the database and are managing the keys using AWS Key Management System (KMS). You need to create an additional copy of the database in a different region. You have taken a snapshot of the existing database, but when you try to copy the snapshot to the other region, you are not able to copy it. What could be the reason?* * You need to copy the keys to the different region first. * You don't have access to the S3 bucket. * Since the database is encrypted and the keys are tied to a region, you can't copy. * You need to reset the keys before copying.

*Since the database is encrypted and the keys are tied to a region, you can't copy.* You cannot copy the snapshots of an encrypted database to another AWS region. KMS is a regional service, so you currently cannot copy things encrypted with KMS to another region. ------------------------- You can't copy the keys to a different region. This is not an S3 permission issue. Resetting the keys won't help since you are trying to copy the snapshot across different regions.

*Your company wants to use an S3 bucket for web hosting but have several different domains perform operations on the S3 content. In the CORS configuration, you want the following origin sites: http://mysite.com, https://secure.mysite.com, and https://yoursite.com. The site, https://yoursite.com, is not being allowed access to the S3 bucket. What is the most likely cause?* * Site https://yoursite.com was not correctly added as an origin site; instead included as http://yoursite.com * HTTPS must contain a specific port on the request, e.g. https://yoursite.com:443 * There's a limit of two origin sites per S3 bucket allowed. * Adding CORS automatically removes the S3 ACL and bucket policies.

*Site https://yoursite.com was not correctly added as an origin site; instead included as http://yoursite.com* The exact syntax must be matched. In some cases, wildcards can be used to help in origin URLs. ----------------------------------- Option B is incorrect, a port is not required for an origin domain to be included, although it can be specified. Option C is incorrect, the limit is 100. Option D is incorrect, ACL and policies continue to apply when you enable CORS on the bucket. Verify that the origin header in your request matches at least one of the AllowedOrigin elements in the specified CORS rule. For example, if you set the CORS rule to allow http://www.example.com, then both the https://www.example.com and http://www.example.com:80 origins in your request don't match the allowed origin in your configuration.
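
For illustration, a CORS configuration that lists all three origins explicitly (so the exact scheme and host of https://yoursite.com is matched) might look like the following boto3 sketch; the bucket name is hypothetical.

```python
import boto3

s3 = boto3.client('s3')

cors_configuration = {
    'CORSRules': [{
        'AllowedOrigins': ['http://mysite.com',
                           'https://secure.mysite.com',
                           'https://yoursite.com'],   # exact scheme + host must match
        'AllowedMethods': ['GET', 'PUT', 'POST'],
        'AllowedHeaders': ['*'],
        'MaxAgeSeconds': 3000
    }]
}

# Apply the CORS rules to the bucket (hypothetical bucket name).
s3.put_bucket_cors(Bucket='my-web-content-bucket',
                   CORSConfiguration=cors_configuration)
```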

*How many copies of data does Amazon Aurora maintain in how many availability zones?* * Four copies of data across six AZs * Three copies of data across six AZs * Six copies of data across three AZs * Six copies of data across four AZs

*Six copies of data across three AZs* Amazon Aurora maintains six copies of data across three AZs ------------------------- It is neither four copies nor three copies; it is six copies of data in three AZs.

*You have a video transcoding application running on Amazon EC2. Each instance polls a queue to find out which video should be transcoded, and then runs a transcoding process. If this process is interrupted, the video will be transcoded by another instance based on the queuing system. You have a large backlog of videos which need to be transcoded and would like to reduce this backlog by adding more instances. You will need these instances only until the backlog is reduced. Which type of Amazon EC2 instances should you use to reduce the backlog in the most cost efficient way?* * Reserved instances * Spot instances * Dedicated instances * On-demand instances

*Spot Instances* Since the above scenario is similar to batch processing jobs, the best instance type to use is a Spot Instance. Spot Instances are normally used for batch processing jobs. Since these jobs don't last for an entire year, they can be bid upon and allocated and de-allocated as needed. Reserved Instances/Dedicated Instances cannot be used since this is not a 100% used application. There is no mention of continuous demand for work in the above scenario, hence there is no need to use On-Demand Instances. *Note:* 1) If you read the question once again, it has this point: 'These instances will only be needed until the backlog is reduced', and also 'If this process is *interrupted*, the video gets transcoded by another instance based on the queueing system'. So if your application or system is fault tolerant, Spot Instances can be used. 2) In the question, they mentioned 'reduce this backlog by adding more instances'. That means the application does not fully depend on the Spot Instances; these are used only for reducing the backlog load. 3) Here we have to select the most cost-effective solution. Based on the first point, we conclude that the system is fault tolerant (interruption is acceptable).

*If you want your request to go to the same instance to get the benefits of caching the content, what technology can help provide that objective?* * Sticky session * Using multiple AZs * Cross-zone load balancing * Using one ELB per instance

*Sticky session* Using multiple AZs, you can distribute your load across multiple AZs, but you can't direct the request to go to same instance. Cross-zone load balancing is used to bypass caching. Using one ELB per instance is going to complicate things.

*You are hosting your application in your own data center. The application is hosted on a server that is connected to a SAN, which provides the block storage. You want to back up this data to AWS, and at the same time you would also like to retain your frequently accessed data locally. Which option should you choose that can help you to back up the data in the most resilient way?* * Storage Gateway file gateway * Storage Gateway volume gateway in cached mode * Storage Gateway volume gateway in stored mode * Back up the file directly to S3

*Storage Gateway volume gateway in cached mode* Using the Storage Gateway volume gateway in cached mode, you can store your primary data in Amazon S3 and retain your frequently accessed data locally. ------------------------- Since you are using block storage via a SAN on-premises, a file gateway is not the right use case. Using the AWS Storage Gateway volume gateway in stored mode will store everything locally, which is going to increase the cost. You can back up the files directly to S3, but how are you going to access the frequently used data locally?

*You've been tasked with the implementation of an offsite backup/DR solution. You'll be responsible only for flat files and server backups. Which of the following would you include in your proposed solution?* * EC2 * Storage Gateway * Snowball * S3

*Storage Gateway* *Snowball* *S3* EC2 is a compute service and is not directly applicable to providing backups. All others could be part of a comprehensive backup/DR solution.

*A company has a sales team and each member of this team uploads their sales figures daily. A Solutions Architect needs a durable storage solution for these documents and also a way to preserve documents from accidental deletions. What among the following choices would deliver protection against unintended user actions?* * Store data in an EBS Volume and create snapshots once a week. * Store data in an S3 bucket and enable versioning. * Store data in two S3 buckets in different AWS regions. * Store data on EC2 Instance storage.

*Store data in an S3 bucket and enable versioning.* Amazon S3 has an option for versioning. Versioning is enabled at the bucket level and can be used to recover prior versions of an object.
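
A minimal sketch of enabling versioning with boto3 (the bucket name and prefix are hypothetical):

```python
import boto3

s3 = boto3.client('s3')

# Enable versioning on the bucket so prior versions of every object are kept.
s3.put_bucket_versioning(
    Bucket='sales-figures-bucket',                     # hypothetical bucket
    VersioningConfiguration={'Status': 'Enabled'}
)

# Listing object versions later lets you restore an accidentally deleted document.
versions = s3.list_object_versions(Bucket='sales-figures-bucket', Prefix='daily/')
```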

*A company needs to store images that are uploaded by users via a mobile application. There is also a need to ensure that a security measure is in place to avoid data loss. What step should be taken for protection against unintended user actions?* * Store data in an EBS volume and create snapshots once a week. * Store data in an S3 bucket and enable versioning. * Store data on Amazon EFS storage. * Store data on EC2 instance storage.

*Store data in an S3 bucket and enable versioning.* Versioning is enabled at the bucket level and can be used to recover prior versions of an object. ----------------------------------- Option A is invalid as it does not offer protection against accidental deletion of files. Option C is not ideal because multiple EC2 instances can access the file system. Option D is invalid because EC2 instance storage is ephemeral.

*An application consists of EC2 Instances placed in different Availability Zones. The EC2 Instances sit behind an application load balancer. The EC2 Instances are managed via an Auto Scaling Group. There is a NAT Instance which is used for the EC2 Instances to download updates from the Internet. Which of the following is a bottleneck in the architecture?* * The EC2 Instances * The ELB * The NAT Instance * The Auto Scaling Group

*The NAT Instance* Since there is only one NAT instance, this is a bottleneck for the architecture. For high availability, launch NAT instances in multiple Availability Zones and make it a part of an Auto Scaling Group.

*A company is planning on storing their files from their on-premises location onto the Simple Storage service. After a period of 3 months, they want to archive the files, since they would be rarely used. Which of the following would be the right way to service this requirement?* * Use an EC2 instance with EBS volumes. After a period of 3 months, keep on taking snapshots of the data. * Store the data on S3 and then use Lifecycle policies to transfer the data to Amazon Glacier. * Store the data on Amazon Glacier and then use Lifecycle policies to transfer the data to Amazon S3. * Use an EC2 instance with EBS volumes. After a period of 3 months, keep on taking copies of the volume using Cold HDD volume type.

*Store the data on S3 and then use Lifecycle policies to transfer the data to Amazon Glacier* To manage your objects so that they are stored cost-effectively throughout their lifecycle, configure their lifecycle. A lifecycle configuration is a set of rules that define actions that Amazon S3 applies to a group of objects. There are two types of actions: *Transition actions* - Define when objects transition to another storage class. *Expiration actions* - Define when objects expire. Amazon S3 deletes expired objects on your behalf. ----------------------------------- Options A and D are incorrect since using EBS volumes is not the right storage option for this sort of requirement. Option C is incorrect since the files should be initially stored in S3.
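
A lifecycle rule that transitions objects to Glacier after roughly 3 months (90 days) might look like the following boto3 sketch; the bucket name and rule ID are hypothetical.

```python
import boto3

s3 = boto3.client('s3')

s3.put_bucket_lifecycle_configuration(
    Bucket='archive-files-bucket',             # hypothetical bucket
    LifecycleConfiguration={
        'Rules': [{
            'ID': 'archive-after-3-months',
            'Filter': {'Prefix': ''},           # apply the rule to all objects
            'Status': 'Enabled',
            'Transitions': [{
                'Days': 90,                     # ~3 months after creation
                'StorageClass': 'GLACIER'
            }]
        }]
    }
)
```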

*Your application is uploading several thousands of files every day. The file sizes range from 500MB to 2GB. Each file is then processed to extract metadata, and the metadata processing takes a few seconds. There is no fixed uploading frequency. Sometimes a lot of files are uploaded in a particular hour, sometimes only a few files are uploaded in a particular hour, and sometimes there is no upload happening over a period of a few hours. What is the most cost-effective way to handle this issue?* * Use an SQS queue to store the file and then use an EC2 instance to extract the metadata. * Use EFS to store the file and then use multiple EC2 instances to extract the metadata. * Store the file in Amazon S3 and then use S3 event notification to invoke an AWS Lambda function for extracting the metadata. * Use Amazon Kinesis to store the file and then use AWS Lambda to extract the metadata.

*Store the file in Amazon S3 and then use S3 event notification to invoke an AWS Lambda function for extracting the metadata* Since you are looking for cost savings, the most cost-effective way would be to upload the files to S3 and then invoke an AWS Lambda function for metadata extraction. ------------------------- The other options will all result in a higher cost compared to option C. Technically you can use all these options to solve the problem, but they won't serve the key criterion of cost optimization that you are looking for.

*An application allows a manufacturing site to upload files. Each uploaded 3 GB file is processed to extract metadata, and this process takes a few seconds per file. The frequency at which the uploads happen is unpredictable. For instance, there may be no updates for hours, followed by several files being uploaded concurrently. What architecture addresses this workload in the most cost-efficient manner?* * Use a Kinesis Data Delivery Stream to store the file. Use Lambda for processing. * Use an SQS queue to store the file, to be accessed by a fleet of EC2 Instances. * Store the file in an EBS volume, which can be then accessed by another EC2 Instance for processing. * Store the file in an S3 bucket. Use Amazon S3 event notification to invoke a Lambda function for file processing.

*Store the file in an S3 bucket. Use Amazon S3 event notification to invoke a Lambda function for file processing.* You can create a Lambda function with the code to process the file. You can then use an Event Notification from the S3 bucket to invoke the Lambda function whenever a file is uploaded. ----------------------------------- Option A is incorrect. Kinesis is used to collect, process and analyze real-time data. The frequency of uploads is quite unpredictable. By default, SQS uses short polling. In this case it will lead to the cost factor going up, since messages arrive in an unpredictable manner and polling will often return empty responses. Hence option B is not a solution.
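
A Lambda function triggered by an S3 event notification receives the bucket and key of the uploaded object in the event payload. The sketch below (Python runtime, hypothetical extract_metadata helper) shows the general shape of such a handler; it is illustrative, not the question's required code.

```python
import boto3

s3 = boto3.client('s3')


def extract_metadata(bucket, key):
    # Placeholder for the real metadata-extraction logic.
    head = s3.head_object(Bucket=bucket, Key=key)
    return {'key': key, 'size_bytes': head['ContentLength']}


def lambda_handler(event, context):
    # One invocation may carry one or more S3 records.
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = record['s3']['object']['key']
        metadata = extract_metadata(bucket, key)
        print(metadata)   # in practice, persist to DynamoDB, RDS, etc.
    return {'status': 'done'}
```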

*A company wants to store their documents in AWS. Initially, these documents will be used frequently, and after a duration of 6 months, they will need to be archived. How would you architect this requirement?* * Store the files in Amazon EBS and create a Lifecycle Policy to remove the files after 6 months. * Store the files in Amazon S3 and create a Lifecycle Policy to archive the files after 6 months. * Store the files in Amazon Glacier and create a Lifecycle Policy to remove the files after 6 months. * Store the files in Amazon EFS and create a Lifecycle Policy to remove the files after 6 months.

*Store the files in Amazon S3 and create a Lifecycle Policy to archive the files after 6 months.* Lifecycle configuration enables you to specify the lifecycle management of objects in a bucket. The configuration is a set of one or more rules, where each rule defines an action for Amazon S3 to apply to a group of objects. *Transition actions* - In which you define when objects transition to another storage class. For example, you may choose to transition objects to the STANDARD_IA (Infrequent Access) storage class 30 days after creation, or archive objects to the GLACIER storage class one year after creation. *Expiration actions* - In which you specify when the objects expire. Amazon S3 deletes the expired objects on your behalf.

*You are designing a media-streaming application, and you need to store hundreds of thousands of videos. Each video will have multiple files associated with it for storing the different resolutions (480p, 720p, 1080p, 4K, and so on). The videos need to be stored in durable storage. Which of the following storage options would you choose?* * Store the main video in S3 and the different resolution files in Glacier. * Store the main video in EBS and the different resolution files in S3. * Store the main video in EFS and the different resolution files in S3. * Store the main video in S3 and the different resolution files in S3-IA.

*Store the main video in S3 and the different resolution files in S3-IA* S3 provides 99.999999999 percent (eleven 9s) of durability, so that is the best choice. ------------------------- If you store the files in EBS or EFS, the cost is going to be very high. You can't store these files in Glacier since it is an archival solution.

*A Solutions Architect is designing a highly scalable system to track records. These records must remain available for immediate download for up to three months and then be deleted. What is the most appropriate decision for this use case?* * Store the file in Amazon EBS and create a Lifecycle Policy to remove files after 3 months. * Store the file in Amazon S3 and create a Lifecycle Policy to remove files after 3 months. * Store the files in Amazon Glacier and create a Lifecycle Policy to remove files after 3 months. * Store the files in Amazon EFS and create a Lifecycle Policy to remove files after 3 months.

*Store the files in Amazon S3 and create a Lifecycle Policy to remove files after 3 months.* Lifecycle configuration enables you to specify the lifecycle management of objects in a bucket. The configuration is a set of one or more rules, where each rule defines an action for Amazon S3 to apply to a group of objects. These actions can be classified as follows: *Transition actions* - In which you define when objects transition to another storage class. For example, you may choose to transition objects to the STANDARD_IA (IA, for infrequent access) storage class 30 days after creation, or archive objects to the GLACIER storage class one year after creation. *Expiration actions* - In which you specify when the objects expire. Then Amazon S3 deletes the expired objects on your behalf. ----------------------------------- Option A is invalid since the records need to be stored in a highly scalable system. Option C is invalid since the records must be available for immediate download. Option D is invalid because it does not have the concept of a Lifecycle Policy.

*You want to create a copy of the existing Redshift cluster in a different region. What is the fastest way of doing this?* * Export all the data to S3, enable cross-region replication in S3, and load the data to a new Redshift cluster from S3 in a different region. * Take a snapshot of Redshift in a different region by enabling cross-region snapshots and create a cluster using that. * Use the Database Migration Service. * Encrypt the Redshift cluster.

*Take a snapshot of Redshift in a different region by enabling cross-region snapshots and create a cluster using that.* Redshift allows you to configure a cross-regional snapshot via which you can clone the existing Redshift cluster to a different region. ------------------------- If you export all the data to S3, move the data to a different region, and then again load the data to Redshift in a different region, the whole process will take a lot of time, and you are looking for the fastest way. You can't use the Database Migration Service for doing this. If you encrypt a Redshift cluster, it is going to provide encryption but has nothing to do with cloning to a different region.

*A company has a set of EC2 Instances that store critical data on EBS Volumes. There is a fear from IT Supervisors that if data on the EBS Volumes is lost, then it could result in a lot of effort to recover the data from other sources. Which of the following would help alleviate this concern in an economical way?* * Take regular EBS Snapshots. * Enable EBS Volume Encryption. * Create a script to copy data to an EC2 Instance Store. * Mirror data across 2 EBS Volumes.

*Take regular EBS Snapshots.* You can back up the data on your Amazon EBS Volumes to Amazon S3 by taking point-in-time snapshots. Snapshots are incremental backups, which means that only the blocks on the device that have changed after your most recent snapshot are saved. This minimizes the time required to create the snapshot and saves on storage costs by not duplicating data. When you delete a snapshot, only the data unique to that snapshot is removed. Each snapshot contains all the information needed to restore your data (from the moment when the snapshot was taken) to a new EBS volume. ----------------------------------- Option B is incorrect because it does not help the durability of EBS Volumes. Option C is incorrect since EC2 Instance stores are not durable. Option D is incorrect since mirroring data across EBS Volumes is inefficient when you already have an option for EBS Snapshots.
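
Taking a snapshot is a single API call; the following boto3 sketch (the volume ID is hypothetical) could also be scheduled, for example via Amazon Data Lifecycle Manager or a cron job.

```python
import boto3

ec2 = boto3.client('ec2')

# Create a point-in-time, incremental snapshot of an EBS volume.
snapshot = ec2.create_snapshot(
    VolumeId='vol-0123456789abcdef0',             # hypothetical volume ID
    Description='Nightly backup of critical data volume'
)
print(snapshot['SnapshotId'], snapshot['State'])
```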

*Your application is hosted on EC2 instances, and all the data is stored in an EBS volume. The EBS volumes must be durably backed up across multiple AZs. What is the most resilient way to back up the EBS volumes?* * Encrypt the EBS volume. * Take regular EBS snapshots. * Mirror data across two EBS volumes by using RAID. * Write an AWS Lambda function to copy all the data from EBS to S3 regularly.

*Take regular EBS snapshots* By using snapshots, you can back up the EBS volume, and you can create the snapshot in a different AZ. ------------------------- A is incorrect because encrypting the volume is different from backing up the volume. Even if you encrypt the volume, you still need to take a backup of it. C is incorrect because even if you mirror the data across two EBS volumes by using RAID, you will have high availability of the data but not a backup. Remember, the backup has to be across AZs. If you use RAID and provide high availability to your EBS volumes, that will still be under the same AZ since EBS volumes can't be mounted across AZs. D is incorrect because although you can take the backup of all the data to S3 from an EBS volume, that is not backing up the EBS volume. Backing up the volume means if your primary volume goes bad or goes down, you should be able to quickly mount the volume from a backup. If you have a snapshot of the EBS volume, you can quickly mount it and have all the data in it. If you back up the data to S3, you need to create a new volume and then copy all the data from S3 to the EBS volume.

*An application currently stores all its data on Amazon EBS Volumes. All EBS Volumes must be backed up durably across multiple Availability Zones. What is the MOST resilient and cost-effective way to back up volumes?* * Take regular EBS snapshots. * Enable EBS volume encryption. * Create a script to copy data to an EC2 Instance store. * Mirror data across 2 EBS volumes.

*Take regular EBS snapshots.* You can back up the data on the Amazon EBS volumes to Amazon S3 by taking point-in-time snapshots. Snapshots are incremental backups, which means that only the blocks on the device that have changed after your most recent snapshot are saved. This minimizes the time required to create the snapshot and saves on storage costs by not duplicating data. When you delete a snapshot, only the data unique to that snapshot is removed. Each snapshot contains all the information needed to restore your data (from the moment when the snapshot was taken) to a new EBS volume. ----------------------------------- Option B is incorrect, because it does not help the durability of EBS Volumes. Option C is incorrect, since EC2 Instance stores are not durable. Option D is incorrect, since mirroring data across EBS volumes is inefficient in comparison with the existing option of EBS snapshots.

*You are running your critical application on EC2 servers and using EBS volumes to store the data. The data is important to you, and you need to make sure you can recover the data if something happens to the EBS volume. What should you do to make sure you are able to recover the data all the time?* * Write a script that can copy the data to S3. * Take regular snapshots for the EBS volume. * Install a Kinesis agent on an EC2 server that can back up all the data to a different volume. * Use an EBS volume with PIOPS.

*Take regular snapshots for the EBS volume.* By taking regular snapshots, you can back up the EBS volumes. The snapshots are stored in S3, and they are always incremental, which means only the changed blocks will be saved in the next snapshot. ------------------------- You can write a script to copy all the data to S3, but when you already have the solution of snapshots available, then why reinvent the wheel? Using a Kinesis agent for backing up the EBS volume is not the right use case for Kinesis. It is used for ingesting, streaming, or batching data. An EBS volume with PIOPS is going to provide great performance, but the question asks for a solution for backing up.

*What happens to the EIP address when you stop and start an instance?* * The EIP is released to the pool and you need to re-attach. * The EIP is released temporarily during the stop and start. * The EIP remains associated with the instance. * The EIP is available for any other customer.

*The EIP remains associated with the instance.* Even during the stop and start of the instance, the EIP is associated with the instance. It gets detached when you explicitly terminate an instance.

*What happens when the Elastic Load Balancing fails the health check?* (Choose the best answer.) * The Elastic Load Balancing fails over to a different load balancer. * The Elastic Load Balancing keeps on trying until the instance comes back online. * The Elastic Load Balancing cuts off the traffic to that instance and starts a new instance. * The load balancer starts a bigger instance.

*The Elastic Load Balancing cuts off the traffic to that instance and starts a new instance.* When Elastic Load Balancing fails over, it is an internal mechanism that is transparent to end users. Elastic Load Balancing keeps on trying, but if the instance does not come back online, it starts a new instance. It does not wait indefinitely for that instance to come back online. The load balancer starts the new instance, which is defined in the launch configuration. It is going to start the same type of instance unless you have manually changed the launch configuration to start a bigger type of instance.

*An Instance is launched into a VPC subnet with the network ACL configured to allow all outbound traffic and deny all inbound traffic. The instance's security group is configured to allow SSH from any IP address. What changes need to be made to allow SSH access to the instance?* * The Outbound Security Group needs to be modified to allow outbound traffic. * The Inbound Network ACL needs to be modified to allow inbound traffic. * Nothing, it can be accessed from any IP address using SSH. * Both the Outbound Security Group and Outbound Network ACL need to be modified to allow outbound traffic.

*The Inbound Network ACL needs to be modified to allow inbound traffic.* The reason why the Network ACL has to have both an Allow for Inbound and Outbound is that network ACLs are stateless. Responses to allowed inbound traffic are subject to the rules for outbound traffic (and vice versa). Whereas for Security Groups, responses are stateful. So, if an incoming request is granted, by default an outgoing response will also be granted. ----------------------------------- Options A and D are invalid because Security Groups are stateful. Here, any traffic allowed in the Inbound rule is allowed in the Outbound rule too. Option C is incorrect.
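
For illustration, adding an inbound allow rule for SSH (TCP port 22) to the subnet's network ACL might look like the boto3 sketch below; the NACL ID and rule number are hypothetical. Because the question states the NACL already allows all outbound traffic, the SSH responses on ephemeral ports can flow out.

```python
import boto3

ec2 = boto3.client('ec2')

# Add an inbound (Egress=False) allow rule for SSH to the subnet's network ACL.
ec2.create_network_acl_entry(
    NetworkAclId='acl-0123456789abcdef0',   # hypothetical NACL ID
    RuleNumber=100,
    Protocol='6',                           # 6 = TCP
    RuleAction='allow',
    Egress=False,                           # inbound rule
    CidrBlock='0.0.0.0/0',
    PortRange={'From': 22, 'To': 22}
)
```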

*An application consists of the following architecture: a. EC2 Instances in multiple AZ's behind an ELB b. The EC2 Instances are launched via an Auto Scaling Group. c. There is a NAT instance which is used so that instances can download updates from the Internet. Which of the following is a bottleneck in the architecture?* * The EC2 Instances * The ELB * The NAT Instance * The Auto Scaling Group

*The NAT Instance* Since there is only one NAT instance, this is a bottleneck for the architecture. For high availability, launch NAT instances in multiple Availability Zones and make it part of the Auto Scaling group.

*Your company has been hosting a static website in an S3 bucket for several months and gets a fair amount of traffic. Now you want your registered .com domain to serve content from the bucket. Your domain is reached via https://www.myfavoritedomain.com. However, any traffic requested through https://www.myfavoritedomain.com is not getting through. What is the most likely cause of this disruption?* * The new domain is not registered in CloudWatch monitoring. * The S3 bucket has not been configured to allow Cross Origin Resource Sharing (CORS) * The S3 bucket was not created in the correct region. * Https://www.myfavoritedomain.com wasn't registered with AWS Route 53 and therefore won't work.

*The S3 bucket has not been configured to allow Cross Origin Resource Sharing (CORS)* In order to keep your content safe, your web browser implements something called the same origin policy. The default policy ensures that scripts and other active content loaded from one site or domain cannot interfere or interact with content from another location without an explicit indication that this is desired behavior. ----------------------------------- Option A is incorrect, enabling CloudWatch doesn't affect Cross Origin Resource Sharing (CORS). Option C is incorrect, S3 buckets are not region-specific. Option D is incorrect, the domain can be registered with any online registrar, not just AWS Route 53.

*Your data warehousing company has a number of different RDS instances. You have a medium size instance with automated backups switched on and a retention period of 1 week. One of your staff members carelessly deletes this database. Which of the following apply?* (Choose 2) * The automated backups will be retained for 2 weeks and then deleted after the 2 weeks have expired. * The automatic backups are deleted when the instance is deleted. * A final snapshot will be created automatically upon deletion. * A final snapshot MAY have been created when the instance was deleted, depending on whether the 'SkipFinalSnapshot' parameter was set to 'False'.

*The automatic backups are deleted when the instance is deleted.* *A final snapshot MAY have been created when the instance was deleted, depending on whether the 'SkipFinalSnapshot' parameter was set to 'False'.* Under normal circumstances, all automatic backups of an RDS instance are deleted upon termination. However, it is possible to create a final DB Snapshot upon deletion. If you do, you can use this DB Snapshot to restore the deleted DB Instance at a later date. Amazon RDS retains this final user-created DB Snapshot along with all other manually created DB Snapshots after the DB Instance is deleted.

*A company has a lot of data hosted on their On-premises infrastructure. Running out of storage space, the company wants a quick win solution using AWS. Which of the following would allow easy extension of their data infrastructure to AWS?* * The company could start using Gateway Cached Volumes. * The company could start using Gateway Stored Volumes. * The company could start using the Simple Storage Service. * The company could start using Amazon Glacier.

*The company could start using Gateway Cached Volumes.* Volume Gateway with Cached Volumes can be used to start storing data in S3. You store your data in Amazon Simple Storage Service (Amazon S3) and retain a copy of frequently accessed data subsets locally. Cached volumes offer a substantial cost savings on primary storage and minimize the need to scale your storage on-premises. You also retain low-latency access to your frequently accessed data. *Note:* The question states that they are running out of storage space and they need a solution to store data with AWS rather than a backup. For this purpose, gateway-cached volumes are appropriate, which will help them avoid scaling their on-premises data center and allow them to store data on an AWS storage service while keeping the most recently used files available locally at low latency. This is the difference between Cached and Stored volumes: *Cached volumes* - You store your data in S3 and retain a copy of frequently accessed data subsets locally. Cached volumes offer substantial cost savings on primary storage and minimize the need to scale your storage on-premises. You also retain low-latency access to your frequently accessed data. *Stored volumes* - If you need low-latency access to your entire data set, first configure your on-premises gateway to store all your data locally. Then asynchronously back up point-in-time snapshots of this data to Amazon S3. This configuration provides durable and inexpensive offsite backups that you can recover to your local data center or Amazon EC2. For example, if you need replacement capacity for disaster recovery, you can recover the backups to Amazon EC2. As described in the answer: the company wants a quick solution to store data with AWS, avoiding scaling of the on-premises setup, rather than backing up the data. In the question, they mentioned that *A company has a lot of data hosted on their On-premises infrastructure.* To extend from on-premises to cloud infrastructure, you can use AWS Storage Gateway. Option C is talking about the data store, but here the requirement is how to transfer or extend your data from on-premises to cloud infrastructure, and there is no clear process mentioned in Option C.

*What are the characteristics of AMIs that are backed by the instance store?* (Choose two.) * The data persists even after the instance reboot. * The data is lost when the instance is shut down. * The data persists when the instance is shut down. * The data persists when the instance is terminated.

*The data persists even after the instance reboot.* *The data is lost when the instance is shut down.* If an AMI is backed by an instance store, you lose all the data if the instance is shut down or terminated. However, the data persists if the instance is rebooted.

*You currently manage a set of web servers with public IP addresses. These IP addresses are mapped to domain names. There was an urgent maintenance activity that had to be carried out on the servers, and the servers had to be stopped and restarted. Now the web applications hosted on these EC2 Instances are not accessible via the domain names configured earlier. Which of the following could be a reason for this?* * The Route 53 hosted zone needs to be restarted. * The network interfaces need to be initialized again. * The public IP addresses need to be associated with the ENI again. * The public IP addresses have changed after the instance was stopped and started.

*The public IP addresses have changed after the instance was stopped and started.* By default, the public IP address of an EC2 Instance is released after the instance is stopped and started. Hence, the earlier IP address which was mapped to the domain names would have become invalid now.

*You are using CloudWatch to generate the metrics for your application. You have enabled a one-minute metric and are using the CloudWatch logs to monitor the metrics. You see some slowness in performance in the last two weeks and realize that you made some application changes three weeks back. You want to look at the data to see how the CPU utilization was three weeks back, but when you try looking at the logs, you are not able to see any data. What could be the reason for that?* * You have accidentally deleted all the logs. * The retention for CloudWatch logs is two weeks. * The retention for CloudWatch logs is 15 days. * You don't have access to the CloudWatch logs anymore.

*The retention for CloudWatch logs is 15 days.* For the one-minute data point, the retention is 15 days; thus, you are not able to see any data older than that. ----------------------------------- Since there is no data, you are not able to see the metrics. This is not an access issue, because three weeks back you made application changes, not AWS infrastructure changes.

*Using the shared security model, the customer is responsible for which of the following?* (Choose two.) * The security of the data running inside the database hosted in EC2 * Maintaining the physical security of the data center * Making sure the hypervisor is patched correctly * Making sure the operating system is patched correctly

*The security of the data running inside the database hosted in EC2* *Making sure the operating system is patched correctly* The customer is responsible for the security of anything running on the hypervisor, and therefore the operating system and the security of data are the customer's responsibility.

*Which of the following statements are true for Amazon Aurora?* (Choose three.) * The storage is replicated at three different AZs. * The data is copied at six different places. * It uses a quorum-based system for reads and writes. * Aurora supports all the commercial databases.

*The storage is replicated at three different AZs.* *The data is copied at six different places.* *It uses a quorum-based system for reads and writes.* Amazon Aurora supports only MySQL and PostgreSQL. It does not support commercial databases.

*You need to take a snapshot of the EBS volume. How long will the EBS remain unavailable?* * The volume will be available immediately. * EBS magnetic drive will take more time than SSD volumes. * It depends on the size of the EBS volume. * It depends on the actual data stored in the EBS volume.

*The volume will be available immediately.* The volumes are available irrespective of the time it takes to take the snapshot.

*How many EC2 instances can you have in an Auto Scaling group?* * 10. * 20. * 100. * There is no limit to the number of EC2 instances you can have in the Auto Scaling group

*There is no limit to the number of EC2 instances you can have in the Auto Scaling group* There is no limit to the number of EC2 instances you can have in the Auto Scaling group. However, there might be an EC2 instance limit in your account, which can be increased by logging a support ticket.

*Your app uses AWS Cognito Identity for authentication and stores user profiles in a User Pool. To expand the availability and ease of signing in to the app, your team is requesting advice on allowing the use of OpenID Connect (OIDC) identity providers as additional means of authenticating users and saving the user profile information. What is your recommendation on OIDC identity providers?* * This is supported, along with social and SAML based identity providers. * This is not supported, only social identity providers can be integrated into User Pools. * If you want OIDC identity providers, then you must include SAML and social based supports as well. * It's too much effort to add non-Cognito authenticated user information to a User Pool.

*This is supported, along with social and SAML based identity providers.* OpenID Connect (OIDC) identity providers (IdPs) (like Salesforce or Ping Identity) are supported in Cognito, along with social and SAML based identity providers. You can add an OIDC IdP to your pool in the AWS Management Console, with the AWS CLI, or by using the user pool API method CreateIdentityProvider. ----------------------------------- Option B is incorrect, Cognito supports more than just social identity providers, including OIDC, SAML, and its own identity pools. Option C is incorrect, you can add any combination of federated types, you don't have to add them all. Option D is incorrect, while there is additional coding to develop this, the effort is most likely not too great to add the feature.
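
A minimal sketch of registering an OIDC IdP through the user pool API with boto3. The pool ID, provider name, and ProviderDetails values are hypothetical, and the exact detail keys required depend on the provider; consult the Cognito documentation before using this.

```python
import boto3

cognito = boto3.client('cognito-idp')

cognito.create_identity_provider(
    UserPoolId='us-east-1_EXAMPLE',          # hypothetical user pool ID
    ProviderName='MyOIDCProvider',           # hypothetical provider name
    ProviderType='OIDC',
    ProviderDetails={                        # illustrative values only
        'client_id': 'example-client-id',
        'client_secret': 'example-client-secret',
        'attributes_request_method': 'GET',
        'oidc_issuer': 'https://idp.example.com',
        'authorize_scopes': 'openid email'
    },
    AttributeMapping={'email': 'email'}      # map the IdP's email claim to the pool attribute
)
```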

*True or False: There is a limit to the number of domain names that you can manage using Route 53.* * True and False. With Route 53, there is a default limit of 50 domain names. However, this limit can be increased by contacting AWS support. * False. By default, you can support as many domain names on Route 53 as you want. * True. There is a hard limit of 10 domain names. You cannot go above this number.

*True and False. With Route 53, there is a default limit of 50 domain names. However, this limit can be increased by contacting AWS support.*

*RDS Reserved instances are available for multi-AZ deployments.* * False * True

*True*

*Placement Groups can be created across 2 or more Availability Zones.* * True * False

*True* Technically they are called Spread placement groups. Now you can have placement groups across different hardware and multiple AZs.

*You are running an application in the us-east-1 region. The application needs six EC2 instances running at any given point in time. With five availability zones available in that region (us-east-1a, us-east-1b, us-east-1c, us-east-1d, us-east-1e), which of the following deployment models is going to provide fault tolerance and a cost-optimized architecture if one of the AZs goes down?* * Two EC2 instances in us-east-1a, two EC2 instances in us-east-1b, and two EC2 instances in us-east-1c * Two EC2 instances in us-east-1a, two EC2 instances in us-east-1b, two EC2 instances in us-east-1c, two EC2 instances in us-east-1d, and two EC2 instances in us-east-1e * Three EC2 instances in us-east-1a, three EC2 instances in us-east-1b, and three EC2 instances in us-east-1c * Two EC2 instances in us-east-1a, two EC2 instances in us-east-1b, two EC2 instances in us-east-1c, and two EC2 instances in us-east-1d * Six EC2 instances in us-east-1a and six EC2 instances in us-east-1b

*Two EC2 instances in us-east-1a, two EC2 instances in us-east-1b, two EC2 instances in us-east-1c, and two EC2 instances in us-east-1d* This is more of a mathematical question. As per this question, you should always be up and running with six EC2 servers even if you lose one AZ. With option D you will be running only eight servers at any point in time. Even if you lose an AZ, you will still be running with six EC2 instances. ------------------------- A is incorrect because at any point in time you will be running with six EC2 instances, but if one of the AZs goes down, you will be running with only four EC2 instances. B is incorrect because at any time you will be running a total of ten EC2 servers, since you are using five AZs and two EC2 servers in each AZ. If you lose an AZ, you will be running with eight EC2 servers. Though this meets the business requirement of running six EC2 servers even if an AZ goes down, this is not a cost-optimized solution. C is incorrect because at any point in time you will be running nine EC2 instances, and if an AZ goes down, you will still be able to run six EC2 servers. This meets the business objective of running six EC2 servers at any time, but let's evaluate another option to see whether we can find a better cost-optimized solution. With D, you will be running a total of eight EC2 servers at any point in time, and if an AZ fails, you will be running six EC2 servers. So with option D the total number of servers always running is eight compared to nine, which makes it the most cost-optimized solution. With option E, you will be running 12 instances at any point in time, which increases the cost.

*You have created a VPC in the Paris region, and one public subnet in each Availability Zone eu-west-3a, eu-west-3b, and eu-west-3c of the same region, with each subnet having one EC2 instance inside it. Now you want to launch ELB nodes in two AZs out of the three available. How many private IP addresses will be used by the ELB nodes at the initial launch of the ELB?* * Three nodes, one in each AZ, will consume three private IP addresses * Two nodes in two AZs will consume two private IP addresses * Two nodes in two AZs and one for the ELB service, hence a total of three IP addresses will be consumed. * The ELB service picks the private IP addresses only when the traffic flows through the Elastic Load Balancer.

*Two nodes in two AZs will consume two private IP addresses.* Whenever we launch the ELB, the ELB service will create a node in each selected subnet. ----------------------------------- Option A is incorrect because the problem statement says we would like to launch the ELB nodes in just two subnets out of three. The third subnet therefore does not have an ELB node inside it, and hence no IP address is consumed there. Option C is incorrect; whenever we launch an ELB, the ELB service itself won't consume an IP address, it's the ELB nodes which consume IP addresses. Option D is incorrect, as the IP addresses are assigned to the nodes at the initial launch of the ELB service.

*You have created a Redshift cluster without encryption, and now your security team wants you to encrypt all the data. How do you achieve this?* * It is simple. Just change the settings of the cluster and make it encrypted. All the data will be encrypted. * Encrypt the application that is loading data in Redshift; that way, all the data will already be encrypted. * Run the command encryption cluster from a SQL prompt on Redshift client. * Unload the data from the existing cluster and reload it to a new cluster with encryption on.

*Unload the data from the existing cluster and reload it to a new cluster with encryption on* If you launch a cluster without encryption, the data remains unencrypted during the life of the cluster. If at a later time you decide to encrypt the data, then the only way is to unload your data from the existing cluster and reload it into a new cluster with the encryption setting. ------------------------- You cannot change a Redshift cluster on the fly from non-encrypted to encrypted. Applications can encrypt the application data, but what about the data from other sources or old data in the cluster? There is no such command.

*A company has a set of web servers. It is required to ensure that all the logs from these web servers can be analyzed in real time for any sort of threat detection. Which of the following would assist in this regard?* * Upload all the logs to the SQS Service and then use EC2 Instances to scan the logs. * Upload the logs to Amazon Kinesis and then analyze the logs accordingly. * Upload the logs to CloudTrail and then analyze the logs accordingly. * Upload the logs to Glacier and then analyze the logs accordingly.

*Upload the logs to Amazon Kinesis and then analyze the logs accordingly.* Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. Amazon Kinesis offers key capabilities to cost-effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application. With Amazon Kinesis, you can ingest real-time data such as video, audio, application logs, website clickstreams, and IoT telemetry data for machine learning, analytics, and other applications.
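
For illustration, a web server (or a log agent) could push log lines into a Kinesis data stream for real-time analysis. A minimal producer sketch with boto3 follows; the stream name is hypothetical.

```python
import json
import boto3

kinesis = boto3.client('kinesis')


def send_log_line(server_id, line):
    # Each record is routed to a shard based on the partition key.
    kinesis.put_record(
        StreamName='web-server-logs',        # hypothetical stream
        Data=json.dumps({'server': server_id, 'line': line}).encode('utf-8'),
        PartitionKey=server_id
    )


send_log_line('web-01', 'GET /index.html 200')
```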

*Your company has a requirement to host a static web site in AWS. Which of the following steps would help implement a quick and cost-effective solution for this requirement?* (Choose 2) * Upload the static content to an S3 bucket. * Create an EC2 Instance and install a web server. * Enable web site hosting for the S3 bucket. * Upload the code to the web server on the EC2 Instance.

*Upload the static content to an S3 bucket.* *Enable web site hosting for the S3 bucket.* S3 would be an ideal, cost-effective solution for the above requirement. You can host a static website on Amazon Simple Storage Service (Amazon S3). On a static website, individual webpages include static content. They might also contain client-side scripts.
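
Once the static content is uploaded, enabling website hosting on the bucket is a single API call; a minimal boto3 sketch with a hypothetical bucket name (the bucket policy must also allow public reads, which is not shown here):

```python
import boto3

s3 = boto3.client('s3')

BUCKET = 'my-static-site-bucket'             # hypothetical bucket

# Upload the static content.
s3.upload_file('index.html', BUCKET, 'index.html',
               ExtraArgs={'ContentType': 'text/html'})

# Enable static website hosting on the bucket.
s3.put_bucket_website(
    Bucket=BUCKET,
    WebsiteConfiguration={
        'IndexDocument': {'Suffix': 'index.html'},
        'ErrorDocument': {'Key': 'error.html'}
    }
)
```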

*Your company has a set of EC2 Instances hosted in AWS. There is a mandate to prepare for disasters and come up with the necessary disaster recovery procedures. Which of the following would help in mitigating the effects of a disaster for the EC2 Instances?* * Place an ELB in front of the EC2 Instances. * Use AutoScaling to ensure the minimum number of instances are always running. * Use CloudFront in front of the EC2 Instances. * Use AMIs to recreate the EC2 Instances in another region.

*Use AMIs to recreate the EC2 Instances in another region.* You can create an AMI from the EC2 Instances and then copy it to another region. In case of a disaster, an EC2 Instance can be created from the AMI. ----------------------------------- Options A and B are good for fault tolerance, but cannot help completely in disaster recovery for EC2 Instances. Option C is incorrect because we cannot determine if CloudFront would be helpful in this scenario or not without knowing what is hosted on the EC2 Instance. For disaster recovery, we have to make sure that we can launch instances in another region when required. Hence, options A, B, and C are not feasible solutions.

*Which product should you choose if you want to have a solution for versioning your APIs without having the pain of managing the infrastructure?* * Install a version control system on EC2 servers * Use Elastic Beanstalk * Use API Gateway * Use Kinesis Data Firehose

*Use API Gateway* EC2 servers and Elastic Beanstalk both need you to manage some infrastructure; Kinesis Data Firehose is used for ingesting data.

*A company stores its log data in an S3 bucket. There is a current need to have search capabilities available for the data in S3. How can this be achieved in an efficient and ongoing manner?* (Choose 2) * Use AWS Athena to query the S3 bucket. * Create a Lifecycle Policy for the S3 bucket. * Load the data into Amazon ElasticSearch. * Load the data into Glacier.

*Use AWS Athena to query the S3 bucket.* *Load the data into Amazon ElasticSearch.* Amazon Athena is a service that enables a data analyst to perform interactive queries in the AWS public cloud on data stored in AWS S3. Since it's a serverless query service, an analyst doesn't need to manage any underlying compute infrastructure to use it.
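
For illustration, querying the log data in S3 with Athena only needs a table definition and a query. The boto3 sketch below starts a query and points the results at an S3 output location; the database, table, and bucket names are hypothetical.

```python
import boto3

athena = boto3.client('athena')

response = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) FROM access_logs "
                "WHERE status >= 500 GROUP BY status",    # hypothetical table
    QueryExecutionContext={'Database': 'weblogs'},         # hypothetical database
    ResultConfiguration={'OutputLocation': 's3://athena-query-results-bucket/'}
)
print(response['QueryExecutionId'])
```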

*Your company has a set of resources hosted on the AWS Cloud. As part of a new governing model, there is a requirement that all activity on AWS resources should be monitored. What is the most efficient way to have this implemented?* * Use VPC Flow Logs to monitor all activity in your VPC. * Use AWS Trusted Advisor to monitor all of your AWS resources. * Use AWS Inspector to inspect all of the resources in your account. * Use AWS CloudTrail to monitor all API activity.

*Use AWS CloudTrail to monitor all API activity.* AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. The event history simplifies security analysis, resource change tracking, and troubleshooting. Visibility into your AWS account activity is a key aspect of security and operational best practices. You can use CloudTrail to view, search, download, archive, analyze, and respond to account activity across your AWS infrastructure. You can identify who or what took which action, what resources were acted upon, when the event occurred, and other details to help you analyze and respond to activity in your AWS account. You can integrate CloudTrail into applications using the API, automate trail creation for your organization, check the status of trails you create, and control how users view CloudTrail events.

*A company's requirement is to have a Stack-based model for its resource in AWS. There is a need to have different stacks for the Development and Production environments. Which of the following can be used to fulfill this required methodology?* * Use EC2 tags to define different stack layers for your resources. * Define the metadata for the different layers in DynamoDB. * Use AWS OpsWorks to define the different layers for your application. * Use AWS Config to define the different layers for your application.

*Use AWS OpsWorks to define the different layers for your application.* AWS OpsWorks Stacks lets you manage applications and servers on AWS and on-premises. With OpsWorks Stacks, you can model your application as a stack containing layers, such as load balancing, database, and application server. You can deploy and configure Amazon EC2 instances in each layer or connect other resources such as Amazon RDS databases. A stack is basically a collection of instances that are managed together for serving a common task. Consider a sample stack whose purpose is to serve web applications; it will comprise the following instances. - A set of application server instances, each of which handles a portion of the incoming traffic. - A load balancer instance, which takes incoming traffic and distributes it across the application servers. - A database instance, which serves as a back-end data store for the application servers. A common practice is to have multiple stacks that represent different environments; a typical set of stacks consists of: - A development stack to be used by the developers to add features, fix bugs, and perform other development and maintenance tasks. - A staging stack to verify updates or fixes before exposing them publicly. - A production stack, which is the public-facing version that handles incoming requests from users.

*You want to connect the applications running on your on-premises data center to the AWS cloud. What is the secure way of connecting them to AWS?* (Choose two.) * Use AWS Direct Connect * VPN * Use an elastic IP * Connect to AWS via the Internet

*Use AWS Direct Connect* *VPN* By using Direct Connect or VPN, you can securely connect to AWS from your data center. ------------------------- An elastic IP address provides a static IP address that has nothing to do with connecting the on-premises data center with AWS. Connecting via the Internet won't be secure.

*A company is using a Redshift cluster to store their data warehouse. There is a requirement from the Internal IT Security team to encrypt the data for the Redshift database. How can this be achieved?* * Encrypt the EBS volumes of the underlying EC2 Instances. * Use AWS KMS Customer Default master key. * Use SSL/TLS for encrypting the data. * Use S3 encryption.

*Use AWS KMS Customer Default master key.* Amazon Redshift uses a hierarchy of encryption keys to encrypt the database. You can use either AWS Key Management Service (AWS KMS) or a hardware security module (HSM) to manage the top-level encryption key in this hierarchy. The process that Amazon Redshift uses for encryption differs depending on how you manage keys.
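As a hedged sketch (not part of the original answer), the snippet below shows how a KMS-encrypted Redshift cluster might be created with boto3; the cluster identifier, credentials, and key ARN are hypothetical placeholders.

```
import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

# Create a single-node cluster whose storage is encrypted at rest with a KMS key.
# All identifiers and credentials below are illustrative placeholders.
redshift.create_cluster(
    ClusterIdentifier="warehouse-demo",
    ClusterType="single-node",
    NodeType="dc2.large",
    MasterUsername="admin",
    MasterUserPassword="Str0ngPassw0rd!",
    Encrypted=True,
    KmsKeyId="arn:aws:kms:us-east-1:111122223333:key/example-key-id",
)
```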

*A company is planning on testing a large set of IoT-enabled devices. These devices will be streaming data every second. A proper service needs to be chosen in AWS which could be used to collect and analyze these streams in real time. Which of the following could be used for this purpose?* * Use AWS EMR to store and process the streams. * Use AWS Kinesis to process and analyze the data. * Use AWS SQS to store the data. * Use SNS to store the data.

*Use AWS Kinesis to process and analyze the data.* Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. Amazon Kinesis offers key capabilities to cost-effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application. With Amazon Kinesis, you can ingest real-time data such as video, audio, application logs, website clickstreams, and IoT telemetry data for machine learning, analytics, and other applications. Option B: Amazon Kinesis can be used to store, process and analyze real-time streaming data. ----------------------------------- Option A: Amazon EMR can be used to process applications with data-intensive workloads. Option C: SQS is a fully managed message queuing service that makes it easy to decouple and scale microservices, distributed systems, and serverless applications. Option D: SNS is a flexible, fully managed pub/sub messaging and mobile notifications service for coordinating the delivery of messages to subscribing endpoints and clients.
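Purely as a hedged illustration, the sketch below shows one device publishing a reading per second to a Kinesis data stream with boto3; the stream name and payload fields are hypothetical assumptions.

```
import json
import time
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

# Send one telemetry reading per second to a (hypothetical) stream.
while True:
    reading = {"device_id": "sensor-001", "timestamp": int(time.time()), "value": 42}
    kinesis.put_record(
        StreamName="iot-telemetry",            # assumed stream name
        Data=json.dumps(reading).encode(),
        PartitionKey=reading["device_id"],     # spreads records across shards
    )
    time.sleep(1)
```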

*A company with a set of Admin jobs (.NET Core) currently set up in the C# programming language, is moving their infrastructure to AWS. Which of the following would be an efficient means of hosting the Admin-related jobs in AWS?* * Use AWS DynamoDB to store the jobs and then run them on demand. * Use AWS Lambda functions with C# for the Admin jobs. * Use AWS S3 to store the jobs and then run them on demand. * Use AWS Config functions with C# for the Admin jobs.

*Use AWS Lambda functions with C# for the Admin jobs.* The best and most efficient option is to host the jobs using AWS Lambda. This service has the facility to have the code run in the C# programming language. AWS Lambda is a compute service that lets you run code without provisioning or managing servers. AWS Lambda executes your code only when needed and scales automatically, from a few requests per day to thousands per second. You pay only for the compute time you consume - there is no charge when your code is not running. With AWS Lambda, you can run code for virtually any type of application or backend service - all with zero administration.

*You are a developer and would like to run your code without provisioning or managing servers. Which service should you choose to do so while making sure it performs well operationally?* * Launch an EC2 server and run the code from there * Use API Gateway * Use AWS S3 * Use AWS Lambda

*Use AWS Lambda* Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume. There is no charge when your code is not running. ------------------------- If you run the code on an EC2 server, you still need to provision an EC2 server that has the overhead of managing the server. You can't run code from API Gateway. AWS S3 is an object store and can't be used for running code.

*You are running an application on Amazon EC2 instances, and that application needs access to other AWS resources. You don't want to store any long-term credentials on the instance. What service should you use to provide the short-term security credentials to interact with AWS resources?* * Use an IAM policy. * Use AWS Config * Use AWS Security Token Service (STS). * AWS CloudTrail.

*Use AWS Security Token Service (STS)* AWS Security Token Service (AWS STS) is used to create and provide trusted users with temporary security credentials that can control access to your AWS resources. ----------------------------------- An IAM policy can't provide temporary security credentials, AWS Config is used to monitor any change in the AWS resource, and AWS CloudTrail is used to monitor the trail on API calls.
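As a hedged sketch (not part of the original answer), the snippet below requests temporary credentials from STS via AssumeRole; the role ARN and session name are hypothetical placeholders. Note that when an instance role is attached to an EC2 Instance, the SDK fetches equivalent short-lived STS credentials automatically.

```
import boto3

sts = boto3.client("sts")

# Request short-lived credentials for a (hypothetical) role.
resp = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/app-dynamodb-access",
    RoleSessionName="app-session",
    DurationSeconds=900,                     # credentials expire after 15 minutes
)
creds = resp["Credentials"]

# Use the temporary credentials instead of long-term access keys.
session = boto3.Session(
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(session.client("sts").get_caller_identity())
```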

*A customer needs corporate IT governance and cost oversight of all AWS resources consumed by its divisions. Each division has its own AWS account and there is a need to ensure that the security policies are kept in place at the Account level. How can you achieve this?* (Choose 2) * Use AWS Organizations * Club all divisions under a single account instead * Use IAM Policies to segregate access * Use Service control policies

*Use AWS Organizations* *Use Service control policies* With AWS Organizations, you can centrally manage policies across multiple AWS accounts without having to use custom scripts and manual processes. For example, you can apply service control policies (SCPs) across multiple AWS accounts that are members of an organization. SCPs allow you to define which AWS service APIs can and cannot be executed by AWS Identity and Access Management (IAM) entities (such as IAM users and roles) in your organization's member AWS accounts. SCPs are created and applied from the master account, which is the AWS account that you used when you created your organization. ----------------------------------- Option B is incorrect since the question mentions that you need to use separate AWS accounts. Option C is incorrect since you need to use service control policies; AWS IAM doesn't provide the facility to define access permissions at that level, i.e. which AWS service APIs can and cannot be executed by IAM entities.
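A hedged sketch, run from the master account, of creating and attaching an SCP with boto3; the policy statements, names, and organizational unit ID are hypothetical placeholders.

```
import json
import boto3

org = boto3.client("organizations")

# A hypothetical SCP that denies leaving the organization and disabling CloudTrail.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": ["organizations:LeaveOrganization", "cloudtrail:StopLogging"],
            "Resource": "*",
        }
    ],
}

policy = org.create_policy(
    Content=json.dumps(scp),
    Description="Baseline guardrails for division accounts",
    Name="division-guardrails",
    Type="SERVICE_CONTROL_POLICY",
)

# Attach the SCP to an organizational unit (placeholder id) holding the division accounts.
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-examplerootid-exampleouid",
)
```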

*You are creating an application, and the application is going to store all the transactional data in a relational database system. You are looking for a highly available solution for a relational database for your application. Which option should you consider for this?* * Use Redshift * Use DynamoDB * Host the database on EC2 servers. * Use Amazon Aurora.

*Use Amazon Aurora* Amazon Aurora is the solution for relational databases that comes with built-in high availability. ------------------------- Redshift is the data-warehouse offering of AWS. DynamoDB is a NoSQL database, which is not a relational database. You can host the database in EC2, but you will have the overhead of managing the same whereas Aurora takes away all the management overhead from you.

*You are planning to run a mission-critical online order-processing system on AWS, and to run that application, you need a database. The database must be highly available and high performing, and you can't lose any data. Which database meets these criteria?* * Use an Oracle database hosted in EC2. * Use Amazon Aurora. * Use Redshift. * Use RDS MySQL.

*Use Amazon Aurora* Amazon Aurora stores six copies of the data across three AZs. It provides five times more performance than RDS MySQL. ----------------------------------- If you host an Oracle database on EC2 servers, it will be much more expensive compared to Amazon Aurora, and you need to manage it manually. Amazon Redshift is a data warehouse solution.

*You are working as an AWS Architect for a retail company using an AWS EC2 instance for a web application. The company is using Provisioned IOPS SSD EBS volumes to store the product database. This is a critical database & you need to ensure that appropriate backups are accomplished every 12 hours. Also, you need to ensure that storage space is optimally used for storing all these snapshots, removing all older files. Which of the following can help to meet this requirement with the least management overhead?* * Manually create snapshots & delete old snapshots for EBS volumes as this is critical data. * Use Amazon CloudWatch events to initiate AWS Lambda which will create snapshots of EBS volumes along with deletion of old snapshots. * Use Amazon Data Lifecycle Manager to schedule EBS snapshots and delete old snapshots as per retention policy. * Use a third-party tool to create snapshots of EBS volumes along with deletion of old snapshots.

*Use Amazon Data Lifecycle Manager to schedule EBS snapshots and delete old snapshots as per retention policy.* Amazon Data Lifecycle Manager can be used for creation, retention & deletion of EBS snapshots. It protects critical data by initiating backups of Amazon EBS volumes at selected intervals along with storing & deleting old snapshots to save storage space & cost. ----------------------------------- Option A is incorrect as this will result in additional admin work & there can be a risk of losing critical data due to manual errors. Option B is incorrect as for this we will need to make additional configuration changes in CloudWatch & AWS Lambda. Option D is incorrect as this will result in additional cost to maintain a third-party software.
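A minimal, hedged sketch of a Data Lifecycle Manager policy via boto3 that snapshots tagged volumes every 12 hours and keeps the last four snapshots; the execution role ARN and tag values are hypothetical placeholders.

```
import boto3

dlm = boto3.client("dlm", region_name="us-west-2")

# Snapshot tagged volumes every 12 hours; older snapshots are removed automatically.
# The execution role ARN and tag values are illustrative placeholders.
dlm.create_lifecycle_policy(
    ExecutionRoleArn="arn:aws:iam::111122223333:role/AWSDataLifecycleManagerDefaultRole",
    Description="12-hourly snapshots of critical product database volumes",
    State="ENABLED",
    PolicyDetails={
        "ResourceTypes": ["VOLUME"],
        "TargetTags": [{"Key": "Backup", "Value": "ProductDB"}],
        "Schedules": [
            {
                "Name": "every-12-hours",
                "CreateRule": {"Interval": 12, "IntervalUnit": "HOURS", "Times": ["09:00"]},
                "RetainRule": {"Count": 4},
                "CopyTags": True,
            }
        ],
    },
)
```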

*As a Solutions Architect for a multinational organization having more than 150,000 employees, management has decided to implement real-time analysis of their employees' time spent in offices across the globe. You are tasked to design an architecture which will receive the inputs from 10,000+ sensors, with swipe machines sending in and out data from across the globe, each sending 20KB of data every 5 seconds in JSON format. The application will process and analyze the data and upload the results to dashboards in real time. Other application requirements: the ability to apply real-time analytics on the captured data; processing of the captured data must be parallel and durable; the application must be scalable as the load varies and new sensors are added or removed at various facilities. The analytic processing results are stored in a persistent data storage for data mining. What combination of AWS services would be used for the above scenario?* * Use EMR to copy the data coming from Swipe machines into DynamoDB and make it available for analytics. * Use Amazon Kinesis Streams to ingest the Swipe data coming from sensors, Custom Kinesis Streams Applications will analyse the data, move analytics outcomes to RedShift using AWS EMR * Utilize SQS to receive the data coming from sensors, use Kinesis Firehose to analyse the data from SQS, then save the results to a Multi-AZ RDS instance. * Use Amazon Kinesis Streams to ingest the sensor's data, custom Kinesis Streams applications will analyse the data, move analytics outcomes to RDS using AWS EMR.

*Use Amazon Kinesis Streams to ingest the Swipe data coming from sensors, Custom Kinesis Streams Applications will analyse the data, move analytics outcomes to RedShift using AWS EMR.* Option B is correct, as Amazon Kinesis Streams can be used to read data from thousands of sources such as social media, survey-based data, etc., and Kinesis Streams applications can analyse the data and feed it, using AWS EMR, to an analytics database like Redshift, which works on OLAP. ----------------------------------- Option A is incorrect; EMR is not for receiving real-time data from thousands of sources. EMR is mainly used for Hadoop-ecosystem-based Big Data analysis. Option C is incorrect; SQS cannot be used to read real-time data from thousands of sources. Besides, Kinesis Firehose is used to ship data to other AWS services, not for analysis, and RDS is again an OLTP-based database. Option D is incorrect; AWS EMR can read large amounts of data, however RDS is a transactional database based on OLTP and thus cannot store the analytical data.

*A company plans to have their application hosted in AWS. This application has users uploading files and then using a public URL for downloading them at a later stage. Which of the following designs would help fulfill this requirement?* * Have EBS Volumes hosted on EC2 Instances to store the files. * Use Amazon S3 to host the files. * Use Amazon Glacier to host the files since this would be the cheapest storage option. * Use EBS Snapshots attached to EC2 Instances to store the files.

*Use Amazon S3 to host the files.* If you need storage for the Internet, AWS Simple Storage Service is the best option. Each uploaded file gets a URL, which can be made public and used to download the file at a later point in time. ----------------------------------- Options A and D are incorrect because EBS Volumes or Snapshots do not have a Public URL. Option C is incorrect because Glacier is mainly used for data archiving purposes.

*A company has an application hosted in AWS. This application consists of EC2 Instances which sit behind an ELB. The following are requirements from an administrative perspective: a) Ensure notifications are sent when the read requests go beyond 1000 requests per minute. b) Ensure notifications are sent when the latency goes beyond 10 seconds. c) Any API activity which calls for sensitive data should be monitored. Which of the following can be used to satisfy these requirements?* (Choose 2) * Use CloudTrail to monitor the API Activity. * Use CloudWatch logs to monitor the API Activity. * Use CloudWatch metrics for the metrics that need to be monitored as per the requirement and set up an alarm activity to send out notifications when the metric reaches the set threshold limit. * Use custom log software to monitor the latency and read requests to the ELB.

*Use CloudTrail to monitor the API Activity.* *Use CloudWatch metrics for the metrics that need to be monitored as per the requirement and set up an alarm activity to send out notifications when the metric reaches the set threshold limit.* When you use CloudWatch metrics for an ELB, you can get the amount of read requests and latency out of the box. CloudTrail is a web service that records AWS API calls for your AWS account and delivers log files to an Amazon S3 bucket. The recorded information includes the identity of the user, the start time of the AWS API call, the source IP address, the request parameters, and the response elements returned by the service.
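As a hedged sketch of the CloudWatch alarm part of the answer, the snippet below alarms when average ELB latency exceeds 10 seconds; the load balancer name and SNS topic ARN are hypothetical placeholders. A similar alarm on the RequestCount metric (Statistic "Sum", threshold 1000, period 60 seconds) would cover the read-request requirement, while CloudTrail handles the API-activity part.

```
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Alarm when average Classic ELB latency exceeds 10 seconds for one minute.
# The load balancer name and SNS topic ARN are illustrative placeholders.
cloudwatch.put_metric_alarm(
    AlarmName="elb-high-latency",
    Namespace="AWS/ELB",
    MetricName="Latency",
    Dimensions=[{"Name": "LoadBalancerName", "Value": "my-classic-elb"}],
    Statistic="Average",
    Period=60,
    EvaluationPeriods=1,
    Threshold=10,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:ops-notifications"],
)
```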

*Your company currently has a set of EC2 Instances hosted in AWS. The states of these instances need to be monitored and each state change needs to be recorded. Which of the following can help fulfill this requirement?* (Choose 2) * Use CloudWatch logs to store the state change of the instances. * Use CloudWatch Events to monitor the state change of the events. * Use SQS trigger a record to be added to a DynamoDB table. * Use AWS Lambda to store a change record in a DynamoDB table.

*Use CloudWatch logs to store the state change of the instances.* *Use CloudWatch Events to monitor the state change of the events.* Using CloudWatch Events, we can monitor the state changes of EC2 instances. Using CloudWatch Logs, the state changes of EC2 instances can be recorded. ----------------------------------- Option C is incorrect as SQS cannot be used for monitoring. Option D is incorrect as AWS Lambda cannot be used for monitoring.
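As a hedged sketch, the snippet below creates a CloudWatch Events rule that matches EC2 state-change notifications and targets a CloudWatch Logs log group so each change is recorded; the log group ARN is a hypothetical placeholder.

```
import json
import boto3

events = boto3.client("events", region_name="us-east-1")

# Rule that matches every EC2 instance state-change notification.
events.put_rule(
    Name="ec2-state-change",
    EventPattern=json.dumps({
        "source": ["aws.ec2"],
        "detail-type": ["EC2 Instance State-change Notification"],
    }),
    State="ENABLED",
)

# Send the matched events to a (hypothetical) CloudWatch Logs log group for record keeping.
events.put_targets(
    Rule="ec2-state-change",
    Targets=[{
        "Id": "log-instance-states",
        "Arn": "arn:aws:logs:us-east-1:111122223333:log-group:/aws/events/ec2-state-changes",
    }],
)
```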

*A company is planning to run a number of Admin scripts using the AWS Lambda service. There is a need to detect errors that occur while the scripts run. How can this be accomplished in the most effective manner?* * Use CloudWatch metrics and logs to watch for errors. * Use CloudTrail to monitor for errors. * Use the AWS Config service to monitor for errors. * Use the AWS Inspector service to monitor for errors.

*Use CloudWatch metrics and logs to watch for errors.* AWS Lambda automatically monitors Lambda functions on your behalf, reporting metrics through Amazon CloudWatch. To help you troubleshoot failures in a function, Lambda logs all requests handled by your function and also automatically stores logs generated by your code through Amazon CloudWatch Logs.

*You are working as an AWS Administrator for a global IT company. The Developer Team has developed a new intranet application for project delivery using an AWS EC2 instance in us-west-2. Coding for this application is done using Python with a code size of less than 5 MB, and changes to the code are made on a quarterly basis. This new application will be set up on new redundant infrastructure & the company would like to automate this process. For deploying new features, AWS CodePipeline will be used for an automated release cycle. Which of the following will you recommend as the source stage & deploy stage integration along with AWS CodePipeline?* * Use CodePipeline with source stage as CodeCommit & deploy stage using AWS CodeDeploy * Use CodePipeline with source stage as CodeCommit & deploy stage using AWS Elastic Beanstalk. * Use CodePipeline with source stage as an S3 bucket having versioning enabled & deploy stage using AWS Elastic Beanstalk * Use CodePipeline with source stage as an S3 bucket having versioning enabled & deploy stage using AWS CodeDeploy

*Use CodePipeline with source stage as CodeCommit & deploy stage using AWS Elastic Beanstalk.* As the code size is less than 5 MB with a small number of code changes, AWS CodeCommit can be used as the source stage integration with AWS CodePipeline. Also, new infrastructure needs to be built for this application deployment; AWS Elastic Beanstalk can be used to build & manage redundant resources. ----------------------------------- Option A is incorrect, as there is no existing infrastructure & new resources need to be deployed; AWS CodeDeploy is not a correct option. Option C is incorrect as the code size is less than 5 MB with a small number of changes; S3 would not be a correct option. Option D is incorrect, as there is no existing infrastructure & new resources need to be deployed, so AWS CodeDeploy is not a correct option. Also, the code size is less than 5 MB with a small number of changes, so S3 would not be a correct option.

*You are a startup company that is releasing the first iteration of its app. Your company doesn't have a directory service for its intended users but wants the users to be able to sign in and use the app. What is your advice to your leadership to implement a solution quickly?* * Use AWS Cognito although it only supports social identity providers like Facebook. * Let each user create an AWS user account to be managed via IAM. * Invest heavily in Microsoft Active Directory as it's the industry standard. * Use Cognito Identity along with a User Pool to securely save user's profile attributes.

*Use Cognito Identity along with a User Pool to securely save user's profile attributes.* ----------------------------------- Option A is incorrect; Cognito supports more than just social identity providers, including OIDC, SAML, and its own identity pools. Option B isn't an efficient means of managing user authentication. Option C isn't the most efficient means to authenticate and save user information.

*A company has setup an application in AWS that interacts with DynamoDB. It is required that when an item is modified in a DynamoDB table, an immediate entry is made to the associated application. How can this be accomplished?* (Choose 2) * Setup CloudWatch to monitor the DynamoDB table for changes. Then trigger a Lambda function to send the changes to the application. * Setup CloudWatch logs to monitor the DynamoDB table for changes. Then trigger AWS SQS to send the changes to the application. * Use DynamoDB streams to monitor the changes to the DynamoDB table. * Trigger a lambda function to make an associated entry in the application as soon as the DynamoDB streams are modified.

*Use DynamoDB streams to monitor the changes to the DynamoDB table.* *Trigger a lambda function to make an associated entry in the application as soon as the DynamoDB streams are modified.* When you enable DynamoDB Streams on a table, you can associate the stream ARN with a Lambda function that you write. Immediately after an item in the table is modified, a new record appears in the table's stream. AWS Lambda polls the stream and invokes your Lambda function synchronously when it detects new stream records. Since our requirement is to have an immediate entry made to an application in case an item in the DynamoDB table is modified, a lambda function is also required. Example: Consider a mobile gaming app that writes to a GameScores table. Whenever the top score of the GameScores table is updated, a corresponding stream record is written to the table's stream. This event could then trigger a Lambda function that posts a congratulatory message on a social media handle. DynamoDB streams can be used to monitor the changes to a DynamoDB table. A DynamoDB stream is an ordered flow of information about changes to items in an Amazon DynamoDB table. When you enable a stream on a table, DynamoDB captures information about every modification to data items in the table. *Note:* DynamoDB is integrated with Lambda so that you can create triggers for events in DynamoDB Streams.
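A minimal, hedged sketch of the Lambda side (Python runtime assumed): the handler receives batches of DynamoDB stream records and forwards the modified items to the application, which here is represented by a placeholder function.

```
# Minimal sketch of a Lambda handler wired to a DynamoDB stream.
# Forwarding to the downstream application is a placeholder.

def notify_application(change):
    # Placeholder: push the change to the associated application (HTTP call, queue, etc.)
    print(f"Forwarding change: {change}")


def lambda_handler(event, context):
    for record in event["Records"]:
        if record["eventName"] in ("INSERT", "MODIFY"):
            # NewImage holds the item attributes after the modification.
            new_image = record["dynamodb"].get("NewImage", {})
            notify_application(new_image)
    return {"processed": len(event["Records"])}
```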

*You want to host a static website on AWS. As a solutions architect, you have been given a task to establish a serverless architecture for that. Which of the following could be included in the proposed architecture?* (Choose 2) * Use DynamoDB to store data in tables. * Use EC2 to host the data on an EBS Volume. * Use the Simple Storage Service to store data. * Use AWS RDS to store the data.

*Use DynamoDB to store data in tables.* *Use the Simple Storage Service to store data.* Both the Simple Storage Service and DynamoDB are completely serverless offerings from AWS: you don't need to maintain servers, and your application has automated high availability.

*You have created multiple VPCs in AWS for different business units. Each business unit is running its resources within its own VPC. The data needs to be moved from one VPC to the other. How do you securely connect two VPCs?* * Use a VPN connection. * Use Direct Connect. * Use VPC peering. * Nothing needs to be done. Since both VPCs are inside the same account, they can talk to each other.

*Use VPC peering.* Using VPC peering, you can securely connect two VPCs. ------------------------- VPN and Direct Connect are used to connect an on-premises data center with AWS. Even if the VPCs are part of the same account, they can't talk to each other unless peered.

*A company has a set of EBS Volumes that need to be catered for in case of a disaster. How will you achieve this using existing AWS services effectively?* * Create a script to copy the EBS Volume to another Availability Zone. * Create a script to copy the EBS Volume to another region. * Use EBS Snapshots to create the volumes in another region. * Use EBS Snapshots to create the volumes in another Availability Zone.

*Use EBS Snapshots to create the volumes in another region.* A snapshot is constrained to the region where it was created. After you create a snapshot of an EBS volume, you can use it to create a new volume in the same region. You can also copy snapshots across regions, making it possible to use multiple regions for geographical expansion, data center migration, and disaster recovery. *Note:* It's not possible to provide each and every step in an option, and in the AWS exam you will see these kinds of options. Option C does not describe the whole procedure; it simply gives the idea that we can use snapshots to create the volumes in the other region. That's the reason we also provided the explanation part to understand the concept. Catered means - provisioning. ----------------------------------- Options A and B are incorrect, because you can't directly copy EBS Volumes. Option D is incorrect because disaster recovery always looks at ensuring resources are created in another region.
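A hedged sketch of the cross-region copy step with boto3; the snapshot ID and region names are hypothetical placeholders, and the call is issued against the destination region.

```
import boto3

# Copy an existing snapshot from us-east-1 into us-west-2 for disaster recovery.
# The snapshot ID and regions are illustrative placeholders.
ec2_dr = boto3.client("ec2", region_name="us-west-2")

copy = ec2_dr.copy_snapshot(
    SourceRegion="us-east-1",
    SourceSnapshotId="snap-0123456789abcdef0",
    Description="DR copy of the production data volume",
)
print("New snapshot in us-west-2:", copy["SnapshotId"])
# A new volume can then be created from copy["SnapshotId"] in any AZ of us-west-2.
```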

*A company is planning to use Docker containers and necessary container orchestration tools for their batch processing requirements. There is a requirement for batch processing for both critical and non-critical data. Which of the following is the best implementation step for this requirement, to ensure that cost is effectively managed?* * Use Kubernetes for container orchestration and Reserved instances for all underlying instances. * Use ECS orchestration and Reserved Instances for all underlying instances. * Use Docker for container orchestration and a combination of Spot and Reserved Instances for the underlying instances. * Use ECS for container orchestration and a combination of Spot and Reserved Instances for the underlying instances.

*Use ECS for container orchestration and a combination of Spot and Reserved Instances for the underlying instances.* The Elastic Container service from AWS can be used for container orchestration. Since there are both critical and non-critical loads, one can use Spot instances for the non-critical workloads for ensuring cost is kept at a minimum.

*You are working for a financial institution using AWS cloud infrastructure. All project-related data is uploaded to Amazon EFS. This data is retrieved from the on-premises data centre connecting to the VPC via AWS Direct Connect. You need to ensure that all client access to EFS is encrypted using TLS 1.2 to adhere to the latest security guidelines issued by the security team. Which of the following is the cost-effective recommended practice for securing data in transit while accessing data from Amazon EFS?* * Use EFS mount helper to encrypt data in transit. * Use stunnel to connect to Amazon EFS & encrypt traffic in transit. * Use third-party tool to encrypt data in transit. * Use NFS client to encrypt data in transit.

*Use EFS mount helper to encrypt data in transit* While mounting Amazon EFS, if encryption of data in transit is enabled, the EFS mount helper initialises a client stunnel process to encrypt data in transit. The EFS mount helper uses TLS 1.2 to encrypt data in transit. ----------------------------------- * Option B is incorrect as using stunnel for encryption of data in transit will work fine, but there would be additional admin work to download & install stunnel for each mount. * Option C is incorrect as using a third-party tool will be a costly option. * Option D is incorrect as the NFS client can't be used to encrypt data in transit. The amazon-efs-utils package can be used, which consists of the EFS mount helper.

*An application team needs to quickly provision a development environment consisting of a web and database layer. Which of the following would be the quickest and most ideal way to get this setup in place?* * Create Spot Instances and install the web and database components. * Create Reserved Instances and install the web and database components. * Use AWS Lambda to create the web components and AWS RDS for the database layer. * Use Elastic Beanstalk to quickly provision the environment.

*Use Elastic Beanstalk to quickly provision the environment.* With Elastic Beanstalk, you can quickly deploy and manage applications in the AWS Cloud without worrying about the infrastructure that runs those applications. AWS Elastic Beanstalk reduces management complexity without restricting choice or control. You simply upload your application, and Elastic Beanstalk automatically handles the details of capacity provisioning, load balancing, scaling, and application health monitoring. It does support RDS. AWS Elastic Beanstalk provides connection information to your instances by setting environment properties for the database hostname, username, password, table name, and port. When you add a database to your environment, its lifecycle is tied to your environment's. ----------------------------------- Option A is incorrect; Amazon EC2 Spot Instances are spare compute capacity in the AWS cloud available to you at steep discounts compared to On-Demand prices. Option B is incorrect; a Reserved Instance is a reservation of resources and capacity, for either one or three years, for a particular Availability Zone within a region. Option C is incorrect; AWS Lambda is a compute service that makes it easy for you to build applications that respond quickly to new information, not to provision a new environment.

*You have a homegrown application that you want to deploy in the cloud. The application runs on multiple EC2 instances. During the batch job process, the EC2 servers create an output file that is often used as an input file by a different EC2 server. This output file needs to be stored in a shared file system so that all the EC2 servers can access it at the same time. How do you achieve this so that your performance is not impacted?* * Create an EBS volume and mount it across all the EC2 instances. * Use Amazon S3 for storing the output files. * Use S3-infrequent access for storing the output files. * Use Elastic File System for storing the output files.

*Use Elastic File System for storing the output files.* EFS provides the shared file system access. ------------------------- An EBS volume can't be mounted in more than one EC2 instance. S3 is an object store and can't be used for this purpose.

*There is a requirement for EC2 Instances in a private subnet to access an S3 bucket. It is required that the traffic does not traverse to the Internet. Which of the following can be used to fulfill this requirement?* * VPC Endpoint * NAT Instance * NAT Gateway * Internet Gateway

*VPC Endpoint* A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network.
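A hedged sketch of creating a gateway endpoint for S3 with boto3; the VPC ID, route table ID, and region-specific service name are hypothetical placeholders.

```
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway endpoint so instances in private subnets reach S3 without
# traversing the internet. VPC and route table IDs are placeholders.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],   # route table of the private subnet
)
```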

*When managing permissions for the API Gateway, what can be used to ensure that the right level of permissions are given to Developers, IT Admins and users? These permissions should be easily managed.* * Use the secure token service to manage the permissions for different users. * Use IAM Policies to create different policies for different types of users. * Use AWS Config tool to manage the permissions for different users. * Use IAM Access Keys to create sets of keys for different types of users.

*Use IAM Policies to create different policies for different types of users.* You control access to Amazon API Gateway with IAM permissions by controlling access to the following two API Gateway component processes. - To create, deploy, and manage API in API Gateway, you must grant the API developer permissions to perform the required actions supported by the API management component of API Gateway. - To call a deployed API or to refresh the API caching, you must grant the API caller permissions to perform required IAM actions supported by the API execution component of API Gateway.

*An EC2 Instance hosts a Java-based application that accesses a DynamoDB table. This EC2 Instance is currently serving production users. Which of the following is a secure way for the EC2 Instance to access the DynamoDB table?* * Use IAM Roles with permissions to interact with DynamoDB and assign it to the EC2 Instance. * Use KMS Keys with the right permissions to interact with DynamoDB and assign it to the EC2 Instance. * Use IAM Access Keys with the right permissions to interact with DynamoDB and assign it to the EC2 Instance. * Use IAM Access Groups with the right permission to interact with DynamoDB and assign it to the EC2 Instance.

*Use IAM Roles with permissions to interact with DynamoDB and assign it to the EC2 Instance.* To ensure secure access to AWS resources from EC2 Instances, always assign a role to the EC2 Instance. An IAM role is similar to a user, in that it is an AWS identity with permission policies that determine what the identity can and cannot do in AWS. However, instead of being uniquely associated with one person, a role is intended to be assumable by anyone who needs it. Also, a role does not have standard long-term credentials (password or access keys) associated with it. Instead, if a user assumes a role, temporary security credentials are created dynamically and provided to the user. You can use roles to delegate access to users, applications, or services that don't normally have access to your AWS resources. *Note:* You can attach an IAM role to an existing EC2 instance.

*The application you are going to deploy on the EC2 servers is going to call APIs. How do you securely pass the credentials to your application?* * Use IAM roles for the EC2 instance. * Keep the API credentials in S3. * Keep the API credentials in DynamoDB. * Embed the API credentials in your application JAR files.

*Use IAM roles for the EC2 instance.* IAM roles allow you to securely pass the credentials from one service to the other. ------------------------- Even if you use S3 or DynamoDB for storing the API credentials, you still need to connect EC2 with S3 or DynamoDB. How do you pass the credentials between EC2 and S3 or DynamoDB? Embedding the API credentials in a JAR file is not secure.

*A customer wants to create EBS Volumes in AWS. The data on the volume is required to be encrypted at rest. How can this be achieved?* * Create an SSL Certificate and attach it to the EBS Volume. * Use KMS to generate encryption keys which can be used to encrypt the volume. * Use CloudFront in front of the EBS Volume to encrypt all requests. * Use EBS Snapshots to encrypt the requests.

*Use KMS to generate encryption keys which can be used to encrypt the volume.* When you create a volume, you have the option to encrypt the volume using keys generated by the Key Management Service. ----------------------------------- Option A is incorrect since SSL helps to encrypt data in transit. Option C is incorrect because it also does not help in encrypting the data at rest. Option D is incorrect because the snapshot of an unencrypted volume is also unencrypted.
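As a hedged sketch, the snippet below creates an EBS volume encrypted at rest with a customer managed KMS key via boto3; the Availability Zone, size, and key ARN are hypothetical placeholders.

```
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# 100 GiB volume encrypted at rest with a customer managed KMS key.
# The Availability Zone and key ARN are illustrative placeholders.
ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,
    VolumeType="gp2",
    Encrypted=True,
    KmsKeyId="arn:aws:kms:us-east-1:111122223333:key/example-key-id",
)
```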

*You are creating a data lake, and one of the criteria you are looking for is faster performance. You are looking for an ability to transform the data directly during the ingestion process to save time. Which AWS service should you choose for this?* * Use Kinesis Analytics to transform the data. * Ingest the data in S3 and then load it in Redshift to transform it. * Ingest the data in S3 and then use EMR to transform it. * Use Kinesis Firehose.

*Use Kinesis Analytics to transform the data* Kinesis Analytics has the ability to transform the data during ingestion. ------------------------- Loading the data in S3 and then using EMR to transform it is going to take a lot of time. You are looking for faster performance. Redshift is the data warehouse solution. Kinesis Firehose can ingest the data, but does not have any ability to transform the data.

*You want to transform the data while it is coming in. What is the easiest way of doing this?* * Use Kinesis Data Analytics * Spin off an EMR cluster while the data is coming in * Install Hadoop on EC2 servers to do the processing * Transform the data in S3

*Use Kinesis Data Analytics* Using EC2 servers or Amazon EMR, you can transform the data, but that is not the easiest way to do it. S3 is just the data store; it does not have any transformation capabilities.

*You want to use S3 for the distribution of your software, but you want only authorized users to download the software. What is the way to achieve this?* * Encrypt the S3 bucket. * Use a signed URL. * Restrict the access via CloudFront. * Use the IAM role to restrict the access.

*Use a signed URL.* When you move your static content to an S3 bucket, you can protect it from unauthorized access via CloudFront signed URLs. A signed URL includes additional information, for example, an expiration date and time, that gives you more control over access to your content. This is how a signed URL works: the web server obtains temporary credentials to the S3 content. It creates a signed URL based on those credentials that allows access. It provides this link in the content returned (the signed URL) to the client, and this link is valid for a limited period of time. ----------------------------------- Encryption is different from access. You can't restrict user-level access via CloudFront. You can't do this via an IAM role.

*A company has an infrastructure that consists of machines which keep sending log information every 5 minutes. The number of these machines can run into thousands and it is required to ensure that the data can be analyzed at a later stage. Which of the following would help in fulfilling this requirement?* * Use Kinesis Data Streams with S3 to take the logs and store them in S3 for further processing. * Launch an Elastic Beanstalk application to take the processing job of the logs. * Launch an EC2 instance with enough EBS volumes to consume the logs which can be used for further processing. * Use CloudTrail to store all the logs which can be analyzed at a later stage.

*Use Kinesis Data Streams with S3 to take the logs and store them in S3 for further processing* Amazon Kinesis Data Firehose is the easiest way to load streaming data into data stores and analytics tools. It can capture, transform, and load streaming data into Amazon S3, Amazon Redshift, Amazon ElasticSearch, and Splunk, enabling near real-time analytics with existing business intelligence tools and dashboards you're already using today.

*You have the requirement to ingest the data in real time. What product should you choose?* * Upload the data directly to S3 * Use S3 IA * Use S3 reduced redundancy * Use Kinesis Data Streams

*Use Kinesis Data Streams* You can use S3 for storing the data, but if the requirement is to ingest the data in real time, S3 is not the right solution.

*Your company has started hosting their data on AWS by using the Simple Storage Service. They are storing files which are downloaded by users on a frequent basis. After a duration of 3 months, the files need to be transferred to archive storage since they are not used beyond this point. Which of the following could be used to effectively manage this requirement?* * Transfer the files via scripts from S3 to Glacier after a period of 3 months * Use Lifecycle policies to transfer the files onto Glacier after a period of 3 months * Use Lifecycle policies to transfer the files onto Cold HDD after a period of 3 months * Create a snapshot of the files in S3 after a period of 3 months

*Use Lifecycle policies to transfer the files onto Glacier after a period of 3 months.* To manage your objects so they are stored cost effectively throughout their lifecycle, configure their lifecycle. A *lifecycle configuration* is a set of rules that define actions that Amazon S3 applies to a group of objects. There are two types of actions: *Transition actions* - Define when objects transition to another storage class. *Expiration actions* - Define when objects expire. ----------------------------------- Option A is invalid since there is already the option of lifecycle policies. Option C is invalid since lifecycle policies are used to transfer to Glacier or S3-Infrequent Access. Option D is invalid since snapshots are used for EBS volumes.
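A hedged sketch of such a lifecycle rule applied with boto3; the bucket name is a hypothetical placeholder and 90 days approximates the 3-month requirement in the question.

```
import boto3

s3 = boto3.client("s3")

# Transition every object in a (hypothetical) bucket to Glacier after 90 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="company-downloads-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-after-3-months",
                "Filter": {"Prefix": ""},          # apply to all objects
                "Status": "Enabled",
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```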

*A database is being hosted using the AWS RDS service. The database is to be made into a production database and is required to have high availability. Which of the following can be used to achieve this requirement?* * Use Multi-AZ for the RDS instance to ensure that a secondary database is created in another region. * Use the Read Replica feature to create another instance of the DB in another region. * Use Multi-AZ for the RDS instance to ensure that a secondary database is created in another Availability Zone. * Use the Read Replica feature to create another instance of the DB in another Availability Zone.

*Use Multi-AZ for the RDS instance to ensure that a secondary database is created in another Availability Zone.* Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). ----------------------------------- Option A is incorrect because the Multi-AZ feature allows for high availability across Availability Zones and not regions. Options B and D are incorrect because Read Replicas can be used to offload database reads. But if you want high availability then opt for the Multi-AZ feature.

*You are working as an AWS consultant for a banking institute. They have deployed a digital wallet platform for clients using multiple EC2 instances in the us-east-1 region. The application establishes a secure encrypted connection between the client & the EC2 instance for each transaction using custom TCP port 5810.* *Due to the increasing popularity of this digital wallet, they are observing load on the backend servers resulting in a delay in transactions. For security purposes, all client IP addresses accessing this application should be preserved & logged. The technical team of the banking institute is looking for a solution which will address this delay & the proposed solution should also be compatible with millions of transactions done simultaneously. Which of the following is a recommended option to meet this requirement?* * Use Network Load Balancers with SSL certificate. Configure TLS Listeners on this NLB with custom security policy consisting of protocols & ciphers. * Use Network Load Balancers with SSL certificate. Configure TLS Listeners on this NLB with default security policy consisting of protocols & ciphers. * Use Network Load Balancers with SSL certificate. Configure TLS Listeners on this NLB with default security policy consisting of protocols & TCP port 5810. * Use Network Load Balancers with SSL certificate. Configure TLS Listeners on this NLB with custom security policy consisting of protocols & TCP port 5810.

*Use Network Load Balancers with SSL certificate. Configure TLS Listeners on this NLB with default security policy consisting of protocols & ciphers.* A Network Load Balancer can be used to terminate TLS connections instead of the back-end instances, reducing load on those instances. With Network Load Balancers, millions of simultaneous sessions can be established with no impact on latency along with preserving the client IP address. To negotiate TLS connections with clients, the NLB uses a security policy which consists of protocols & ciphers. ----------------------------------- Option A is incorrect as Network Load Balancers do not support custom security policies. Option C is incorrect as the security policy should consist of protocols & ciphers, not a TCP port. Option D is incorrect as Network Load Balancers do not support custom security policies, and security policies should comprise protocols & ciphers.
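A hedged sketch of adding such a TLS listener on an existing NLB with boto3; the load balancer, target group, and ACM certificate ARNs are hypothetical placeholders, and the listener uses a default security policy.

```
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# TLS listener on an existing Network Load Balancer: the NLB terminates TLS
# with an ACM certificate and a default security policy, then forwards
# traffic on port 5810 to the target group. All ARNs are placeholders.
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/wallet-nlb/abc123",
    Protocol="TLS",
    Port=5810,
    Certificates=[{"CertificateArn": "arn:aws:acm:us-east-1:111122223333:certificate/example-cert-id"}],
    SslPolicy="ELBSecurityPolicy-2016-08",   # default security policy (protocols & ciphers)
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/wallet-tg/def456",
    }],
)
```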

*A company is planning on a Facebook-type application where users will upload videos and images. These are going to be stored in S3. There is a concern that there could be an overwhelming number of read and write requests on the S3 bucket. Which of the following could be an implementation step to help ensure optimal performance on the underlying S3 storage bucket?* * Use a sequential ID for the prefix * Use a hexadecimal hash for the prefix * Use a hexadecimal hash for the suffix * Use a sequential ID for the suffix

*Use a hexadecimal hash for the prefix* This recommendation for increasing performance if you have a high request rate in S3 is given in the AWS documentation. *Note:* First of all, the question doesn't mention the exact read and write request rate, and AWS mentions in the same document as follows. AWS says "Applications running on Amazon S3 today will enjoy this performance improvement with no changes, and customers building new applications on S3 do not have to make any application customizations to achieve this performance. Amazon S3's support for parallel requests means you can scale your S3 performance by the factor of your compute cluster, without making any customizations to your application. *Performance scales per prefix*, so you can use as many prefixes as you need in parallel to achieve the required throughput. There are no limits to the *number of prefixes*."

*It is expected that only certain specified customers can upload images to the S3 bucket for a certain period of time. As an Architect what is your suggestion?* * Create a secondary S3 bucket. Then, use an AWS Lambda to sync the contents to primary bucket. * Use Pre-Signed URLs instead to upload the images. * Use ECS Containers to upload the images. * Upload the images to SQS and then push them to S3 bucket.

*Use Pre-Signed URLs instead to upload the images.* This question is basically based on the scenario where we can use a pre-signed URL. You need to understand the pre-signed URL - which contains credentials for particular resources, such as S3 in this scenario - and the creator must have permission so that another application can use the credentials to upload the data (images) to S3 buckets. *AWS definition* A pre-signed URL gives you access to the object identified in the URL, provided that the creator of the pre-signed URL has permissions to access that object. That is, if you receive a pre-signed URL to upload an object, you can upload the object only if the creator of the pre-signed URL has the necessary permission to upload the object. All objects and buckets by default are private. The pre-signed URLs are useful if you want a user/customer to be able to upload a specific object to your bucket, but you don't require them to have AWS security credentials or permissions. When you create a pre-signed URL, you must provide your security credentials and then specify a bucket name, an object key, an HTTP method (PUT for uploading objects), and an expiration date and time. The pre-signed URLs are valid only for the specified duration. ----------------------------------- Option A is incorrect, since Amazon has provided us with an inbuilt function for this requirement; using this option is cost expensive and time-consuming. As a Solution Architect, you are supposed to pick the best and most cost-effective solution. Option C is incorrect, ECS is a highly scalable, fast container management service that makes it easy to run, stop, and manage Docker containers on a cluster. Option D is incorrect, SQS is a message queue service used by distributed applications to exchange messages through a polling model and not through a push mechanism.
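A hedged sketch of generating a time-limited pre-signed upload URL with boto3; the bucket and key names are hypothetical placeholders, and the upload at the end is only an illustration of how a customer without AWS credentials would use the URL.

```
import boto3
import requests  # only used for the illustrative upload below

s3 = boto3.client("s3")

# URL that lets the specified customer upload one object for 1 hour.
# Bucket and key names are illustrative placeholders.
url = s3.generate_presigned_url(
    ClientMethod="put_object",
    Params={"Bucket": "customer-image-uploads", "Key": "customer-42/photo.jpg"},
    ExpiresIn=3600,   # the URL becomes invalid after one hour
)

# The customer can now upload without having any AWS credentials of their own.
with open("photo.jpg", "rb") as f:
    requests.put(url, data=f)
```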

*I am running an Oracle database that is very I/O intensive. My database administrator needs a minimum of 3,600 IOPS. If my system is not able to meet that number, my application won't perform optimally. How can I make sure my application always performs optimally?* * Use Elastic File System since it automatically handles the performance * Use Provisioned IOPS SSD to meet the IOPS number * Use your database files in an SSD-based EBS volume and your other files in an HDD-based EBS volume * Use a general-purpose SSD under a terabyte that has a burst capability

*Use Provisioned IOPS SSD to meet the IOPS number* If your workload needs a certain number of IOPS, then the best way is to use Provisioned IOPS SSD volumes. That way, you can ensure the application or the workload always meets the performance metric you are looking for.
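A hedged sketch of provisioning such a volume with boto3; the size and Availability Zone are hypothetical placeholders, while the 3,600 IOPS value matches the question.

```
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Provisioned IOPS SSD (io1) volume guaranteeing the 3,600 IOPS the DBA needs.
# The size and Availability Zone are illustrative placeholders.
ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=400,                # GiB; large enough for the requested IOPS-to-size ratio
    VolumeType="io1",
    Iops=3600,
)
```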

*You are designing an architecture on AWS with disaster recovery in mind. Currently the architecture consists of an ELB and underlying EC2 Instances in the primary and secondary region. How can you establish a switchover in case of failure in the primary region?* * Use Route 53 Health Checks and then do a failover. * Use CloudWatch metrics to detect the failure and then do a failover. * Use scripts to scan CloudWatch logs to detect the failure and then do a failover.

*Use Route 53 Health Checks and then do a failover.* If you have multiple resources that perform the same function, you can configure DNS failover so that Route 53 will route your traffic from an unhealthy resource to a healthy resource. For example, if you have two web servers and one web server becomes unhealthy, Route 53 can route traffic to the other web server.
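A hedged sketch of creating the health check that a failover record set would reference, using boto3; the endpoint domain name and check settings are hypothetical placeholders.

```
import uuid
import boto3

route53 = boto3.client("route53")

# Health check against the primary region's endpoint (hypothetical domain name).
check = route53.create_health_check(
    CallerReference=str(uuid.uuid4()),   # idempotency token
    HealthCheckConfig={
        "Type": "HTTP",
        "FullyQualifiedDomainName": "primary.example.com",
        "Port": 80,
        "ResourcePath": "/health",
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)
print("HealthCheckId:", check["HealthCheck"]["Id"])
# A failover record set marked PRIMARY would reference this HealthCheckId,
# with a SECONDARY record pointing at the other region's ELB.
```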

*You have a set of on-premises virtual machines used to serve a web-based application. You need to ensure that a virtual machine, if unhealthy, is taken out of rotation. Which of the following options can be used for health checking and DNS failover features for a web application running behind ELB, to increase redundancy and availability?* * Use Route 53 health checks to monitor the endpoints. * Move the solution to AWS and use a Classic Load Balancer. * Move the solution to AWS and use an Application Load Balancer. * Move the solution to AWS and use a Network Load Balancer.

*Use Route 53 health checks to monitor the endpoints.* Route 53 health checks can be used for any endpoint that can be accessed via the Internet. Hence this would be ideal for monitoring endpoints. You can configure a health check that monitors an endpoint that you specify either by IP address or by domain name. At regular intervals that you specify, Route 53 submits automated requests over the internet to your application, server, or other resources to verify that it's reachable, available, and functional. *Note:* Once enabled, Route 53 automatically configures and manages health checks for individual ELB nodes. Route 53 also takes advantage of the EC2 instance health checking that ELB performs. By combining the results of health checks of your EC2 instances and your ELBs, Route 53 DNS Failover is able to evaluate the health of the load balancer and the health of the application running on the EC2 instances behind it. In other words, if any part of the stack goes down, Route 53 detects the failure and routes traffic away from the failed endpoint. AWS documentation states that you can create a Route 53 resource record that points to an address outside AWS, and you can fail over to any endpoint that you choose, regardless of location. For example, you may have a legacy application running in a datacenter outside AWS and a backup instance of that application running within AWS. You can set up health checks of your legacy application running outside AWS, and if the application fails the health checks, you can fail over automatically to the backup instance in AWS. *Note:* Route 53 has health checks in locations around the world. When you create a health check that monitors an endpoint, health checkers start to send requests to the endpoint that you specify to determine whether the endpoint is healthy. You can choose which locations you want Route 53 to use, and you can specify the interval between checks: every 10 seconds or every 30 seconds. Note that Route 53 health checkers in different data centers don't coordinate with one another, so you'll sometimes see several requests per second regardless of the interval you chose, followed by a few seconds with no health checks at all. Each health checker evaluates the health of the endpoint based on two values: - Response time - Whether the endpoint responds to a number of consecutive health checks that you specify (the failure threshold) Route 53 aggregates the data from the health checkers and determines whether the endpoint is healthy: - If more than 18% of health checkers report that an endpoint is healthy, Route 53 considers it healthy. - If 18% of health checkers or fewer report that an endpoint is healthy, Route 53 considers it unhealthy. The response time that an individual health checker uses to determine whether an endpoint is healthy depends on the type of health check: HTTP and HTTPS health checks, TCP health checks, or HTTP and HTTPS health checks with string matching. Regarding your specific query where we have more than 2 servers for the website, the AWS docs state that when you have more than one resource performing the same function - for example, more than one HTTP server or mail server - you can configure Amazon Route 53 to check the health of your resources and respond to DNS queries using only the healthy resources. For example, suppose your website, example.com, is hosted on six servers, two each in three data centers around the world. 
You can configure Route 53 to check the health of those servers and to respond to DNS queries for example.com using only the servers that are currently healthy.

*A company website is set to launch in the upcoming weeks. There is a probability that the traffic will be quite high during the initial weeks. In the event of a load failure, how can you set up DNS failover to a static website?* * Duplicate the exact application architecture in another region and configure DNS Weighted routing. * Enable failover to an application hosted in an on-premises data center. * Use Route 53 with the failover option to failover to a static S3 website bucket or CloudFront distribution. * Add more servers in case the application fails.

*Use Route 53 with the failover option to failover to a static S3 website bucket or CloudFront distribution.* Amazon Route 53 health checks monitor the health and performance of your web applications, web servers, and other resources. If you have multiple resources that perform the same function, you can configure DNS failover so that Amazon Route 53 will route your traffic from an unhealthy resource to a healthy resource. For example, if you have two web servers and one web server becomes unhealthy, Amazon Route 53 can route traffic to the other web server. So you can route traffic to a website hosted on S3 or to a CloudFront distribution.

*There is a website hosted in AWS that might get a lot of traffic over the next couple of weeks. If the application experiences a natural disaster at this time, which of the following can be used to reduce potential disruption to users?* * Use an ELB to divert traffic to an infrastructure hosted in another region. * Use an ELB to divert traffic to an Infrastructure hosted in another AZ. * Use CloudFormation to create backup resources in another AZ. * Use Route53 to route to a static web site.

*Use Route53 to route to a static web site.* In a disaster recovery scenario, the best choice out of all the given options is to divert the traffic to a static website. The wording "to reduce potential disruption to users" points to a disaster recovery situation. There is more than one way to manage this situation; however, we need to choose the best option from the list given here. Out of these, the most suitable one is Option D. Most organizations try to implement High Availability (HA) instead of DR to guard against any downtime of services. In the case of HA, we ensure there exists a fallback mechanism for our services. The service that runs in HA is handled by hosts running in different availability zones but in the same geographical region. This approach, however, does not guarantee that our business will be up and running in case the entire region goes down. DR takes things to a completely new level, wherein you need to be able to recover from a different region that's separated by over 250 miles. Our DR implementation is an Active/Passive model, meaning that we always have minimum critical services running in different regions, but a major part of the infrastructure is launched and restored when required. ----------------------------------- Option A is wrong because an ELB can only balance traffic in one region and not across multiple regions. Options B and C are incorrect because using backups across AZs is not enough for disaster recovery purposes. *Note:* Usually, when we discuss a disaster recovery scenario, we assume that the entire region is affected due to some disaster, so we need the services to be provided from yet another region. In that case, setting up a solution in another AZ will not work, as it is in the same region. Option A is incorrect even though it mentions another region, because ELBs cannot span across regions. So out of the options provided, Option D is the suitable solution.

*You have a requirement to host a static website for a domain called mycompany.com in AWS. It is required to ensure that the traffic is distributed properly. How can this be achieved?* (Choose 2) * Host the static site on an EC2 Instance. * Use Route53 with static web site in S3. * Enter the Alias records from Route53 in the domain registrar. * Place the EC2 instance behind the ELB.

*Use Route53 with static web site in S3* *Enter the Alias records from Route53 in the domain registrar* You can host a static website in S3. You need to ensure that the nameserver records for the Route53 hosted zone are entered in your domain registrar.

*You have developed a new web application on AWS for a real estate firm. It has a web interface where real estate employees upload photos of newly constructed houses to S3 buckets. Prospective buyers log in to the website and access these photos. The Marketing Team has initiated an intensive marketing event to promote new housing schemes, which will lead to buyers frequently accessing these images. As this is a new application, you have no projection of traffic. You have created Auto Scaling across multiple instance types for these web servers, but you also need to optimise cost for storage. You don't want to compromise on latency, and all images should be downloaded instantaneously without any outage. Which of the following is the recommended storage solution to meet this requirement?* * Use One Zone-IA storage class to store all images. * Use Standard-IA to store all images. * Use S3 Intelligent-Tiering storage class. * Use Standard storage class, use Storage class analytics to identify & move objects using lifecycle policies.

*Use S3 Intelligent-Tiering storage class.* When the access pattern to a web application using S3 storage buckets is unpredictable, you can use the S3 Intelligent-Tiering storage class. The S3 Intelligent-Tiering storage class includes two access tiers: frequent access and infrequent access. Based upon access patterns it moves data between these tiers, which helps in cost saving. The S3 Intelligent-Tiering storage class has the same performance as the Standard storage class. ----------------------------------- Option A is incorrect as, although it will save cost, it will not provide any protection in case of AZ failure. Also, this class is suitable for infrequently accessed data & not for frequently accessed data. Option B is incorrect as the Standard-IA storage class is for infrequently accessed data & there are retrieval charges associated. In the above requirement you do not have any projections of data access, which may result in higher costs. Option D is incorrect as it has operational overhead to set up Storage class analytics & move objects between various classes. Also, since the access pattern is undetermined, this will turn into a costlier option.

*A database hosted in AWS is currently encountering an extended number of write operations and is not able to handle the load. What can be done to the architecture to ensure that the write operations are not lost under any circumstances?* * Add more IOPS to the existing EBS Volume used by the database. * Consider using DynamoDB instead of AWS RDS. * Use SQS FIFO to queue the database writes. * Use SNS to send notifications on missed database writes and then add them manually.

*Use SQS FIFO to queue the database writes.* SQS Queues can be used to store the pending database writes, and these writes can then be added to the database. It is the perfect queuing system for such an architecture. Note that adding more IOPS may help the situation but will not totally eliminate the chances of losing database writes. *Note:* The scenario in the question is that the database is unable to handle the write operations, and the requirement is to perform the writes on the database without losing any data. For this requirement, we can use an SQS queue to store the pending write requests, which will ensure the delivery of these messages. Increasing the IOPS can handle the traffic a bit more efficiently, but it has a limit of 40,000 IOPS, whereas SQS queues can handle 120,000 in-flight messages.
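As a minimal sketch of queuing a pending write to a FIFO queue with boto3; the queue URL and payload below are hypothetical placeholders.

```python
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/db-writes.fifo"  # placeholder

# Queue a pending database write. MessageGroupId preserves ordering within the
# group, and MessageDeduplicationId prevents accidental duplicates.
sqs.send_message(
    QueueUrl=QUEUE_URL,
    MessageBody=json.dumps({"table": "orders", "action": "INSERT", "order_id": 42}),
    MessageGroupId="orders",
    MessageDeduplicationId="order-42-insert",
)
```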

*You have deployed an e-commerce application on EC2 servers. Sometimes the traffic becomes unpredictable when a large number of users log in to your web site at the same time. To handle this amount of traffic, you are using Auto Scaling and elastic load balancing along with EC2 servers. Despite that, sometimes you see that Auto Scaling is not able to quickly spin up additional servers. As a result, there is performance degradation, and your application is not able to capture all the orders properly. What can you do so that you don't lose any orders?* * Increase the size of the EC2 instance. * Use SQS to decouple the ordering process. Keep the new orders in the queue and process them only when the new EC2 instance is available. * Increase the EC2 instance limit. * Double the number of EC2 instances over what you are using today.

*Use SQS to decouple the ordering process. Keep the new orders in the queue and process them only when the new EC2 instance is available.* In this scenario, the main problem is that you are losing the orders since the system is not able to process them in time. By using SQS and decoupling the order process, you won't lose any orders. ----------------------------------- Increasing the size of the EC2 instance or doubling the EC2 instances to begin with won't help because the number of users is unpredictable. The EC2 instance limit is not the problem here; the issue is that Auto Scaling is not able to quickly spin up additional servers.

*You plan on hosting the application on EC2 Instances which will be used to process logs. The application is not very critical and can resume operation even after an interruption. Which of the following steps can help provide a cost-effective solution?* * Use Reserved Instances for the underlying EC2 Instances. * Use Provisioned IOPS for the underlying EBS Volumes. * Use Spot Instances for the underlying EC2 Instances. * Use S3 as the underlying data layer.

*Use Spot Instances for the underlying EC2 Instances* Spot Instances are a cost-effective choice if you can be flexible about when your applications run and if your applications can be interrupted. For example, Spot Instances are well-suited for data analysis, batch jobs, background processing, and optional tasks.

*An application needs to access data in another AWS account in another VPC in the same region. Which of the following can be used to ensure that the data can be accessed as required?* * Establish a NAT instance between both accounts. * Use a VPN between both accounts. * Use a NAT Gateway between both accounts. * Use VPC Peering between both accounts.

*Use VPC Peering between both accounts.* A VPC Peering connection is a networking connection between two VPCs that enables you to route traffic between them privately. Instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC Peering connection between your own VPCs, with a VPC in another AWS account, or with a VPC in a different AWS Region. ----------------------------------- Options A and C are incorrect because these are used when private resources are required to access the Internet. Option B is incorrect because it's used to create a connection between on-premises and AWS resources.
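A minimal boto3 sketch of setting up cross-account peering, assuming placeholder VPC, account and route table IDs; the accepting side's call is shown only as a comment.

```python
import boto3

ec2 = boto3.client("ec2")

# Request a peering connection from this account's VPC to a VPC in another account.
peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-11111111",          # requester VPC (this account)
    PeerVpcId="vpc-22222222",      # accepter VPC (other account)
    PeerOwnerId="123456789012",    # other AWS account ID
)
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# The other account then accepts the request, e.g.:
# other_ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Both sides add routes to each other's CIDR via the peering connection.
ec2.create_route(
    RouteTableId="rtb-33333333",
    DestinationCidrBlock="10.1.0.0/16",   # CIDR of the peer VPC
    VpcPeeringConnectionId=pcx_id,
)
```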

*A company has a set of resources hosted in an AWS VPC. Having acquired another company with its own set of resources hosted in AWS, it is required to ensure that resources in the VPC of the parent company can access the resources in the VPC of the child company. How can this be accomplished?* * Establish a NAT Instance to establish communication across VPCs. * Establish a NAT Gateway to establish communication across VPCs. * Use a VPN Connection to peer both VPCs. * Use VPC Peering to peer both VPCs.

*Use VPC Peering to peer both VPCs.* A VPC Peering Connection is a networking connection between two VPCs that enables you to route traffic between them privately. Instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC Peering Connection between your own VPCs, with a VPC in another AWS account, or with a VPC in a different AWS region. ----------------------------------- NAT Instance, NAT Gateway and VPN do not allow for VPC-VPC connectivity.

*You are running your web site behind a fleet of EC2 servers. You have designed your architecture to leverage multiple AZs, and thus your fleet of EC2 servers is hosted across different AZs. You are using EFS to provide file system access to the EC2 servers and to store all the data. You have integrated an application load balancer with the EC2 instances, and whenever someone logs in to your web site, the application load balancer redirects the traffic to one of the EC2 servers. You deliver lots of photos and videos via your web site, and each time a user requests a photo or video, it is served via the EC2 instance. You are thinking of providing a faster turnaround time to end users and want to improve the user experience. How can you improve the existing architecture?* * Move all the photos and videos to Amazon S3. * Use a CloudFront distribution to cache all the photos and videos. * Move all the photos and videos to Amazon Glacier. * Add more EC2 instances to the fleet.

*Use a CloudFront distribution to cache all the photos and videos.* By leveraging Amazon CloudFront, you can cache all the photos and videos close to the end user, which will provide a better user experience. ------------------------- Since you are looking for faster performance and better user experience, moving the files to Amazon S3 won't help. Amazon Glacier is the archiving solution and can't be used for storing files that must be accessed in real time. If you add more EC2 servers, it can handle more concurrent users on the web site, but it won't necessarily provide the performance boost you are looking for.

*You have instances hosted in a private subnet in a VPC. There is a need for the instances to download updates from the Internet. As an architect, what change would you suggest to the IT Operations team which would also be the most efficient and secure?* * Create a new public subnet and move the instance to that subnet. * Create a new EC2 Instance to download the updates separately and then push them to the required instance. * Use a NAT Gateway to allow the instances in the private subnet to download the updates. * Create a VPC link to the Internet to allow the instances in the private subnet to download the updates.

*Use a NAT Gateway to allow the instances in the private subnet to download the updates.* The NAT Gateway is an ideal option to ensure that instances in the private subnet have the ability to download updates from the Internet. ----------------------------------- Option A is not suitable because there may be a security reason for keeping these instances in the private subnet. (for example: db instances) Option B is also incorrect. The instances in the private subnet may be running various applications and db instances. Hence, it is not advisable or practical for an EC2 Instance to download the updates separately and then push them to the required instance. Option D is incorrect because a VPC link is not used to connect to the Internet.
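A minimal boto3 sketch of creating a NAT Gateway and pointing the private subnet's route table at it; the subnet and route table IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP and create the NAT Gateway in a public subnet.
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0public111",        # placeholder public subnet
    AllocationId=eip["AllocationId"],
)
nat_id = nat["NatGateway"]["NatGatewayId"]

# Route the private subnet's outbound traffic through the NAT Gateway.
ec2.create_route(
    RouteTableId="rtb-0private222",      # placeholder private route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_id,
)
```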

*Your infrastructure in AWS currently consists of a private and a public subnet. The private subnet consists of database servers, and the public subnet has a NAT Instance which helps the instances in the private subnet to communicate with the Internet. The NAT Instance is now becoming a bottleneck. Which of the following changes to the current architecture can help prevent this issue from occurring in the future?* * Use a NAT Gateway instead of the NAT Instance. * Use another Internet Gateway for better bandwidth. * Use a VPC connection for better bandwidth. * Consider changing the instance type for the underlying NAT Instance.

*Use a NAT Gateway instead of the NAT Instance.* The NAT Gateway is a managed resource which can be used in place of a NAT Instance. While you can consider changing the instance type for the underlying NAT Instance, this does not guarantee that the issue will not re-occur in the future.

*You are deploying a three-tier architecture in AWS. The web servers are going to reside in a private subnet, and the database and application servers are going to reside in a public subnet. You have chosen two AZs for high availability; thus, you are going to have two web servers, one in each AZ; two application servers, one in each AZ; and an RDS database in master-standby mode where the standby database is running in a different AZ. In addition, you are using a NAT instance so that the application server and the database server can connect to the Internet if needed. You have two load balancers: one external load balancer connected to the web servers and one internal load balancer connected to the application servers. What can you do to eliminate the single point of failure in this architecture?* * Use two internal load balancers * Use two external load balancers * Use a NAT gateway * Use three AZs in this architecture

*Use a NAT gateway* In this scenario, only the NAT instance is a single point of failure; if it goes down, then the application server and the database server won't be able to connect to the Internet. For high availability, you could also launch NAT instances in each AZ. Using a NAT gateway is preferred over using multiple NAT instances since it is a managed service and it scales on its own. When using a NAT gateway, there is no single point of failure. ------------------------- The internal and external load balancers are not single points of failure, and since you are already using two different AZs in your current architecture, the AZ layout is not a single point of failure either.

*You are running a Redshift cluster in the Virginia region. You need to create another Redshift cluster in the California region for DR purposes. How do you quickly create the Redshift DR cluster?* * Export the data to AWS S3, enable cross-region replication on S3, and import the data to the Redshift cluster in California. * Use AWS DMS to replicate the Redshift cluster from one region to the other. * Use a cross-region snapshot to create the DR cluster. * Extend the existing Redshift cluster to the California region.

*Use a cross-region snapshot to create the DR cluster.* Redshift provides the ability to create a cross-region snapshot. You can leverage it for creating the DR cluster in a different region. ------------------------- Exporting the data to S3 and then reimporting it to a Redshift cluster is going to add a lot of manual overhead. If the functionality is available natively via Redshift, then why do you even look at DMS? Redshift clusters are specific to an AZ; you can't extend a Redshift cluster beyond one AZ.
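A minimal boto3 sketch of the cross-region snapshot approach, assuming placeholder cluster and snapshot identifiers: enable snapshot copy to the DR region on the source cluster, then restore from a copied snapshot in that region.

```python
import boto3

# Enable automated cross-region snapshot copy on the source cluster (Virginia).
redshift_src = boto3.client("redshift", region_name="us-east-1")
redshift_src.enable_snapshot_copy(
    ClusterIdentifier="analytics-cluster",    # placeholder cluster name
    DestinationRegion="us-west-1",
    RetentionPeriod=7,                        # days to keep copied snapshots
)

# Later, in the DR region, restore a cluster from one of the copied snapshots.
redshift_dr = boto3.client("redshift", region_name="us-west-1")
redshift_dr.restore_from_cluster_snapshot(
    ClusterIdentifier="analytics-cluster-dr",
    SnapshotIdentifier="copied-snapshot-id",  # placeholder snapshot id
)
```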

*You have a compliance requirement that you should own the entire physical hardware and no other customer should run any other instance on the physical hardware. What option should you choose?* * Put the hardware inside the VPC so that no other customer can use it * Use a dedicated instance * Reserve the EC2 for one year * Reserve the EC2 for three years

*Use a dedicated instance* You can create the instance inside a VPC, but that does not mean other customers can't create any other instance on the physical hardware. Creating a dedicated instance is going to provide exactly what you are looking for. Reserving the EC2 instance for one or three years won't help unless you reserve it as a dedicated instance.

*A million images are required to be uploaded to S3. What option ensures optimal performance in this case?* * Use a sequential ID for the prefix. * Use a hexadecimal hash for the prefix. * Use a hexadecimal hash for the suffix. * Use a sequential ID for the suffix.

*Use a hexadecimal hash for the prefix.* Amazon S3 maintains an index of object key names in each AWS region. Object keys are stored in UTF-8 binary ordering across multiple partitions in the index. The key name determines which partition the key is stored in. Using a sequential prefix, such as a timestamp or an alphabetical sequence, increases the likelihood that Amazon S3 will target a specific partition for a large number of your keys, which can overwhelm the I/O capacity of the partition. If your workload is a mix of request types, introduce some randomness to key names by adding a hash string as a prefix to the key name. By introducing randomness to your key names, the I/O load is distributed across multiple index partitions. For example, you can compute an MD5 hash of the character sequence that you plan to assign as the key, and add three or four characters from the hash as a prefix to the key name.
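A small Python sketch of the hashing idea described above; the key name and prefix length are arbitrary example values.

```python
import hashlib

def hashed_key(original_key: str, prefix_len: int = 4) -> str:
    """Prepend a few characters of an MD5 hash to spread keys across index partitions."""
    digest = hashlib.md5(original_key.encode("utf-8")).hexdigest()
    return f"{digest[:prefix_len]}-{original_key}"

# Prints something like '<4 hex chars>-2017-01-01/photo-0001.jpg'
print(hashed_key("2017-01-01/photo-0001.jpg"))
```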

*You are creating an internal social network web site for all your employees via which they can collaborate, share videos and photos, and store documents. You are planning to store everything in S3. Your company has more than 100,000 employees today, so you are concerned that there might be a performance problem with S3 due to a large number of read and write requests in the S3 bucket based on the number of employees. What can you do to ensure optimal performance in S3 bucket?* * Use a hexadecimal hash string for the suffix. * Use a sequential ID for the prefix. * Use a sequential ID for the suffix. * Use a hexadecimal hash string for the prefix.

*Use a hexadecimal hash string for the prefix* Using a hexadecimal hash string as a prefix is going to introduce randomness to the key name, which is going to provide maximum performance benefits. ------------------------- Using a sequential prefix, such as a timestamp or an alphabetical sequence, increases the likelihood that Amazon S3 will target a specific partition for a large number of your keys, overwhelming the I/O capacity of the partition.

*You are using an m1.small EC2 Instance with one 300GB EBS General Purpose SSD volume to host a relational database. You determine that write throughput to the database needs to be increased. Which of the following approaches can help you achieve this?* (Choose 2) * Use a larger EC2 Instance * Enable the Multi-AZ feature for the database * Consider using Provisioned IOPS Volumes. * Put the database behind an Elastic Load Balancer

*Use a larger EC2 Instance* *Consider using Provisioned IOPS Volumes.* Provisioned IOPS (io1) volumes are the highest-performance SSD volumes, designed for latency-sensitive transactional workloads. ----------------------------------- Option B is incorrect since the Multi-AZ feature is only for high availability. Option D is incorrect since this would not alleviate the high number of writes to the database.

*A company wants to host a web application and a database layer in AWS. This will be done with the use of subnets in a VPC. Which of the following is a proper architectural design for supporting the required tiers of the application?* * Use a public subnet for the web tier and a public subnet for the database layer. * Use a public subnet for the web tier and a private subnet for the database layer. * Use a private subnet for the web tier and a private subnet for the database layer. * Use a private subnet for the web tier and a public subnet for the database layer.

*Use a public subnet for the web tier and a private subnet for the database layer.* The ideal setup is to ensure that the web server is hosted in the public subnet so that it can be accessed by users on the internet. The database server can be hosted in the private subnet.

*You have developed an application, and it is running on EC2 servers. The application needs to run 24/7 throughout the year. The application is critical for business; therefore, the performance can't slow down. At the same time, you are looking for the best possible way to optimize your costs. What should be your approach?* * Use EC2 via the on-demand functionality, and shut down the EC2 instance at night when no one is using it. * Use a reserved instance for one year. * Use EC2 via the on-demand functionality, and shut down the EC2 instance on the weekends. * Use a spot instance to get maximum pricing advantage.

*Use a reserved instance for one year.* Since you know that the application is going to run 24/7 throughout the year, you should choose a reserved instance, which will provide you with the lowest cost. ------------------------- Since the business needs to run 24/7, you can't shut down at night or on the weekend. You can't use a spot instance because if someone overbids you, the instance will be taken away.

*You want to use S3 for the distribution of your software, but you want only authorized users to download the software. What is the best way to achieve this?* * Encrypt the S3 bucket. * Use a signed URL. * Restrict the access via CloudFront. * Use the IAM role to restrict the access.

*Use a signed URL* When you move your static content to an S3 bucket, you can protect it from unauthorized access via CloudFront signed URLs. A signed URL includes additional information, for example, an expiration date and time, that gives you more control over access to your content. This is how the signed URL works: the web server obtains temporary credentials to the S3 content, creates a signed URL based on those credentials that allows access, and provides this link (the signed URL) in the content returned to the client; the link is valid for a limited period of time. ------------------------- Encryption is different from access. You can't restrict user-level access via CloudFront alone. You can't do this via an IAM role.
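The explanation above describes CloudFront signed URLs; as a simpler illustration of the same time-limited-URL idea, here is a boto3 sketch that generates an S3 pre-signed URL for a single object. The bucket and key names are hypothetical.

```python
import boto3

s3 = boto3.client("s3")

# Generate a time-limited download link for one object in the bucket.
url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={"Bucket": "my-software-releases", "Key": "installer-v1.2.3.zip"},  # placeholders
    ExpiresIn=3600,   # link is valid for one hour
)
print(url)
```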

*You are running your mission-critical production database using a multi-AZ architecture in Amazon RDS with PIOPS. You are going to add some new functionality, and the developers need a copy of the database to develop and test against. What can you do to minimize the cost of the development copy?* (Choose two.) * Use a single AZ. * Use multiple AZs. * Don't use PIOPS. * Create a read replica and give the developers the read replica.

*Use a single AZ.* *Don't use PIOPS.* Since you need the database for development purposes, you don't need built-in high availability and provisioned IOPS. Therefore, you can skip multi-AZs and PIOPS. ----------------------------------- Using multiple AZs is going to increase your costs. A read replica won't help since the read replica remains in read-only mode. The developer needs the database in read-write mode.

*You are running some tests for your homegrown application. Once the testing is done, you will have a better idea of the type of server needed to host the application. Each test case runs typically for 45 minutes and can be restarted if the server goes down. What is the cost-optimized way of running these tests?* * Use a spot instance for running the test. * Use an on-demand instance with a magnetic drive for running the test. * Use an on-demand instance EC2 instance with PIOPS for running the test. * Use an on-demand instance with an EBS volume for running the test.

*Use a spot instance for running the test.* You are looking to optimize the cost, and a spot instance is going to provide the maximum cost benefit. The question also says that each test case runs for 45 minutes and can be restarted if it fails, which again makes this a great candidate for a spot instance. ------------------------- If you use an on-demand instance with a magnetic drive, EBS, or PIOPS for running the test, it is going to cost more.

*One of the leading entertainment channels is hosting an audition for a popular reality show in India. They have published an advertisement asking participants to upload images meeting certain criteria to their website, which is hosted on AWS infrastructure. The business requirement of the channel states that participants with a green card entry should be given priority and their results will be released first. However, results for the rest of the users will be released at the channel's convenience. Being a popular reality show, the number of requests coming to the website will increase before the deadline; therefore the solution needs to be scalable and cost effective. Also, the failure of any layer should not affect the other layers in this multitier environment in the AWS infrastructure. The technical management has given you a few guidelines about the architecture: they want a web component that lets participants upload the images to an S3 bucket, while a second component processes these images and stores them back to an S3 bucket, making entries in the database storage. As a solutions architect for the entertainment channel, how would you design the solution, considering the priority for the participants is maintained and data is processed and stored as per the requirement?* * Use a web component to get the images and store them in an S3 bucket. Use two SQS queues, a green card entry queue and a non-green card entry queue; EC2 instances in an Auto Scaling group poll these queues, process the images based on the priority requirement, store them to another S3 bucket, and make an entry in an Amazon RDS database. * Use a web component to get the images and store them in an S3 bucket. Use two SQS queues, a green card entry queue and a non-green card entry queue; EC2 instances in an Auto Scaling group poll these queues, process the images based on the priority requirement, store them to another S3 bucket, and make an entry in an Amazon Redshift database. * Use a web component to get the images and store them in an S3 bucket. Use two standard SQS queues, both non-green card entry queues; a fleet of EC2 instances in an Auto Scaling group polls these queues based on priority flags, processes the images, stores them to another S3 bucket, and makes an entry in a DynamoDB database. * Use a web component to get the images and store them in an S3 bucket. Use two SQS queues, one priority and one standard; a fleet of EC2 instances in an Auto Scaling group polls these queues, processes the images based on the priority requirement, stores them to another S3 bucket, and makes an entry in an Amazon DynamoDB database.

*Use a web component to get the images and store them in an S3 bucket. Use two SQS queues, one priority and one standard; a fleet of EC2 instances in an Auto Scaling group polls these queues, processes the images based on the priority requirement, stores them to another S3 bucket, and makes an entry in an Amazon DynamoDB database.* Option D is correct, as it meets all the requirements and makes the solution decoupled: even if the web component fails, the processing component will still be up, reading the images from the SQS queues, using the priority queue for green card participants and the standard queue for general participants, and the fleet of EC2 instances with Auto Scaling will be able to take up the load during peak time. The processed data is stored in another S3 bucket, with an entry made in a DynamoDB table. ----------------------------------- Option A is incorrect: though the solution provides decoupling, the final metadata updates should not go into Amazon RDS, which is intended for transactional data. Option B is incorrect: though the solution provides decoupling and the web component and processing part look good, the final metadata entries should not be made to Amazon Redshift, which is an analytics (OLAP) database. Option C is incorrect: the solution works well with the web component, but the SQS queues used are both standard queues; although they can process the green card and normal participants separately, they do not ensure priority, as both queues will be read simultaneously, hence this will not serve the requirement. However, storing the metadata in a DynamoDB table would work fine.

*You're running a mission-critical application, and you are hosting the database for that application in RDS. Your IT team needs to access all the critical OS metrics every five seconds. What approach would you choose?* * Write a script to capture all the key metrics and schedule the script to run every five seconds using a cron job * Schedule a job every five seconds to capture the OS metrics * Use standard monitoring * Use advanced monitoring

*Use advanced monitoring* In RDS, you don't have access to the OS, so you can't run a cron job, and you can't capture OS metrics by running a database job. Standard monitoring provides metrics at one-minute intervals only, so enhanced (advanced) monitoring is required to get OS metrics every five seconds.

*Your application needs a shared file system that can be accessed from multiple EC2 instances across different AZs. How would you provision it?* * Mount the EBS volume across multiple EC2 instances * Use an EFS instance and mount the EFS across multiple EC2 instances across multiple AZs * Access S3 from multiple EC2 instances * Use EBS with Provisioned IOPS

*Use an EFS instance and mount the EFS across multiple EC2 instances across multiple AZs* Use an EFS. The same EBS volume can't be mounted across multiple EC2 instances.

*Your developers are in the process of creating a new application for your business unit. The developers work only on weekdays. To save costs, you shut down the web server (EC2 server) on the weekend and again start them on Monday. Every Monday the developers face issues while connecting to the web server. The client via which they connect to the web server stores the IP address. Since the IP address changes every week, they need to reconfigure it. What can you do to fix the problem for developers? Since your main intention is saving money, you can't run the EC2 servers over the weekend.* * Use an EIP address with the web server. * Use an IPv6 address with the web server. * Use an ENI with the web server. * Create the web server in the private subnet.

*Use an EIP address with the web server.* When you assign an EIP, you get a static IP address. The developers can use this EIP to configure their client. ------------------------- An IPv6 address is also dynamic, and you don't have control over it. An ENI is a network interface; attaching one does not by itself give the instance a static public IP address. If you create the web server in a private subnet, every time you shut it down, it is going to get a new IP address.

*You are running a couple of social media sites in AWS, and they are using databases hosted in multiple AZs via RDS MySQL. With the expansion, your users have started seeing degraded performance mainly with the database reads. What can you do to make sure you get the required performance?* (Choose two.) * Use an ElastiCache in-memory cache in each AZ hosting the database. * Create a read replica of RDS MySQL to offload read-only traffic. * Migrate the database to the largest size box available in RDS. * Migrate RDS MySQL to an EC2 server.

*Use an ElastiCache in-memory cache in each AZ hosting the database* *Create a read replica of RDS MySQL to offload read-only traffic* Creating an ElastiCache cache and a read replica is going to give you an additional performance boost for the workload. ------------------------- Since the bottleneck is the read-only traffic, adding a read replica or an in-memory cache will solve the problem. If you migrate the database to the largest possible box available in RDS and the problem occurs again, what do you do? In that case, you would still need to add a read replica or an in-memory cache. If you are running a database in RDS, you should not move it to EC2, since you get a lot of operational benefits simply by hosting your database in RDS.

*You are running your application on a bunch of on-demand servers. On weekends you have to kick off a large batch job, and you are planning to add capacity. The batch job you are going to run over the weekend can be restarted if it fails. What is the best way to secure additional compute resources?* * Use the spot instance to add compute for the weekend * Use the on-demand instance to add compute for the weekend * Use the on-demand instance plus PIOPS storage for the weekend resource * Use the on-demand instance plus a general-purpose EBS volume for the weekend resource

*Use the spot instance to add compute for the weekend* Since you know the workload can be restarted from where it fails, the spot instance is going to provide you with the additional compute and pricing benefit as well. You can go with on-demand as well; the only thing is you have to pay a little bit more for on-demand than for the spot instance. You can choose a PIOPS or GP2 with the on-demand instance. If you choose PIOPS, you have to pay much more compared to all the other options.

*A company owns an API which currently gets 1000 requests per second. The company wants to host this in a cost effective manner using AWS. Which of the following solution is best suited for this?* * Use API Gateway with the backend services as it is. * Use the API Gateway along with AWS Lambda. * Use CloudFront Along with the API backend service as it is. * Use ElastiCache along with the API backend service as it is.

*Use the API Gateway along with AWS Lambda.* Since the company has full ownership of the API, the best solution would be to convert the code for the API and use it in a Lambda function. This can help save on costs, since in the case of Lambda you only pay for the time the function runs, and not for the infrastructure. Then, you can use API Gateway with the AWS Lambda function to scale accordingly. *Note:* With Lambda you do not have to provision your own instances. Lambda performs all the operational and administrative activities on your behalf, including capacity provisioning, monitoring fleet health, applying security patches to the underlying compute resources, deploying your code, running a web service front end, and monitoring and logging your code. AWS Lambda provides easy scaling and high availability to your code without additional effort on your part.
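A minimal sketch of a Lambda handler that could sit behind an API Gateway Lambda proxy integration; the function name and response content are illustrative assumptions, but the response shape (statusCode, headers, body) is what the proxy integration expects.

```python
import json

def handler(event, context):
    """Minimal handler for an API Gateway Lambda proxy integration."""
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```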

*Your company wants to enable encryption of services such as S3 and EBS volumes so that the data it maintains is encrypted at rest. They want to have complete control over the keys and the entire lifecycle around the keys. How can you accomplish this?* * Use the AWS CloudHSM * Use the KMS service * Enable S3 server-side encryption * Enable EBS Encryption with the default KMS keys

*Use the AWS CloudHSM* AWS CloudHSM is a cloud-based hardware security module (HSM) that enables you to easily generate and use your own encryption keys on the AWS Cloud. With CloudHSM, you can manage your own encryption keys using FIPS 140-2 Level 3 validated HSMs. ----------------------------------- Options B, C and D are incorrect since in those cases the keys are maintained by AWS.

*A company needs to extend their storage infrastructure to the AWS Cloud. The storage needs to be available as iSCSI devices for on-premises application servers. Which of the following would be able to fulfill this requirement?* * Create a Glacier vault. Use a Glacier Connector and mount it as an iSCSI device. * Create an S3 bucket. Use S3 Connector and mount it as an iSCSI device. * Use the EFS file service and mount the different file systems to the on-premises servers. * Use the AWS Storage Gateway-cached volumes service.

*Use the AWS Storage Gateway-cached volume service.* By using cached volumes, you can use Amazon S3 as your primary data storage, while retaining frequently accessed data locally in your storage gateway. Cached volumes minimize the need to scale your on-premises storage infrastructure, while still providing your applications with low-latency access to their frequently accessed data. You can create storage volumes up to 32 TiB in size and attach to them as iSCSI devices from your on-premises application servers. Your gateway stores data that you write to these volumes in Amazon S3 and retains recently read data in your on-premises storage gateway's cache and upload buffer storage.

*A company is planning to use the AWS ECS service to work with containers. There is a need for the least amount of administrative overhead while launching containers. How can this be achieved?* * Use the Fargate launch type in AWS ECS. * Use the EC2 launch type in AWS ECS. * Use the Auto Scaling launch type in AWS ECS. * Use the ELB launch type in AWS ECS.

*Use the Fargate launch type in AWS ECS.* The Fargate launch type allows you to run your containerized applications without the need to provision and manage the backend infrastructure. Just register your task definition and Fargate launches the container for you.
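A minimal boto3 sketch of running a registered task definition on Fargate; the cluster name, task definition, subnet and security group IDs are placeholders.

```python
import boto3

ecs = boto3.client("ecs")

# Run a task on Fargate; no EC2 container instances need to be provisioned.
ecs.run_task(
    cluster="demo-cluster",           # placeholder cluster
    launchType="FARGATE",
    taskDefinition="web-app:1",       # placeholder task definition
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-11111111"],
            "securityGroups": ["sg-22222222"],
            "assignPublicIp": "ENABLED",
        }
    },
)
```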

*Your company has a web application hosted in AWS that makes use of an Application Load Balancer. You need to ensure that the web application is protected from web-based attacks such as cross-site scripting. Which of the following implementation steps can help protect web applications from common security threats from the outside world?* * Place a NAT instance in front of the web application to protect against attacks. * Use the WAF service in front of the web application. * Place a NAT gateway in front of the web application to protect against attacks. * Place the web application in front of a CDN service instead.

*Use the WAF service in front of the web application.* AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. AWS WAF gives you control over which traffic to allow or block to your web applications by defining customizable web security rules. You can use AWS WAF to create custom rules that block common attack patterns, such as SQL injection or cross-site scripting, and rules that are designed for your specific application. ----------------------------------- Options A and C are incorrect because these are used to allow instances in your private subnet to communicate with the internet. Option D is incorrect since a CDN is ideal for content distribution and helps against DDoS attacks, but WAF should be used for these targeted types of web attacks.

*You are running a MySQL database in RDS, and you have been tasked with creating a disaster recovery architecture. What approach is easiest for creating the DR instance in a different region?* * Create an EC2 server in a different region and constantly replicate the database over there. * Create an RDS database in the other region and use third-party software to replicate the data across the database. * While installing the database, use multiple regions. This way, your database gets installed into multiple regions directly. * Use the cross-regional replication functionality of RDS. This will quickly spin off a read replica in a different region that can be used for disaster recovery.

*Use the cross-regional replication functionality of RDS. This will quickly spin off a read replica in a different region that can be used for disaster recovery.* You can achieve this by creating an EC2 server in a different region and replicating, but when your primary site is running on RDS, why not use RDS for the secondary site as well? You can use third-party software for replication, but when the functionality exists out of the box in RDS, why pay extra to any third party? You can't install a database using multiple regions out of the box.

*You have created an instance in EC2, and you want to connect to it. What should you do to log in to the system for the first time?* * Use the username/password combination to log in to the server * Use the key-pair combination (private and public keys) * Use your cell phone to get a text message for secure login * Log in via the root user

*Use the key-pair combination (private and public keys)* The first time you log in to an EC2 instance, you need the combination of the private and public keys. You won't be able to log in using a username and password or as a root user unless you have used the keys. You won't be able to use multifactor authentication until you configure it.

*Your application provides data transformation services. Files containing data to be transformed are first uploaded to Amazon S3 and then transformed by a fleet of Spot EC2 Instances. Files submitted by your premium customers must be transformed with the highest priority. How would you implement such a system?* * Use a DynamoDB table with an attribute defining the priority level. Transform instances will scan the table for tasks, sorting the results by priority level. * Use Route 53 latency-based routing to send high priority tasks to the closest transformation instances. * Use two SQS queues, one for high priority messages, the other for default priority. Transformation instances first poll the high priority queue; if there is no message, they poll the default priority queue. * Use a single SQS queue. Each message contains the priority level. Transformation instances poll high-priority messages first.

*Use two SQS queues, one for high priority messages, the other for default priority. Transformation instances first poll the high priority queue; if there is no message, they poll the default priority queue.* The best way is to use two SQS queues. Each queue can be polled separately, and the high priority queue can be polled first.
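A minimal boto3 sketch of the polling order described above, assuming two placeholder queue URLs; a short wait time is used on the high priority queue so an empty premium queue does not delay the default queue.

```python
import boto3

sqs = boto3.client("sqs")
HIGH_PRIORITY_QUEUE = "https://sqs.us-east-1.amazonaws.com/123456789012/premium-files"   # placeholder
DEFAULT_QUEUE = "https://sqs.us-east-1.amazonaws.com/123456789012/standard-files"        # placeholder

def next_batch():
    """Poll the high priority queue first; fall back to the default queue if it is empty."""
    for queue_url in (HIGH_PRIORITY_QUEUE, DEFAULT_QUEUE):
        resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=1)
        messages = resp.get("Messages", [])
        if messages:
            return queue_url, messages
    return None, []

queue_url, messages = next_batch()
for msg in messages:
    # ... transform the file referenced by msg["Body"] here ...
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```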

*You are using Amazon RDS as a relational database for your web application in AWS. All your data stored in Amazon RDS is encrypted using AWS KMS. Encrypting this data is handled by a separate team of 4 users (Users A, B, C & D) in the Security Team. They have created 2 CMKs for encryption of data. During the annual audit, concerns were raised by the auditors about the access each user has to these CMKs. The Security Team has the following IAM policies & key policies set for AWS KMS.* * -CMK1 is created via the AWS KMS API & has a default key policy.* * -CMK2 is created via the AWS Management Console with a default key policy that allows User D.* * -User C has an IAM policy denying all actions for CMK1 while allowing them for CMK2.* * -Users A & B have an IAM policy allowing access to CMK1 while denying access to CMK2.* * -User D has an IAM policy allowing full access to AWS KMS.* *Which of the following is the correct statement about the access each user has to the AWS KMS CMKs?* * User A & B can use only CMK1, user C cannot use CMK1, while user D can use both CMK1 & CMK2. * User A & B can use CMK1 & CMK2, user C can use only CMK2, while user D can use both CMK1 & CMK2. * User A & B can use CMK1, user C can use CMK1 & CMK2, while user D can use both CMK1 & CMK2. * User A & B can use only CMK1, user C can use only CMK2, while user D cannot use both CMK1 & CMK2

*User A & B can use only CMK1, user C cannot use CMK1, while user D can use both CMK1 & CMK2.* Access to an AWS KMS CMK is a combination of both the key policy & the IAM policy. The IAM policy should grant the user access to AWS KMS, while the key policy is used to control access to the CMK in AWS KMS. ----------------------------------- Option B is incorrect as the CMK2 key policy does not grant access to User C. Also, Users A & B do not have an IAM policy to access CMK2. Option C is incorrect as the CMK2 key policy does not grant access to User C. Also, User C does not have an IAM policy to access CMK1. Option D is incorrect as User D has an IAM policy & key policy to use both CMK1 & CMK2.

*For implementing security features, which of the following would you choose?* (Choose 2) * Username/password * MFA * Using multiple S3 buckets * Login using the root user

*Username/password* *MFA* Using multiple buckets won't help in terms of security, and logging in with the root user for everyday access is not a security best practice.

*Your company runs an automobile reselling company that has a popular online store on AWS. The application sits behind an Auto Scaling group and requires new instances of the Auto Scaling group to identify their public and private IP addresses. You need to inform the development team on how they can achieve this. Which of the following advice would you give to the development team?* * By using ipconfig for Windows or ifconfig for Linux. * By using a CloudWatch metric. * Using a curl or GET command to get the latest meta-data from http://169.254.169.254/latest/meta-data * Using a curl or GET command to get the latest user-data from http://169.254.169.254/latest/user-data

*Using a curl or GET command to get the latest meta-data from http://169.254.169.254/latest/meta-data/* ----------------------------------- Option A is partially correct, but it is an overhead when you already have the metadata service running in AWS. Option B is incorrect, because you cannot get the IP addresses from a CloudWatch metric. Option D is incorrect, because user-data does not contain the IP addresses.
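A small Python sketch of reading the same metadata paths from within an instance; this uses the simple IMDSv1-style request (newer IMDSv2 setups additionally require a session token), and the public-ipv4 path is only present when the instance has a public IP.

```python
import urllib.request

METADATA_BASE = "http://169.254.169.254/latest/meta-data/"

def metadata(path: str) -> str:
    """Read a value from the EC2 instance metadata service."""
    with urllib.request.urlopen(METADATA_BASE + path, timeout=2) as resp:
        return resp.read().decode()

private_ip = metadata("local-ipv4")
public_ip = metadata("public-ipv4")   # only available if the instance has a public IP
print(private_ip, public_ip)
```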

*Your company is doing business in North America, and all your customers are based in the United States and Canada. You are using us-east as a primary region and using the us-west region for disaster recovery. You have a VPC in both regions for hosting all the applications supporting the business. On weekends you are seeing a sudden spike in traffic from China. While going through the log files, you find out that users from China are scanning the open ports to your server. How do you restrict the user from China from connecting to your VPC?* * Using a VPC endpoint * Use CloudTrail * Using security groups * Using a network access control list

*Using a network access control list* You can explicitly deny the traffic from a particular IP address or from a CIDR block via an NACL ----------------------------------- You can't explicitly deny traffic using security groups. A VPC endpoint is used to communicate privately between a VPC and services such as S3 and DynamoDB. Using CloudTrail, you can find the trail of API activities, but you can't block any traffic.
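A minimal boto3 sketch of adding an explicit inbound deny rule to a network ACL; the NACL ID, rule number and CIDR block are placeholder example values.

```python
import boto3

ec2 = boto3.client("ec2")

# Add an inbound deny rule for a sample CIDR block on the subnet's network ACL.
ec2.create_network_acl_entry(
    NetworkAclId="acl-11111111",    # placeholder NACL
    RuleNumber=90,                  # evaluated before higher-numbered allow rules
    Protocol="-1",                  # all protocols
    RuleAction="deny",
    Egress=False,                   # inbound rule
    CidrBlock="203.0.113.0/24",     # example CIDR to block
)
```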

*You need to restore an object from Glacier class in S3. Which of the following will help you do that?* (Choose 2) * Using the AWS S3 Console * Using the S3 REST API * Using the Glacier API * Using the S3 subcommand from AWS CLI

*Using the AWS S3 Console* *Using the S3 REST API* When discussing Glacier it is important to distinguish between the 'Glacier' storage class used by S3 and the 'S3 Glacier' service. The first is managed via the 'S3' console & API, and the second via the 'S3 Glacier' console & API. The Amazon 'S3' service maintains the mapping between your user-defined object name and the Amazon Glacier system-defined identifier. Objects archived by S3 into the Glacier storage class are not accessible via the 'S3 Glacier' service; they are only accessible through the Amazon 'S3' console or APIs.

*A customer wants to import their existing virtual machines to the cloud. Which service can they use for this?* * VM Import/Export * AWS Import/Export * AWS Storage Gateway * DB Migration Service

*VM Import/Export* VM Import/Export enables customers to import Virtual Machine (VM) images in order to create Amazon EC2 instances. Customers can also export previously imported EC2 instances to create VMs. Customers can use VM Import/Export to leverage their previous investments in building VMs by migrating their VMs to Amazon EC2. A few strategies used for migrations are: 1. Forklift migration strategy 2. Hybrid migration strategy 3. Creating AMIs *AWS Import/Export* - is a data transport service used to move large amounts of data into and out of the Amazon Web Services public cloud using portable storage devices for transport. *AWS Storage Gateway* - connects an on-premises software appliance with cloud-based storage to provide seamless integration with data security features between the on-premises IT environment and the AWS storage infrastructure. The gateway provides access to objects in S3 as files or file share mount points. *DB Migration Service* - can migrate your data to and from most of the widely used commercial and open source databases. It supports homogeneous migrations such as Oracle to Oracle, as well as heterogeneous migrations between different database platforms, such as Oracle to Amazon Aurora.

*You have hosted lots of resources in the AWS cloud in a VPC. Your IT security team wants to monitor all the traffic from all the network interfaces in the VPC. Which service should you use to monitor this?* * CloudTrail * CloudWatch * VPC Flow Logs * EC2 instance server logs

*VPC Flow Logs* Flow Logs are used to monitor all the network traffic within a VPC. ------------------------- CloudTrail is used to monitor API actions, and CloudWatch is used to monitor everything in general in the cloud. EC2 server logs won't provide the information you are looking for.

*There is a requirement to get the IP addresses for resources accessed in a private subnet. Which of the following can be used to fulfill this purpose?* * Trusted Advisor * VPC Flow Logs * Use CloudWatch metrics * Use CloudTrail

*VPC Flow Logs* VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. Flow log data is stored using Amazon CloudWatch Logs. After you've created a flow log, you can view and retrieve its data in Amazon CloudWatch Logs. ----------------------------------- AWS Trusted Advisor is your customized cloud expert! It helps you observe best practices for the use of AWS by inspecting your AWS environment with an eye toward saving money, improving system performance and reliability, and closing security gaps. AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudWatch metrics are mainly used for performance metrics.

*A company has a set of resources hosted in a VPC on the AWS Cloud. The IT Security department has now mandated that all IP traffic from all network interfaces in the VPC be monitored. Which of the following would help satisfy this requirement?* * Trusted Advisor * VPC Flow Logs * Use CloudWatch metrics * Use CloudTrail

*VPC Flow Logs* VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. Flow log data is stored using Amazon CloudWatch Logs. After you've created a flow log, you can view and retrieve its data in Amazon CloudWatch Logs.
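A minimal boto3 sketch of enabling flow logs for a VPC with delivery to CloudWatch Logs; the VPC ID, log group name and IAM role ARN are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Capture all traffic for the VPC and deliver it to a CloudWatch Logs group.
ec2.create_flow_logs(
    ResourceIds=["vpc-11111111"],           # placeholder VPC
    ResourceType="VPC",
    TrafficType="ALL",                      # ACCEPT, REJECT or ALL
    LogGroupName="vpc-flow-logs",           # placeholder log group
    DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/flow-logs-role",  # placeholder role
)
```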

*A company has a set of VPC's defined in AWS. They need to connect this to their on-premise network. They need to ensure that all data is encrypted in transit. Which of the following would you use to connect the VPC's to the on-premise networks?* * VPC Peering * VPN connections * AWS Direct Connect * Placement Groups

*VPN connections* By default, instances that you launch into an Amazon VPC can't communicate with your own (remote) network. You can enable access to your remote network from your VPC by attaching a virtual private gateway to the VPC, creating a custom route table, updating your security group rules, and creating an AWS managed VPN connection. ----------------------------------- Option A is incorrect because this is used to connect multiple VPCs together. Option C is incorrect because this does not encrypt traffic in transit between AWS VPCs and the on-premises network. Option D is incorrect because this is used for low latency access between EC2 instances.

*You want to connect your on-premise data center to AWS using VPN. What things do you need to initiate a VPN connection?* (Choose two.) * AWS Direct Connect * Virtual private gateway * Customer gateway * Internet gateway

*Virtual private gateway* *Customer gateway* A virtual private gateway is the VPN concentrator on the Amazon side of the VPN connection, whereas a customer gateway is the VPN concentrator at your on-premise data center. ------------------------- The question is asking for a solution for VPN. Direct Connect is not a VPN connection; rather, it is dedicated connectivity. An Internet gateway is used to provide Internet access.

*To enable your Lambda function to access resources inside your private VPC, you must provide additional VPC-specific configuration information. Select all correct statements about that.* (Choose 2) * If your Lambda function needs to access both VPC resources and public Internet, the VPC needs to have a NAT instance inside your VPC, you can use the Amazon VPC NAT gateway or you can use an Internet gateway attached to your VPC. * When you add VPC configuration to a Lambda function, it can only access resources in that VPC. However, you can specify multiple VPC using the VpcConfig parameter. Simply comma separate the VPC subnet and security group IDs. * AWS Lambda uses the provided VPC-specific configuration information to set up elastic network interfaces. Therefore, your Lambda function execution role must have permissions to create, describe and delete these. * AWS Lambda does also support connecting to resources within Dedicated Tenancy VPCs.

*When you add VPC configuration to a Lambda function, it can only access resources in that VPC. However, you can specify multiple VPC using the VpcConfig parameter. Simply comma separate the VPC subnet and security group IDs.* *AWS Lambda uses the provided VPC-specific configuration information to set up elastic network interfaces. Therefore, your Lambda function execution role must have permissions to create, describe and delete these.* ----------------------------------- AWS Lambda does not support connecting to resources within Dedicated Tenancy VPCs. If your Lambda function requires Internet access, you cannot use an Internet gateway attached to your VPC since that requires the ENI to have public IP addresses.
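A minimal boto3 sketch of the VpcConfig parameter mentioned above, applied to an existing function; the function name, subnet IDs and security group ID are placeholder assumptions.

```python
import boto3

lambda_client = boto3.client("lambda")

# Attach an existing function to private subnets so it can reach resources in the VPC.
lambda_client.update_function_configuration(
    FunctionName="my-function",             # placeholder function name
    VpcConfig={
        "SubnetIds": ["subnet-11111111", "subnet-22222222"],
        "SecurityGroupIds": ["sg-33333333"],
    },
)
```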

*You are architecting an internal-only application. How can you make sure the ELB does not have any Internet access?* * You detach the Internet gateway from the ELB. * You create the instances in the private subnet and hook up the ELB with that. * The VPC should not have any Internet gateway attached. * When you create the ELB from the console, you can define whether it is internal or external.

*When you create the ELB from the console, you can define whether it is internal or external.* You can't attach or detach an Internet gateway with ELB, even if you create the instances in private subnet, and if you create an external-facing ELB instance, it will have Internet connectivity. The same applies for VPC; even if you take an IG out of the VPC but create ELB as external facing, it will still have Internet connectivity.

*If an Amazon EBS volume is an additional partition (not the root volume), can I detach it without stopping the instance?* * No, you will need to stop the instance. * Yes, although it may take some time.

*Yes, although it may take some time.*

*You are hosting your critical e-commerce web site on a fleet of EC2 servers along with Auto Scaling. During the weekends you see a spike in the system, and therefore in your Auto Scaling group you have specified a minimum of 10 EC2 servers and a maximum of 40 EC2 servers. During the weekend you noticed some slowness in performance, and when you did some research, you found out that only 20 EC2 servers had been started. Auto Scaling is not able to scale beyond 20 EC2 instances. How do you fix the performance problem?* * Use 40 reserved instances. * Use 40 spot instances. * Use EC2 instances in a different region. * You are beyond the EC2 service limit. You need to increase the service limit.

*You are beyond the EC2 service limit. You need to increase the service limit.* By default, an AWS account can run only 20 On-Demand EC2 instances per region. Once this limit is reached, you can't provision any new EC2 instances until you raise the limit by logging a support ticket (or requesting an increase through Service Quotas). ------------------------- Reserved Instances are not recommended when you don't know how much capacity is needed. Spot Instances are not a good fit since you are running a critical e-commerce web site and they can be interrupted. If you use EC2 instances in a different region, you will have a performance (latency) issue.

*Your company has a set of AWS RDS Instances. Your management has asked you to disable Automated backups to save on costs. When you disable automated backups for AWS RDS, what are you compromising on?* * Nothing, you are actually savings resources on aws * You are disabling the point-in-time recovery * Nothing really, you can still take manual backups * You cannot disable automated backups in RDS

*You are disabling the point-in-time recovery.* Amazon RDS creates a storage volume snapshot of your DB instance, backing up the entire DB instance and not just individual databases. You can set the backup retention period when you create a DB instance. If you don't set the backup retention period, Amazon RDS uses a default retention period of one day. You can modify the backup retention period; valid values are 0 (for no backup retention) to a maximum of 35 days.
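A small boto3 sketch of changing the retention period on an existing instance; the instance identifier is a hypothetical placeholder:

    import boto3

    rds = boto3.client("rds")

    # BackupRetentionPeriod=0 disables automated backups (and therefore
    # point-in-time recovery); any value from 1 to 35 keeps that many days.
    rds.modify_db_instance(
        DBInstanceIdentifier="prod-db",     # hypothetical
        BackupRetentionPeriod=0,
        ApplyImmediately=True,
    )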

*You are doing an audit for a company, and during the audit, you find that the company has kept lots of log files in a public bucket in Amazon S3. You try to delete them, but you are unable to do it. What could be the reason for this?* * The log files in the buckets are encrypted * The versioning is enabled in the bucket * You are not the owner of the bucket; that's why you can't delete them * Only the employee of the company can delete the object

*You are not the owner of the bucket; that's why you can't delete them* You can't delete objects from a bucket that you don't own. ------------------------- It doesn't matter whether objects in the bucket are encrypted or versioning is enabled; the files can be deleted by the owner of the bucket. Since you are not the owner of the bucket, you can't delete the files. If a company has 100,000 employees, do you want all of them to be able to delete the objects? Of course not.

*What is the common use case for a storage gateway?* (Choose two.) * You can integrate the on-premise environment with AWS. * You can use it to move data from on-premise to AWS. * You can create a private AWS cloud on your data center. * It can be used in lieu of AWS Direct Connect.

*You can integrate the on-premise environment with AWS.* *You can use it to move data from on-premise to AWS.* You can use an AWS storage gateway to integrate on-premise environments with those running on AWS and use it to transfer data from on-premise to the AWS Cloud. ------------------------- AWS is a public cloud provider, and the AWS services can't be used in a private cloud model; that concept simply does not exist. AWS Direct Connect links your internal network to an AWS Direct Connect location over a standard 1 Gbps or 10 Gbps fiber-optic Ethernet connection. With this connection in place, you can create virtual interfaces directly to the AWS cloud and Amazon VPC, bypassing Internet service providers in your network path.

*You are running an application on an EC2 server, and you are using the instance store to keep all the application data. One day you realize that all the data stored in the instance store is gone. What could be the reason for this?* (Choose two.) * You have rebooted the instance. * You have stopped the instance. * You have terminated the instance. * You have configured an ELB with the EC2 server.

*You have stopped the instance.* *You have terminated the instance.* Data stored in the instance store persists only for the lifetime of the instance. When you stop the instance, the instance-store data is gone; similarly, when you terminate the instance, you lose all of that data. ------------------------- A reboot or restart keeps the instance on the same host, so the instance-store data is preserved. As a best practice, you should never store important data in the instance store and should always use an EBS volume for important data. Adding or removing an ELB with the instance does not delete the data.

*To save costs, you shut down some of your EC2 servers over the weekend that were used to deploy your application. One Monday when you try the application, you realize that you are not able to start the application since all the data that was stored in the application is gone. What could be the reason for this?* * You forgot to connect to the EBS volume after restarting the EC2 servers on Monday. * There must be a failure in the EBS volume for all the EC2 servers. * You must have used the instance store for storing all the data. * Someone must have edited your /etc/fstab parameter file.

*You must have used the instance store for storing all the data.* Instance storage is ephemeral, and all the data in that storage is gone the moment you shut down the server. ----------------------------------- If you had attached an EBS volume to an EC2 instance, once it is configured there is no need to manually reattach it after every restart. There can't be an EBS failure for all the EC2 servers. Changing a few parameters in /etc/fstab won't delete your data.

*You want to deploy a PCI-compliant application on AWS. You will be deploying your application on EC2 servers and will be using RDS to host your database. You have read that the AWS services you are going to use are PCI compliant. What steps do you need to take to make the application PCI compliant?* * Nothing. Since AWS is PCI compliant, you don't have to do anything. * Encrypt the database, which will make sure the application is PCI compliant. * Encrypt the database and EBS volume from the EC2 server. * You need to follow all the steps as per the PCI requirements, from the application to the database, to make the application compliant.

*You need to follow all the steps as per the PCI requirements, from the application to the database, to make the application compliant.* AWS follows the shared responsibility model: AWS is responsible for security of the cloud, while you are responsible for security in the cloud. Just putting your application on AWS won't make it PCI compliant; you need to do your part as well. ------------------------- You need to follow all the steps in the PCI documentation to make your application PCI compliant; you can't just encrypt the database and application and be done with it.

*You have a production application that is on the largest RDS instance possible, and you are still approaching CPU utilization bottlenecks. You have implemented read replicas, ElastiCache, and even CloudFront and S3 to cache static assets, but you are still bottlenecking. What should your next troubleshooting step be?* * You should provision a secondary RDS instance and then implement an ELB to spread the load between the two RDS instances. * You should consider using RDS Multi-AZ and using the secondary AZ nodes as read only nodes to further offset load. * You have reached the limits of public cloud. You should get a dedicated database server and host this locally within your own data center. * You should implement database partitioning and spread your data across multiple DB Instances.

*You should implement database partitioning and spread your data across multiple DB Instances.* If your application requires more compute resources than the largest DB instance class or more storage than the maximum allocation, you can implement partitioning, thereby spreading your data across multiple DB instances.

*From the command line, which of the following should you run to get the public hostname of an EC2 instance?* * curl http://254.169.254.169/latest/user/data/public-hostname * curl http://169.254.169.254/latest/meta-data/public-hostname * curl http://254.169.254.169/latest/meta-data/public-hostname * curl http://169.254.169.254/latest/user-data/public-hostname

*curl http://169.254.169.254/latest/meta-data/public-hostname* You would use the command: curl http://169.254.169.254/latest/meta-data/public-hostname

*You are running a mission-critical three-tier application on AWS and have enabled Amazon CloudWatch metrics for a one-minute data point. How far back you can go and see the metrics?* * One Week * 24 hours * One month * 15 days

15 days * When CloudWatch is enabled for a one-minute data point, the retention is 15 days.

*If you are using Amazon RDS Provisioned IOPS storage with a Microsoft SQL Server database engine, what is the maximum size RDS volume you can have by default?* * 1TB * 32TB * 16TB * 6TB * 500GB

16TB

*MySQL installations default to port number ________.* * 3306 * 1433 * 3389 * 80

3306

*In RDS, what is the maximum value I can set for my backup retention period?* * 35 Days * 15 Days * 45 Days * 30 Days

35 Days

*What workloads can you deploy using Elastic Beanstalk?* (Choose two.) * A static web site * Storing data for data lake for big data processing * A long-running job that runs overnight * A web application

A static web site, A web application * You can deploy a web application or a static web site using Elastic Beanstalk. Elastic Beanstalk can't be used for storing a data lake's data. If you have a long-running job that runs overnight, you can use AWS Batch.

*In the past, someone made some changes to your security group, and as a result an instance is not accessible by the users for some time. This resulted in nasty downtime for the application. You are looking to find out what change has been made in the system, and you want to track it. Which AWS service are you going to use for this?* * AWS Config * Amazon CloudWatch * AWS CloudTrail * AWS Trusted Advisor

AWS Config * AWS Config maintains the configuration of the system and helps you identify what change was made in it.

*You are a developer and want to deploy your application in AWS. You don't have an infrastructure background, and you are not sure how to use infrastructure within AWS. You are looking to deploy your application in such a way that the infrastructure scales on its own, and at the same time you don't have to deal with managing it. Which AWS service are you going to choose for this?* * AWS Config * AWS Lambda * AWS Elastic Beanstalk * Amazon EC2 servers and Auto Scaling

AWS Elastic Beanstalk * AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications. You can simply upload your code and Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, auto-scaling to application health monitoring.

*Your company has more than 20 business units, and each business unit has its own account in AWS. Which AWS service would you choose to manage the billing across all the different AWS accounts?* * AWS Organizations * AWS Trusted Advisor * AWS Cost Advisor * AWS Billing Console

AWS Organizations * Using AWS Organizations, you can manage the billing from various AWS accounts.

*You are designing an e-commerce order management web site where your users can order different types of goods. You want to decouple the architecture and would like to separate the ordering process from shipping. Depending on the shipping priority, you want to separate queue running for standard shipping versus priority shipping. Which AWS service would you consider for this?* * AWS CloudWatch * AWS CloudWatch Events * AWS API Gateway * AWS SQS

AWS SQS * Using SQS, you can decouple the ordering process from shipping, and you can create separate queues for standard and priority shipping.
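A minimal boto3 sketch of the two-queue setup; the queue names and message body are hypothetical:

    import boto3

    sqs = boto3.client("sqs")

    # One queue per shipping priority decouples ordering from shipping.
    standard_q = sqs.create_queue(QueueName="standard-shipping")["QueueUrl"]
    priority_q = sqs.create_queue(QueueName="priority-shipping")["QueueUrl"]

    # The ordering process publishes each order to the queue that matches its priority.
    sqs.send_message(QueueUrl=priority_q, MessageBody='{"orderId": "12345"}')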

*What is the AWS service you are going to use to monitor the service limit of your EC2 instance?* * EC2 dashboard * AWS Trusted Advisor * AWS CloudWatch * AWS Config

AWS Trusted Advisor * Using Trusted Advisor, you can monitor the service limits for the EC2 instance.

*Which AWS service is ideal for Business Intelligence Tools/Data Warehousing?* * Elastic Beanstalk * ElastiCache * DynamoDB * Redshift

Redshift

*You are hosting a MySQL database on the root volume of an EC2 instance. The database is using a large number of IOPS, and you need to increase the number of IOPS available to it. What should you do?* * Migrate the database to an S3 bucket. * Use Cloud Front to cache the database. * Add 4 additional EBS SSD volumes and create a RAID 10 using these volumes. * Migrate the database to Glacier.

Add 4 additional EBS SSD volumes and create a RAID 10 using these volumes.
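The AWS-side half of this answer can be scripted; the sketch below (with a hypothetical AZ, size, and instance ID) creates and attaches four SSD volumes, after which the RAID 10 array itself would be assembled inside the operating system, for example with mdadm on Linux:

    import boto3

    ec2 = boto3.client("ec2")

    # Create and attach four SSD volumes; the RAID array is built in the OS, not via the AWS API.
    devices = ["/dev/sdf", "/dev/sdg", "/dev/sdh", "/dev/sdi"]
    for device in devices:
        volume = ec2.create_volume(
            AvailabilityZone="us-east-1a",       # must match the instance's AZ (hypothetical)
            Size=100,
            VolumeType="gp3",
        )
        ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
        ec2.attach_volume(
            VolumeId=volume["VolumeId"],
            InstanceId="i-0123456789abcdef0",    # hypothetical instance ID
            Device=device,
        )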

*You are running a job in an EMR cluster, and the job is running for a long period of time. You want to add additional horsepower to your cluster, and at the same time you want to make sure it is cost effective. What is the best way of solving this problem?* * Add more on-demand EC2 instances for your task node * Add more on-demand EC2 instances for your core node * Add more spot instances for your task node * Add more reserved instances for your task node

Add more spot instances for your task node * You can add more spot instances to your task nodes to finish the job early. Spot instances are the cheapest option, which keeps the solution cost effective.
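A rough boto3 sketch of adding a Spot task instance group to a running cluster; the cluster ID, instance type, and count are hypothetical:

    import boto3

    emr = boto3.client("emr")

    # Add a Spot task instance group to an existing cluster for extra, low-cost capacity.
    emr.add_instance_groups(
        JobFlowId="j-ABCDEFGHIJKLM",            # hypothetical cluster ID
        InstanceGroups=[
            {
                "Name": "spot-task-nodes",
                "InstanceRole": "TASK",
                "Market": "SPOT",
                "InstanceType": "m5.xlarge",
                "InstanceCount": 4,
            }
        ],
    )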

*You have created a custom subnet, but you forgot to add a route for Internet connectivity. As a result, all the web servers running in that subnet don't have any Internet access. How will you make sure all the web servers can access the Internet?* * Attach a virtual private gateway to the subnet for destination 0.0.0.0/0 * Attach an Internet gateway to the subnet for destination 0.0.0.0/0 * Attach an Internet gateway to the security group of EC2 instances for the destination 0.0.0.0/0 * Attach a VPC endpoint to the subnet

Attach an Internet gateway to the subnet for destination 0.0.0.0/0 You need an Internet gateway attached to the VPC and a route for destination 0.0.0.0/0 pointing to it in the subnet's route table so that the subnet can talk to the Internet. * A virtual private gateway is used to create a VPN connection. You cannot attach an Internet gateway to an EC2 instance or its security group; routing works at the subnet (route table) level. A VPC endpoint is used so services such as S3 or DynamoDB can be reached from Amazon VPC without traversing the Internet.
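A short boto3 sketch of the fix, assuming a hypothetical VPC ID and route table ID for the affected subnet:

    import boto3

    ec2 = boto3.client("ec2")

    # Attach an Internet gateway to the VPC, then add a default route (0.0.0.0/0)
    # to it in the route table associated with the web servers' subnet.
    igw = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
    ec2.attach_internet_gateway(InternetGatewayId=igw, VpcId="vpc-0123456789abcdef0")
    ec2.create_route(
        RouteTableId="rtb-0123456789abcdef0",   # route table of the subnet (hypothetical)
        DestinationCidrBlock="0.0.0.0/0",
        GatewayId=igw,
    )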

*Your resources were running fine in AWS, and all of a sudden you notice that something has changed. Your cloud security team told you that some API call has changed the state of your resources that were running fine earlier. How do you track down who made the change?* * By writing a Lambda function, you can find who has changed what * By using AWS CloudTrail * By using Amazon CloudWatch Events * By using AWS Trusted Advisor

By using AWS CloudTrail * Using AWS CloudTrail, you can find out who has changed what via API.

*You are running all your AWS resources in the US-East region, and you are not leveraging a second region in AWS. However, you want to keep your infrastructure as code so that you are able to fail over to a different region if a disaster occurs. Which AWS service will you choose to provision the resources in a second region that look identical to your resources in the US-East region?* * Amazon EC2, VPC, and RDS * Elastic Beanstalk * OpsWorks * CloudFormation

CloudFormation * Using CloudFormation, you can keep the infrastructure as code, and you can create a CloudFormation template to mimic the setup in an existing region and can deploy the CloudFormation template in a different region to create the resource.
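As a sketch, the same (hypothetical) template stored in S3 could be deployed to a DR region just by pointing the CloudFormation client at that region:

    import boto3

    # Deploy the existing template into a second region for disaster recovery.
    cfn = boto3.client("cloudformation", region_name="us-west-2")           # hypothetical DR region
    cfn.create_stack(
        StackName="dr-stack",
        TemplateURL="https://s3.amazonaws.com/my-bucket/infra.yaml",        # hypothetical template location
        Capabilities=["CAPABILITY_IAM"],
    )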

*Recently you had a big outage on your web site because of a DDoS attack, and you lost a big chunk of revenue since your application was down for some time. You don't want to take any chances and would like to secure your web site from any future DDoS attacks. Which AWS service can help you achieve your goal?* (Choose two.) * CloudFront * Web Application Firewall * Config * Trusted Advisor

CloudFront, Web Application Firewall * CloudFront can be integrated with WAF, which can protect against a DDoS attack. Config helps you monitor the change in state of existing AWS resources, which has nothing to do with DDoS attacks. Trusted Advisor is an online resource to help you reduce cost, increase performance, and improve security by optimizing your AWS environment.

*You are running a three-tier web application on AWS. Currently you don't have a reporting system in place, and your application owner is asking you to add a reporting tier to the application. You are concerned that if you add a reporting layer on top of the existing database, the performance is going to be degraded. Today you are using a Multi-AZ RDS MySQL to host the database. What can you do to add the reporting layer while making sure there is no performance impact on the existing application?* * Use the Database Migration Service and move the database to a different region and run reporting from there. * Create a read replica of the RDS MySQL and run reporting from the read replica. * Export the data from RDS MySQL to S3 and run the reporting from S3. * Use the standby database for running the reporting.

Create a read replica of the RDS MySQL and run reporting from the read replica. * RDS MySQL lets you create read replicas (up to 15 per source instance) to offload read-only workloads, which is exactly what a reporting tier needs. -------------------- Since the capability is built into RDS, there is no reason to use the Database Migration Service. Exporting the data to S3 is going to be painful, and are you going to export the data every day? The standby database is not open for reads, so you can't run reporting from it.
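A minimal boto3 sketch; the source and replica identifiers are hypothetical:

    import boto3

    rds = boto3.client("rds")

    # Create a read replica of the production MySQL instance; the reporting tier
    # then connects to the replica's endpoint instead of the primary.
    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="prod-mysql-reporting",    # hypothetical replica name
        SourceDBInstanceIdentifier="prod-mysql",        # hypothetical source instance
    )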

*You are building a system to distribute training videos to employees. Using CloudFront, what method could be used to serve content that is stored in S3 but not publicly accessible from S3 directly?* * Create an origin access identity (OAI) for CloudFront and grant access to the object in your S3 bucket to that OAI * Add CloudFront account security group called "amazon-cf/amazon-cf-sg" to the appropriate S3 bucket policy * Create an Identity and Access Management (IAM) user for CloudFront and grant access to the objects in your S3 bucket to that IAM user * Create an S3 bucket policy that lists the CloudFront distribution ID as the principal and target bucket as the Amazon Resource Name (ARN)

Create an origin access identity (OAI) for CloudFront and grant access to the objects in your S3 bucket to that OAI
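A hedged boto3 sketch of the two steps: create the OAI, then restrict the (hypothetical) bucket so only that OAI can read objects. The bucket name is a placeholder, and the principal ARN follows the documented OAI convention:

    import json
    import uuid

    import boto3

    cloudfront = boto3.client("cloudfront")
    s3 = boto3.client("s3")

    # Step 1: create the origin access identity.
    oai = cloudfront.create_cloud_front_origin_access_identity(
        CloudFrontOriginAccessIdentityConfig={
            "CallerReference": str(uuid.uuid4()),
            "Comment": "OAI for training-videos bucket",
        }
    )["CloudFrontOriginAccessIdentity"]["Id"]

    # Step 2: allow only that OAI to read objects in the bucket.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {oai}"},
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::training-videos/*",   # hypothetical bucket
        }],
    }
    s3.put_bucket_policy(Bucket="training-videos", Policy=json.dumps(policy))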

*Which of the following is most suitable for OLAP?* * Redshift * DynamoDB * RDS * ElastiCache

Redshift * Redshift would be the most suitable for online analytics processing.

*Which of the following AWS services is a non-relational database?* * Redshift * Elasticache * RDS * DynamoDB

DynamoDB

*If I wanted to run a database on an EC2 instance, which of the following storage options would Amazon recommend?* * S3 * EBS * Glacier * RDS

EBS

*Which of the following statements are true for an EBS volumes?* (Choose two.) * EBS replicates within its availability zones to protect your applications from component failure. * EBS replicates across different availability zones to protect your applications from component failure. * EBS replicates across different regions to protect your applications from component failure. * Amazon EBS volumes provide 99.999 percent availability

EBS replicates within its Availability Zone to protect your applications from component failure. Amazon EBS volumes provide 99.999 percent availability. * EBS does not replicate to a different AZ or region.

*To protect S3 data from both accidental deletion and accidental overwriting, you should:* * Enable S3 versioning on the bucket * Access S3 data using only signed URLs * Disable S3 delete using an IAM bucket policy * Enable S3 reduced redundancy storage * Enable multifactor authentication (MFA) protected access

Enable S3 versioning on the bucket * Versioning protects against both accidental deletion and accidental overwriting. Signed URLs won't help, and even disabling delete does not protect against overwrites.

*If you want your application to check RDS for an error, have it look for an ______ node in the response from the Amazon RDS API.* * Abort * Incorrect * Error * Exit

Error

*When you add a rule to an RDS DB security group, you must specify a port number or protocol.* * True * False

False * Technically a destination port number is needed; however, with a DB security group the RDS instance's port number is applied automatically, so you don't specify a port or protocol when adding a rule.

*What happens to the I/O operations of a single-AZ RDS instance during a database snapshot or backup?* * I/O operations to the database are sent to a Secondary instance of a Multi-AZ installation (for the duration of the snapshot.) * I/O operations will function normally. * Nothing. * I/O may be briefly suspended while the backup process initializes (typically under a few seconds), and you may experience a brief period of elevated latency.

I/O may be briefly suspended while the backup process initializes (typically under a few seconds), and you may experience a brief period of elevated latency.

*Under what circumstances would I choose provisioned IOPS over standard storage when creating an RDS instance?* * If you use online transaction processing in your production environment. * If your business was trying to save money. * If this was a test DB. * If you have workloads that are not sensitive to latency/lag.

If you use online transaction processing in your production environment

*Amazon Glacier is designed for which of the following?* (Choose two.) * Active database storage * Infrequently accessed data * Data archives * Frequently accessed data * Cached session data

Infrequently accessed data, Data archives * Amazon Glacier is designed for data archival and for infrequently accessed data, not for active, frequently accessed, or cached data.

*You have a huge amount of data to be ingested. You don't have a very stringent SLA for it. Which product should you use?* * Kinesis Data Streams * Kinesis Data Firehose * Kinesis Data Analytics * S3

Kinesis Data Firehose * Kinesis Data Firehose buffers incoming data and delivers it in near real time, which is good enough when the SLA is not stringent. Kinesis Data Streams is used for ingesting real-time data with tighter latency requirements, Kinesis Data Analytics is used for transformation, and S3 is used to store the data.

*How do you protect access to and the use of the AWS account's root user credentials?* (Choose two.) * Never use the root user * Use Multi-Factor Authentication (MFA) along with the root user * Use the root user only for important operations * Lock the root user

Never use the root user. Use Multi-Factor Authentication (MFA) along with the root user * It is critical to keep the root user's credentials protected. To this end, AWS recommends enabling MFA on the root user and locking the root credentials (and the MFA device) away in a physically secured location. IAM allows you to create and manage non-root users and their permissions, as well as establish access levels to resources.

*An instance is launched into the public subnet of a VPC. Which of the following must be done for it to be accessible from the Internet?* * Attach an elastic IP to the instance. * Nothing. The instance is accessible from the Internet. * Launch a NAT gateway and route all traffic to it. * Make an entry in the route table, passing all traffic going outside the VPC through the NAT instance.

Nothing. The instance is accessible from the Internet. * Since the instance is launched in the public subnet, which already has a route to the VPC's Internet gateway (and a public IP is assigned at launch), you don't have to do anything explicitly.

*Your web application front end consists of multiple EC2 instance behind an elastic load balancer. You configured an elastic load balancer to perform health checks on these EC2 instances. If an instance fails to pass health checks, which statement will be true?* * The instance is replaced automatically by the elastic load balancer. * The instance gets terminated automatically by the elastic load balancer. * The ELB stops sending traffic to the instance that failed its health check. * The instance gets quarantined by the elastic load balancer for root-cause analysis.

The ELB stops sending traffic to the instance that failed its health check.

*Which of the following is not a feature of DynamoDB?* * The ability to store relational based data * The ability to perform operations by using a user-defined primary key * The primary key can either be a single-attribute or a composite * Data reads that are either eventually consistent or strongly consistent

The ability to store relational based data * DynamoDB is the AWS managed NoSQL database service. It has many features, with more being added constantly, but it is not an RDBMS service, and therefore it will never have the ability to store relational data. All of the other options listed are valid features of DynamoDB.

*Which of the following will occur when an EC2 instance in a VPC with an associated elastic IP is stopped and started?* (Choose two.) * The elastic IP will be dissociated from the instance. * All data on instance-store devices will be lost. * All data on Elastic Block Store (EBS) devices will be lost. * The Elastic Network Interface (ENI) is detached. * The underlying host for the instance is changed.

The elastic IP will be dissociated from the instance. The Elastic Network Interface (ENI) is detached. * If you have any data in the instance store, that will also be lost, but you should not choose this option since the question is regarding the elastic IP.

*To execute code in AWS Lambda, what is the size the EC2 instance you need to provision in the back end?* * For code running less than one minute, use a T2 Micro. * For code running between one minute and three minutes, use M2. * For code running between three minutes and five minutes, use M2 large. * There is no need to provision an EC2 instance on the back end.

There is no need to provision an EC2 instance on the back end. * There is no need to provision EC2 servers since Lambda is serverless.

*What data transfer charge is incurred when replicating data from your primary RDS instance to your secondary RDS instance?* * There is no charge associated with this action. * The charge is half of the standard data transfer charge. * The charge is the same as the standard data transfer charge. * The charge is double the standard data transfer charge

There is no charge associated with this action.

*You just deployed a three-tier architecture in AWS. The web tier is a public subnet, and the application and database tiers are in a private subnet. You need to download some OS updates for the application. You want a permanent solution for this, which at the same time should be highly available. What is the best way to achieve this?* * Use an Internet gateway * Use a NAT gateway * Use a NAT instance * Use a VPC endpoint

Use a NAT gateway * A NAT gateway provides high availability. * A, C and D are incorrect. A NAT instance doesn't provide high availability. If you want to get high availability from a NAT instance you need multiple NAT instances, which adds to the cost. An Internet gateway is already attached to a public subnet; if you attached an Internet gateway to a private subnet, it no longer remains a private subnet. Finally, a VPC endpoint provides private connectivity from an AWS service to Amazon VPC.

*What is an important criterion when planning your network topology in AWS?* * Use both IPv4 and IPv6 addresses. * Use nonoverlapping IP addresses. * You should have the same IP address that you have on-premise. * Reserve as many EIP addresses as you can since IPv4 IP addresses are limited.

Use nonoverlapping IP addresses. * Whether to use IPv4 or IPv6 depends on what you are trying to do. You can't keep the same IP ranges as on-premise: when you integrate the on-premise application with the cloud, you will end up with overlapping IP addresses, and your application in the cloud won't be able to talk to the on-premise application. You should allocate only the number of EIPs you need; an EIP that is allocated but not in use incurs a charge.

*Which of the following is correct?* * # of Regions > # of Availability Zones > # of Edge Locations * # of Availability Zones > # of Regions > # of Edge Locations * # of Availability Zones > # of Edge Locations > # of Regions * # of Edge Locations > # of Availability Zones > # of Regions

*# of Edge Locations > # of Availability Zones > # of Regions* The number of Edge Locations is greater than the number of Availability Zones, which is greater than the number of Regions.

*What is the minimum file size that I can store on S3?* * 1MB * 0 bytes * 1 byte * 1KB

*0 bytes*

*How many S3 buckets can I have per account by default?* * 20 * 10 * 50 * 100

*100*

*What is the availability of S3-OneZone-IA?* * 99.90% * 100% * 99.50% * 99.99%

*99.50%* OneZone-IA is only stored in one Zone. While it has the same Durability, it may be less available than normal S3 or S3-IA.

*What is the availability of objects stored in S3?* * 99.99% * 100% * 99.90% * 99%

*99.99%*

*What does an AWS Region consist of?* * A console that gives you quick, global picture of your cloud computing environment. * A collection of data centers that is spread evenly around a specific continent. * A collection of databases that can only be accessed from a specific geographic region. * A distinct location within a geographic area designed to provide high availability to specific geography.

*A distinct location within a geographic area designed to provide high availability to a specific geography* Each region is a separate geographic area. Each region has multiple, isolated locations known as Availability Zones.

*What is an AWS region?* * A region is an independent data center, located in different countries around the globe. * A region is a geographical area divided into Availability Zones. Each region contains at least two Availability Zones. * A region is a collection of Edge Locations available in specific countries. * A region is a subset of AWS technologies. For example, the Compute region consists of EC2, ECS, Lamda, etc.

*A region is a geographical area divided into Availability Zones.* Each region contains at least two Availability Zones.

*What is a way of connecting your data center with AWS?* * AWS Direct Connect * Optical fiber * Using an Infiniband cable * Using a popular Internet service from a vendor such as Comcast or AT&T

*AWS Direct Connect* ----------------------------------- Your colocation or MPLS provider may use an optical fiber or Infiniband cable behind the scenes. If you want to connect over the Internet, then you need a VPN.

*You want to deploy your applications in AWS, but you don't want to host them on any servers. Which service would you choose for doing this?* (Choose two.) * Amazon ElastiCache * AWS Lambda * Amazon API Gateway * Amazon EC2

*AWS Lambda* *Amazon API Gateway* ----------------------------------- Amazon ElastiCache is used to deploy Redis or Memcached protocol-compliant server nodes in the cloud, and Amazon EC2 is a server.

*Power User Access allows ________.* * Full Access to all AWS services and resources. * Users to inspect the source code of the AWS platform * Access to all AWS services except the management of groups and users within IAM. * Read Only access to all AWS services and resources.

*Access to all AWS services except the management of groups and users within IAM.*

*If you want to speed up the distribution of your static and dynamic web content such as HTML, CSS, image, and PHP files, which service would you consider?* * Amazon S3 * Amazon EC2 * Amazon Glacier * Amazon CloudFront

*Amazon CloudFront* ----------------------------------- Amazon S3 can be used to store objects; it can't speed up the operations. Amazon EC2 provides the compute. Amazon Glacier is the archive storage.

*If you want to run your relational database in the AWS cloud, which service would you choose?* * Amazon DynamoDB * Amazon Redshift * Amazon RDS * Amazon ElastiCache

*Amazon RDS* ----------------------------------- Amazon DynamoDB is a NoSQL offering, Amazon Redshift is a data warehouse offering, and Amazon ElastiCache is used to deploy Redis or Memcached protocol-compliant server nodes in the cloud.

*A company needs to have its object-based data stored on AWS. The initial size of data would be around 500 GB, with overall growth expected to go into 80TB over the next couple of months. The solution must also be durable. Which of the following would be an ideal storage option to use for such a requirement?* * DynamoDB * Amazon S3 * Amazon Aurora * Amazon Redshift

*Amazon S3* Amazon S3 is object storage built to store and retrieve any amount of data from anywhere - web sites and mobile apps, corporate applications, and data from IoT sensors or devices. It is designed to deliver 99.999999999% durability, and stores data for millions of applications used by market leaders in every industry. S3 provides comprehensive security and compliance capabilities that meet even the most stringent regulatory requirements. It gives customers flexibility in the way they manage data for cost optimization, access control, and compliance. S3 provides query-in-place functionality, allowing you to run powerful analytics directly on your data at rest in S3.

*You want to be notified for any failure happening in the cloud. Which service would you leverage for receiving the notifications?* * Amazon SNS * Amazon SQS * Amazon CloudWatch * AWS Config

*Amazon SNS* ----------------------------------- Amazon SQS is the queue service; Amazon CloudWatch is used to monitor cloud resources; and AWS Config is used to assess, audit, and evaluate the configurations of your AWS resources.

*An application is going to be developed using AWS. The application needs a storage layer to store important documents. Which of the following option is incorrect to fulfill this requirement?* * Amazon S3 * Amazon EBS * Amazon EFS * Amazon Storage Gateway VTL

*Amazon Storage Gateway VTL* A Virtual Tape Library is used to take backups to the cloud, not to act as a primary document store. ----------------------------------- *NOTE:* The question asks which of the options is *incorrect* for storing *important* documents in the cloud, so Option D is the right choice. The requirement is about storing documents, not archiving them, which is why Option D is not suited to it.

*What is Amazon Glacier?* * A highly secure firewall designed to keep everything out. * A tool that allows you to "freeze" an EBS volume. * An AWS service designed for long term data archival. * It is a tool used to resurrect deleted EC2 snapshots.

*An AWS service designed for long term data archival.*

*Amazon's EBS volumes are ________.* * Block based storage * Encrypted by default * Object based storage * Not suitable for databases

*Block based storage* EBS provides block-based storage. EFS and FSx, by contrast, are file-based storage services.

*How can you get visibility of user activity by recording the API calls made to your account?* * By using Amazon API Gateway * By using Amazon CloudWatch * By using AWS CloudTrail * By using Amazon Inspector

*By using AWS CloudTrail* ----------------------------------- Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. Amazon CloudWatch is used to monitor cloud resources. AWS Config is used to assess, audit, and evaluate the configurations of your AWS resources, and Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS.

*How do you integrate AWS with the directories running on-premise in your organization?* * By using AWS Direct Connect * By using a VPN * By using AWS Directory Service * Directly via the Internet

*By using AWS Directory Service* ----------------------------------- AWS Direct Connect and a VPN are used to connect your corporate data center with AWS. You cannot use the Internet directly to integrate directories; you need a service to integrate your on-premise directory to AWS.

*How can you have a shared file system across multiple Amazon EC2 instances?* * By using Amazon S3 * By mounting Elastic Block Storage across multiple Amazon EC2 servers * By using Amazon EFS * By using Amazon Glacier

*By using Amazon EFS* ----------------------------------- Amazon S3 is an object store, Amazon EBS can't be mounted across multiple servers, and Amazon Glacier is an extension of Amazon S3.

*What are the various way you can control access to the data stored in S3?* (Choose all that apply.) * By using IAM policy * By creating ACLs * By encrypting the files in a bucket * By making all the files public * By creating a separate folder for the secure files

*By using IAM policy.* *By creating ACLs* By encrypting the files in the bucket, you can make them secure, but it does not help in controlling the access. By making the files public, you are providing universal access to everyone. Creating a separate folder for secure files won't help because, again, you need to control the access of the separate folder.
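As an illustration of the IAM-policy approach, a minimal boto3 sketch that attaches an inline read-only S3 policy to a single user; the user name and bucket are hypothetical:

    import json

    import boto3

    iam = boto3.client("iam")

    # An identity-based policy that lets one user read objects from a single bucket.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": ["arn:aws:s3:::secure-docs", "arn:aws:s3:::secure-docs/*"],  # hypothetical bucket
        }],
    }
    iam.put_user_policy(
        UserName="analyst",                     # hypothetical user
        PolicyName="secure-docs-read-only",
        PolicyDocument=json.dumps(policy),
    )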

*You work for a health insurance company that amasses a large number of patients' health records. Each record will be used once when assessing a customer, and will then need to be securely stored for a period of 7 years. In some rare cases, you may need to retrieve this data within 24 hours of a claim being lodged. Given these requirements, which type of AWS storage would deliver the least expensive solution?* * Glacier * S3 - IA * S3 - RRS * S3 * S3 - OneZone-IA

*Glacier* The retrieval time is the key decider. The record storage must be safe, durable, and low cost, while recovery can be slow; all of these are characteristics of Glacier.

*You have been asked to advise on a scaling concern. The client has an elegant solution that works well. As the information base grows, they use CloudFormation to spin up another stack made up of an S3 bucket and supporting compute instances. The trigger for creating a new stack is when the PUT rate approaches 100 PUTs per second. The problem is that as the business grows, the number of buckets is growing into the hundreds and will soon be in the thousands. You have been asked what can be done to reduce the number of buckets without changing the basic architecture.* * Change the trigger level to around 3000 as S3 can now accommodate much higher PUT and GET levels. * Refine the key hashing to randomise the name Key to achieve the potential of 300 PUTs per second. * Set up multiple accounts so that the per account hard limit on S3 buckets is avoided. * Upgrade all buckets to S3 provisioned IOPS to achieve better performance.

*Change the trigger level to around 3000 as S3 can now accommodate much higher PUT and GET levels.* Until 2018 there was a practical limit of about 100 PUTs per second on S3, and care needed to be taken with the structure of the key names to ensure parallel processing. As of July 2018 the limit was raised to 3,500 PUTs per second per prefix, and the need for careful key design was essentially eliminated. Disk IOPS is not the issue here, and neither is the per-account bucket limit.

*Which of the following options allows users to have secure access to private files located in S3?* (Choose 3) * CloudFront Signed URLs * CloudFront Origin Access Identity * Public S3 buckets * CloudFront Signed Cookies

*CloudFront Signed URLs* *CloudFront Origin Access Identity* *CloudFront Signed Cookies* There are three options in the question which can be used to secure access to files stored in S3 and therefore can be considered correct. Signed URLs and Signed Cookies are different ways to ensure that users attempting access to files in an S3 bucket can be authorised. One method generates URLs and the other generates special cookies but they both require the creation of an application and policy to generate and control these items. An Origin Access Identity on the other hand, is a virtual user identity that is used to give the CloudFront distribution permission to fetch a private object from an S3 bucket. Public S3 buckets should never be used unless you are using the bucket to host a public website and therefore this is an incorrect option.

*In order to enable encryption at rest using EC2 and Elastic Block Store, you must ________.* * Mount the EBS volume in to S3 and then encrypt the bucket using a bucket policy. * Configure encryption when creating the EBS volume * Configure encryption using the appropriate Operating Systems file system * Configure encryption using X.509 certificates

*Configure encryption when creating the EBS volume* Encryption at rest is a default requirement for many industry compliance certifications. Using AWS managed keys to provide EBS encryption at rest is a relatively painless and reliable way to protect assets and demonstrate your professionalism in any commercial situation.
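A minimal boto3 sketch; the Availability Zone and size are hypothetical, and with no KmsKeyId supplied the default AWS-managed EBS key is used:

    import boto3

    ec2 = boto3.client("ec2")

    # Encryption is chosen at volume-creation time; Encrypted=True uses the
    # default AWS-managed EBS key unless a specific KMS key is supplied.
    ec2.create_volume(
        AvailabilityZone="us-east-1a",          # hypothetical AZ
        Size=50,
        VolumeType="gp3",
        Encrypted=True,
    )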

*You are a developer at a fast growing start up. Until now, you have used the root account to log in to the AWS console. However, as you have taken on more staff, you will now need to stop sharing the root account to prevent accidental damage to your AWS infrastructure. What should you do so that everyone can access the AWS resources they need to do their jobs?* (Choose 2) * Create individual user accounts with minimum necessary rights and tell the staff to log in to the console using the credentials provided. * Create a customized sign in link such as "yourcompany.signin.aws.amazon.com/console" for your new users to use to sign in with. * Create an additional AWS root account for each new user. * Give your users the root account credentials so that they can also sign in.

*Create individual user accounts with minimum necessary rights and tell the staff to log in to the console using the credentials provided.* *Create a customized sign in link such as "yourcompany.signin.aws.amazon.com/console" for your new users to use to sign in with.*

*A company has decided to host a MongoDB database on an EC2 Instance. There is an expectancy of a large number of reads and writes on the database. Which of the following EBS storage types would be the ideal one to implement for the database?* * EBS Provisioned IOPS SSD * EBS Throughput Optimized HDD * EBS General Purpose SSD * EBS Cold HDD

*EBS Provisioned IOPS SSD* Since there is a high performance requirement with high IOPS needed, one needs to opt for EBS Provisioned IOPS SSD.

*Which of the below are compute service from AWS?* * EC2 * S3 * Lambda * VPC

*EC2* *Lambda* Both Lambda and EC2 offer computing in the cloud. ----------------------------------- S3 is a storage offering while VPC is a network service.

*In which of the following is CloudFront content cached?* * Region * Data Center * Edge Location * Availability Zone

*Edge location* CloudFront content is cached in Edge Locations.

*What is the best way to protect a file in Amazon S3 against accidental delete?* * Upload the files in multiple buckets so that you can restore from another when a file is deleted * Back up the files regularly to a different bucket or in a different region * Enable versioning on the S3 bucket * Use MFA for deletion * Use cross-region replication

*Enable versioning on the S3 bucket* You could upload the files to multiple buckets, but the cost increases with every extra copy you store, and you would now have three or four times as many files to manage and map to applications, which does not make sense. Backing up files regularly to a different bucket helps you restore to some extent, but what if you uploaded a new file just after taking the backup? The correct answer is versioning, since enabling versioning keeps every version of a file and lets you restore any version even after the file has been deleted. You can use MFA Delete, but what if, even with MFA, you delete the wrong file? With CRR, if a DELETE request is made without specifying an object version ID, Amazon S3 adds a delete marker, which cross-region replication replicates to the destination bucket. If a DELETE request specifies a particular object version ID to delete, Amazon S3 deletes that object version in the source bucket, but it does not replicate the deletion in the destination bucket.
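Turning versioning on is a one-call change; a boto3 sketch with a hypothetical bucket name:

    import boto3

    s3 = boto3.client("s3")

    # With versioning on, a DELETE only adds a delete marker; earlier versions
    # of the object remain recoverable.
    s3.put_bucket_versioning(
        Bucket="important-files",               # hypothetical bucket
        VersioningConfiguration={"Status": "Enabled"},
    )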

*When creating a new security group, all inbound traffic is allowed by default.* * False * True

*False* There are slight differences between a normal 'new' security group and the 'default' security group in the default VPC. For a 'new' security group, nothing is allowed in by default.

*A new employee has just started work, and it is your job to give her administrator access to the AWS console. You have given her a user name, an access key ID, a secret access key, and you have generated a password for her. She is now able to log in to the AWS console, but she is unable to interact with any AWS services. What should you do next?* * Tell her to log out and try logging back in again. * Ensure she is logging in to the AWS console from your corporate network and not the normal internet. * Grant her Administrator access by adding her to the Administrators' group. * Require multi-factor authentication for her user account.

*Grant her Administrator access by adding her to the Administrators' group.*
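Assuming an "Administrators" group with the AdministratorAccess managed policy already exists, the fix is a single boto3 call; the user and group names are placeholders:

    import boto3

    iam = boto3.client("iam")

    # Group membership is all the new user needs once the group carries the
    # AdministratorAccess policy.
    iam.add_user_to_group(GroupName="Administrators", UserName="new.employee")  # hypothetical names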

*You have uploaded a file to S3. Which HTTP code would indicate that the upload was successful?* * HTTP 200 * HTTP 404 * HTTP 307 * HTTP 501

*HTTP 200*

*Which statement best describes IAM?* * IAM allows you to manage users, groups, roles, and their corresponding level of access to the AWS Platform. * IAM stands for Improvised Application Management, and it allows you to deploy and manage applications in the AWS Cloud. * IAM allows you to manage permissions for AWS resources only. * IAM allows you to manage users' passwords only. AWS staff must create new users for your organization. This is done by raising a ticket.

*IAM allows you to manage users, groups, roles, and their corresponding level of access to the AWS Platform.*

*Which of the following is not a feature of IAM?* * IAM offers fine-grained access control to AWS resources. * IAM allows you to setup biometric authentication, so that no passwords are required. * IAM integrates with existing active directory account allowing single sign-on. * IAM offers centralized control of your AWS account.

*IAM allows you to setup biometric authentication, so that no passwords are required.*

What is an additional way to secure the AWS accounts of both the root account and new users alike? * Configure the AWS Console so that you can only log in to it from a specific IP Address range * Store the access key id and secret access key of all users in a publicly accessible plain text document on S3 of which only you and members of your organization know the address to. * Configure the AWS Console so that you can only log in to it from your internal network IP address range. * Implement Multi-Factor Authentication for all accounts.

*Implement Multi-Factor Authentication for all accounts.*

*What is AWS Storage Gateway?* * It allows large scale import/exports in to the AWS cloud without the use of an internet connection. * None of the above. * It is a physical or virtual appliance that can be used to cache S3 locally at a customer's site. * It allows a direct MPLS connection in to AWS.

*It is a physical or virtual appliance that can be used to cache S3 locally at a customer's site.* At its heart it is a way of using AWS S3 managed storage to supplement on-premise storage. It can also be used within a VPC in a similar way.

*In what language are policy documents written?* * JSON * Python * Java * Node.js

*JSON*

*You are consulting to a mid-sized company with a predominantly Mac & Linux desktop environment. In passing they comment that they have over 30TB of unstructured Word and spreadsheet documents of which 85% of these documents don't get accessed again after about 35 days. They wish that they could find a quick and easy solution to have tiered storage to store these documents in a more cost effective manner without impacting staff access. What options can you offer them?* (Choose 2) * Migrate documents to EFS storage and make use of life-cycle using Infrequent Access storage. * Migrate the document store to S3 storage and make use of life-cycle using Infrequent Access storage. * Migrate documents to File Gateway presented as iSCSI and make use of life-cycle using Infrequent Access storage. * Migrate documents to File Gateway presented as NFS and make use of life-cycle using Infrequent Access storage.

*Migrate documents to EFS storage and make use of life-cycle using Infrequent Access storage.* *Migrate documents to File Gateway presented as NFS and make use of life-cycle using Infrequent Access storage.* ----------------------------------- Trying to use S3 without File Gateway in front would have a major impact on the user environment. Using File Gateway is the recommended way to use S3 with shared document pools. Life-cycle management and Infrequent Access storage are available for both S3 and EFS. A restriction, however, is that 'Using Amazon EFS with Microsoft Windows is not supported'. File Gateway does not support iSCSI on the client side.

*An AWS VPC is a component of which group of AWS services?* * Database Services * Networking Services * Compute Services * Global Infrastructure

*Networking Services* A Virtual Private Cloud (VPC) is a virtual network dedicated to a single AWS account. ----------------------------------- It is logically isolated from other virtual networks in the AWS cloud, providing compute resources with security and robust networking functionality.

*Every user you create in the IAM system starts with ________.* * Full Permissions * No Permissions * Partial Permissions

*No Permissions*

*What is the default level of access a newly created IAM User is granted?* * Administrator access to all AWS services. * Power user access to all AWS services. * No access to any AWS services. * Read only access to all AWS services.

*No access to any AWS services.*

*Can I move a reserved instance from one region to another?* * It depends on the region. * Only in the US. * Yes. * No.

*No* Depending on your type of RI, you can modify the AZ, scope, network platform, or instance size (within the same instance family), but not the Region. In some circumstances you can sell RIs, but only if you have a US bank account.

*Which of the below are factors that have helped make public cloud so powerful?* (Choose 2) * Traditional methods that are used for on-premise infrastructure work just as well in cloud * No special skills required * The ability to try out new ideas and experiment without upfront commitment * Not having to deal with the collateral damage of failed experiments

*The ability to try out new ideas and experiment without upfront commitment.* *Not having to deal with the collateral damage of failed experiments.* Public cloud allows organisations to try out new ideas, new approaches and experiment with little upfront commitment. ----------------------------------- If it doesn't work out, organisations have the ability to terminate the resources and stop paying for them

*You work for a major news network in Europe. They have just released a new mobile app that allows users to post their photos of newsworthy events in real time. Your organization expects this app to grow very quickly, essentially doubling its user base each month. The app uses S3 to store the images, and you are expecting sudden and sizable increases in traffic to S3 when a major news event takes place (as users will be uploading large amounts of content.) You need to keep your storage costs to a minimum, and it does not matter if some objects are lost. With these factors in mind, which storage media should you use to keep costs as low as possible?* * S3 - Provisioned IOPS * S3 - One Zone-Infrequent Access * S3 - Infrequently Accessed Storage * Glacier * S3 - Reduced Redundancy Storage (RRS)

*One Zone-Infrequent Access* The key driver here is cost, so an awareness of cost is necessary to answer this. ----------------------------------- Full S3 is quite expensive at around $0.023 per GB for the lowest band. S3 Standard-IA is $0.0125 per GB, S3 One Zone-IA is $0.01 per GB, and legacy S3-RRS is around $0.024 per GB for the lowest band. Of the offered solutions, S3 One Zone-IA is the cheapest suitable option. Glacier cannot be considered, as it is not intended for direct access; however, it comes in at around $0.004 per GB. Of course you spotted that RRS is being deprecated, and there is no such thing as S3 - Provisioned IOPS. In this case One Zone-IA should be fine, as users will 'post' material but only the organization will access it, and only to find relevant material. The question states that there is no concern if some material is lost.

*Will an Amazon EBS root volume persist independently from the life of the terminated EC2 instance to which it was previously attached? In other words, if I terminated an EC2 instance, would that EBS root volume persist?* * It depends on the region in which the EC2 instance is provisioned. * Yes. * Only if I specify (using either the AWS Console or the CLI) that it should do so. * No.

*Only if I specify (using either the AWS Console or the CLI) that it should do so* *You can control whether an EBS root volume is deleted when its associated instance is terminated.* The default delete-on-termination behaviour depends on whether the volume is a root volume, or an additional volume. By default, the DeleteOnTermination attribute for root volumes is set to 'true.' However, this attribute may be changed at launch by using either the AWS Console or the command line. For an instance that is already running, the DeleteOnTermination attribute must be changed using the CLI.
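For an already running instance, the attribute can be flipped from the CLI or an SDK; a boto3 sketch with a hypothetical instance ID and a root device name that varies by AMI:

    import boto3

    ec2 = boto3.client("ec2")

    # Set DeleteOnTermination to False on the root device so the EBS root
    # volume survives a later terminate of the instance.
    ec2.modify_instance_attribute(
        InstanceId="i-0123456789abcdef0",       # hypothetical instance ID
        BlockDeviceMappings=[{
            "DeviceName": "/dev/xvda",          # root device name depends on the AMI
            "Ebs": {"DeleteOnTermination": False},
        }],
    )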

*Which of the following is not a component of IAM?* * Organizational Units * Groups * Users * Roles

*Organizational Units*

*A __________ is a document that provides a formal statement of one or more permissions.* * Policy * Role * User * Group

*Policy*

*Which of the below are database services from AWS?* (Choose 2) * RDS * S3 * DynamoDB * EC2

*RDS* *DynamoDB* RDS is a service for relational database provided by AWS. DynamoDB is AWS' fast, flexible, no-sql database service. ----------------------------------- S3 provides the ability to store files in the cloud and is not suitable for databases, while EC2 is part of the compute family of services.

*S3 has what consistency model for PUTS of new objects* * Write After Read Consistency * Eventual Consistency * Read After Write Consistency * Usual Consistency

*Read After Write Consistency*

*What is each unique location in the world where AWS has a cluster of data centers called?* * Region * Availability zone * Point of presence * Content delivery network

*Region* ----------------------------------- AZs are inside a region, so they are not unique. POP and content delivery both serve the purpose of speeding up distribution.

*You run a popular photo sharing website that depends on S3 to store content. Paid advertising is your primary source of revenue. However, you have discovered that other websites are linking directly to the images in your buckets, not to the HTML pages that serve the content. This means that people are not seeing the paid advertising, and you are paying AWS unnecessarily to serve content directly from S3. How might you resolve this issue?* * Use security groups to blacklist the IP addresses of the sites that link directly to your S3 bucket. * Remove the ability for images to be served publicly to the site and then use signed URLs with expiry dates. * Use CloudFront to serve the static content. * Use EBS rather than S3 to store the content.

*Remove the ability for images to be served publicly to the site and then use signed URLs with expiry dates.*

You need to know both the private IP address and public IP address of your EC2 instance. You should ________. * Use the following command: AWS EC2 DisplayIP. * Retrieve the instance User Data from http://169.254.169.254/latest/meta-data/. * Retrieve the instance Metadata from http://169.254.169.254/latest/meta-data/. * Run IPCONFIG (Windows) or IFCONFIG (Linux).

*Retrieve the instance Metadata from http://169.254.169.254/latest/meta-data/.* Instance Metadata and User Data can be retrieved from within the instance via a special URL. Similar information can be extracted by using the API via the CLI or an SDK.
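The same metadata paths can be read from code running on the instance; a small Python sketch (IMDSv1 style, so if the instance enforces IMDSv2 a session token header would be needed first):

    import urllib.request

    BASE = "http://169.254.169.254/latest/meta-data/"

    # These paths return plain text and only resolve from inside the instance.
    private_ip = urllib.request.urlopen(BASE + "local-ipv4").read().decode()
    public_ip = urllib.request.urlopen(BASE + "public-ipv4").read().decode()
    print(private_ip, public_ip)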

*You run a meme creation website that stores the original images in S3 and each meme's meta data in DynamoDB. You need to decide upon a low-cost storage option for the memes, themselves. If a meme object is unavailable or lost, a Lambda function will automatically recreate it using the original file from S3 and the metadata from DynamoDB. Which storage solution should you use to store the non-critical, easily reproducible memes in the most cost effective way?* * S3 - 1Zone-IA * S3 - RRS * S3 - IA * Glacier * S3

*S3 - 1Zone-IA* S3 One Zone-IA is the recommended storage class when you want cheaper storage for infrequently accessed objects. It offers the same durability as S3 Standard but lower availability, because objects are stored in a single Availability Zone. There can be cost implications if you access the data frequently or use it for short-lived storage. ----------------------------------- Glacier is cheaper, but has a long retrieval time. RRS has effectively been deprecated; it still exists but is no longer a service that AWS wants to sell.

*Which of the following are a part of AWS' Network and Content Delivery services?* (Choose 2) * VPC * RDS * EC2 * CloudFront

*VPC* *CloudFront* VPC allows you to provision a logically isolated section of AWS where you can launch AWS resources in a virtual network. CloudFront is a fast, highly secure and programmable content delivery network (CDN). ------------------------- EC2 provides compute resources while RDS is Amazon's Relational Database Service.

*What is an Amazon VPC?* * Virtual Private Compute * Virtual Public Compute * Virtual Private Cloud * Virtual Public Cloud

*Virtual Private Cloud* VPC stands for Virtual Private Cloud.

*You work for a busy digital marketing company who currently store their data on premise. They are looking to migrate to AWS S3 and to store their data in buckets. Each bucket will be named after their individual customers, followed by a random series of letters and numbers. Once written to S3 the data is rarely changed, as it has already been sent to the end customer for them to use as they see fit. However on some occasions, customers may need certain files updated quickly, and this may be for work that has been done months or even years ago. You would need to be able to access this data immediately to make changes in that case, but you must also keep your storage costs extremely low. The data is not easily reproducible if lost. Which S3 storage class should you choose to minimise costs and to maximize retrieval times?* * S3 * S3 - IA * S3 - RRS * Glacier * S3 - 1Zone-IA

*S3 - IA* The need for immediate access is an important requirement, along with cost. Glacier offers a long retrieval time at a low cost or a shorter retrieval time at a higher cost, and 1Zone-IA has a lower availability level, which means the data may not be available when needed.

*What are the different storage classes that Amazon S3 offers?* (Choose all that apply.) * S3 Standard * S3 Global * S3 CloudFront * S3 US East * S3 IA

*S3 Standard* *S3 IA* S3 Global is a region and not a storage class. Amazon CloudFront is a CDN and not a storage class. US East is a region and not a storage class.

*Which of the below are storage services in AWS?* (Choose 2) * S3 * EFS * EC2 * VPC

*S3* *EFS* S3 and EFS both provide the ability to store files in the cloud. ------------------------- EC2 provides compute and is often augmented with other storage services. VPC is a networking service.

*In addition to choosing the correct EBS volume type for your specific task, what else can be done to increase the performance of your volume?* (Choose 3) * Schedule snapshots of HDD based volumes for periods of low use * Stripe volumes together in a RAID 0 configuration. * Never use HDD volumes, always ensure that SSDs are used * Ensure that your EC2 instances are types that can be optimised for use with EBS

*Schedule snapshots of HDD based volumes for periods of low use* *Stripe volumes together in a RAID 0 configuration.* *Ensure that your EC2 instances are types that can be optimised for use with EBS* There are a number of ways you can optimise performance beyond choosing the correct EBS type. One of the easiest options is to drive more I/O throughput than you can provision for a single EBS volume by striping volumes together in a RAID 0 configuration. You can join multiple gp2, io1, st1, or sc1 volumes together in RAID 0 to use the combined available bandwidth. You can also choose an EC2 instance type that supports EBS optimisation. This ensures that other network traffic cannot contend with traffic between your instance and your EBS volumes. The final option is to manage your snapshot times, and this only applies to HDD based EBS volumes. When you create a snapshot of a Throughput Optimized HDD (st1) or Cold HDD (sc1) volume, performance may drop as far as the volume's baseline value while the snapshot is in progress. This behaviour is specific to these volume types, so you should ensure that scheduled snapshots are carried out at times of low usage. The one option on the list which is entirely incorrect is "Never use HDD volumes, always ensure that SSDs are used", since the question first states "In addition to choosing the correct EBS volume type for your specific task". HDDs may well be suitable for certain tasks and shouldn't be discounted just because they may not have the highest specification on paper.
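As an illustration of the EBS-optimisation point, a minimal sketch (boto3, with a placeholder AMI ID) of requesting an EBS-optimised instance at launch; the RAID 0 striping itself is done inside the operating system (for example with mdadm), not through the AWS API:

```python
# Minimal sketch: launch an instance with EBS optimisation enabled so EBS
# traffic gets dedicated bandwidth. AMI ID and instance type are placeholders.
import boto3

ec2 = boto3.client("ec2")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI ID
    InstanceType="m5.large",            # many current types are EBS-optimised by default
    MinCount=1,
    MaxCount=1,
    EbsOptimized=True,
)
```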

*You want to move all the files older than a month to S3 IA. What is the best way of doing this?* * Copy all the files using the S3 copy command * Set up a lifecycle rule to move all the files to S3 IA after a month * Download the files after a month and re-upload them to another S3 bucket with IA * Copy all the files to Amazon Glacier and from Amazon Glacier copy them to S3 IA

*Set up a lifecycle rule to move all the files to S3 IA after a month* Copying all the files using the S3 copy command would be a painful activity if you have millions of objects. Manually downloading the files after a month and re-uploading them, when a lifecycle rule can do the same thing automatically, does not make any sense and wastes a lot of bandwidth and manpower. Amazon Glacier is used mainly for archival storage; you should not copy anything into Amazon Glacier unless you want to archive the files.
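A minimal sketch (boto3, with a placeholder bucket name) of such a lifecycle rule, transitioning every object to S3 Standard-IA 30 days after creation:

```python
# Minimal sketch: lifecycle rule that moves all objects to Standard-IA
# after 30 days. Bucket name is a placeholder.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "move-to-ia-after-a-month",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},   # empty prefix = whole bucket
                "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
            }
        ]
    },
)
```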

*You have a client who is considering a move to AWS. In establishing a new account, what is the first thing the company should do?* * Set up an account using Cloud Search. * Set up an account using their company email address. * Set up an account via SQS (Simple Queue Service). * Set up an account via SNS (Simple Notification Service)

*Set up an account using their company email address.*

*What does S3 stand for?* * Simplified Serial Sequence * Simple SQL Service * Simple Storage Service * Straight Storage Service

*Simple Storage Service*

*You have developed a new web application in the US-West-2 Region that requires six Amazon Elastic Compute Cloud (EC2) instances to be running at all times. US-West-2 comprises three Availability Zones (us-west-2a, us-west-2b, and us-west-2c). You need 100 percent fault tolerance: should any single Availability Zone in us-west-2 become unavailable, the application must continue to run. How would you make sure 6 servers are ALWAYS available?* *NOTE: each answer has 2 possible deployment configurations.* (Choose 2) * Solution 1: us-west-2a with two EC2 instances, us-west-2b with two EC2 instances, and us-west-2c with two EC2 instances. Solution 2: us-west-2a with six EC2 instances, us-west-2b with six EC2 instances, and us-west-2c with no EC2 instances. * Solution 1: us-west-2a with three EC2 instances, us-west-2b with three EC2 instances, and us-west-2c with three EC2 instances. Solution 2: us-west-2a with four EC2 instances, us-west-2b with two EC2 instances, and us-west-2c with two EC2 instances. * Solution 1: us-west-2a with six EC2 instances, us-west-2b with six EC2 instances, and us-west-2c with no EC2 instances. Solution 2: us-west-2a with three EC2 instances, us-west-2b with three EC2 instances, and us-west-2c with three EC2 instances. * Solution 1: us-west-2a with three EC2 instances, us-west-2b with three EC2 instances, and us-west-2c with no EC2 instances. Solution 2: us-west-2a with three EC2 instances, us-west-2b with three EC2 instances, and us-west-2c with three EC2 instances.

*Solution 1: us-west-2a with six EC2 instances, us-west-2b with six EC2 instances, and us-west-2c with no EC2 instances.* *Solution 2: us-west-2a with three EC2 instances, us-west-2b with three EC2 instances, and us-west-2c with three EC2 instances.* You need to work through each case to find which will provide you with the required number of running instances even if one AZ is lost. Hint: always assume that the AZ you lose is the one with the most instances. Remember that the client has stipulated that they MUST have 100% fault tolerance. With six instances in each of two AZs, losing either of those AZs still leaves six running; with three instances in each of the three AZs, losing any one AZ also leaves six running. The 2/2/2, 4/2/2, and 3/3/0 layouts all drop below six instances if the AZ holding the most instances fails.

*What is the main purpose of Amazon Glacier?* (Choose all that apply.) * Storing hot, frequently used data * Storing archival data * Storing historical or infrequently accessed data * Storing the static content of a web site * Creating a cross-region replication bucket for Amazon S3

*Storing archival data* *Storing historical or infrequently accessed data* Hot and frequently used data needs to be stored in Amazon S3; you can also use Amazon CloudFront to cache the frequently used data. ----------------------------------- Amazon Glacier is used to store the archive copies of the data or historical data or infrequent data. You can make lifecycle rules to move all the infrequently accessed data to Amazon Glacier. The static content of the web site can be stored in Amazon CloudFront in conjunction with Amazon S3. You can't use Amazon Glacier for a cross-region replication bucket of Amazon S3; however, you can use S3 IA or S3 RRS in addition to S3 Standard as a replication bucket for CRR.

*Amazon S3 provides 99.999999999 percent durability. Which of the following are true statements?* (Choose all that apply.) * The data is mirrored across multiple AZs within a region. * The data is mirrored across multiple regions to provide the durability SLA. * The data in Amazon S3 Standard is designed to handle the concurrent loss of two facilities. * The data is regularly backed up to AWS Snowball to provide the durability SLA. * The data is automatically mirrored to Amazon Glacier to achieve high availability.

*The data is mirrored across multiple AZs within a region.* *The data in Amazon S3 Standard is designed to handle the concurrent loss of two facilities.* Once you have created an S3 bucket in a region, the data always stays in that region unless you manually move it to a different region. Amazon does not back up data residing in S3 anywhere else, since the data is automatically mirrored across multiple facilities. However, customers can replicate the data to a different region for additional safety. AWS Snowball is used to migrate on-premises data to S3. Amazon Glacier is the archival storage of S3, and an automatic mirror of regular Amazon S3 data does not make sense. However, you can write lifecycle rules to move historical data from Amazon S3 to Amazon Glacier.

*I shut down my EC2 instance, and when I started it, I lost all my data. What could be the reason for this?* * The data was stored in the local instance store. * The data was stored in EBS but was not backed up to S3. * I used an HDD-backed EBS volume instead of an SSD-backed EBS volume. * I forgot to take a snapshot of the instance store.

*The data was stored in the local instance store.* The only possible reason is that the data was stored in a local instance store that is not persisted once the server is shut down. If the data stays in EBS, then it does not matter if you have taken the backup or not; the data will always persist. Similarly, it does not matter if it is an HDD- or SSD-backed EBS volume. You can't take a snapshot of the instance store.

*The data across the EBS volume is mirrored across which of the following?* * Multiple AZs * Multiple regions * The same AZ * EFS volumes mounted to EC2 instances

*The same AZ* Data stored in Amazon EBS volumes is redundantly stored in multiple physical locations within the same AZ. Amazon EBS replication happens within a single Availability Zone, not across multiple zones.

*To set up cross-region replication, which statements are true?* (Choose all that apply.) * The source and target buckets should be in the same region. * The source and target buckets should be in different regions. * You must choose different storage classes across different regions. * You need to enable versioning and must have an IAM policy in place to replicate. * You must have at least ten files in a bucket.

*The source and target buckets should be in different regions.* *You need to enable versioning and must have an IAM policy in place to replicate.* Cross-region replication can't be used to replicate objects within the same region. However, you can use the S3 copy command, or copy the files from the console, to move objects from one bucket to another in the same region. You can choose a different storage class for CRR, but this is not mandatory; you can use the same class of storage as the source bucket as well. There is no minimum number of files required to enable cross-region replication. You can even use CRR when there is only one file in an Amazon S3 bucket.
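A minimal sketch (boto3, with placeholder bucket names and role ARN) showing the two prerequisites: versioning enabled on both buckets and a replication configuration that references an IAM role S3 can assume:

```python
# Minimal sketch: enable versioning on both buckets, then configure CRR.
# All bucket names and the role ARN below are placeholders.
import boto3

s3 = boto3.client("s3")

for bucket in ("source-bucket-example", "destination-bucket-example"):
    s3.put_bucket_versioning(
        Bucket=bucket,
        VersioningConfiguration={"Status": "Enabled"},
    )

s3.put_bucket_replication(
    Bucket="source-bucket-example",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-crr-role",  # hypothetical role
        "Rules": [
            {
                "ID": "replicate-everything",
                "Status": "Enabled",
                "Prefix": "",   # empty prefix = replicate the whole bucket
                "Destination": {"Bucket": "arn:aws:s3:::destination-bucket-example"},
            }
        ],
    },
)
```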

*Which of the following provide the lowest cost EBS options?* (Choose 2) * Throughput Optimized (st1) * Provisioned IOPS (io1) * General Purpose (gp2) * Cold (sc1)

*Throughput Optimized (st1)* *Cold (sc1)* Of all the EBS types, both current and previous generation, HDD based volumes will always be less expensive than SSD types. Therefore, of the options available in the question, the Throughput Optimized (st1) and Cold (sc1) types are HDD based and will be the lowest cost options.

*I can change the permissions to a role, even if that role is already assigned to an existing EC2 instance, and these changes will take effect immediately.* * False * True

*True*

*Using SAML (Security Assertion Markup Language 2.0), you can give your federated users single sign-on (SSO) access to the AWS Management Console.* * True * False

*True*

*You can add multiple volumes to an EC2 instance and then create your own RAID 5/RAID 10/RAID 0 configurations using those volumes.* * True * False

*True*

*How much data can you store on S3?* * 1 petabyte per account * 1 exabyte per account * 1 petabyte per region * 1 exabyte per region * Unlimited

*Unlimited* Since the capacity of S3 is unlimited, you can store as much data as you want there.

*You have been tasked with moving petabytes of data to the AWS cloud. What is the most efficient way of doing this?* * Upload them to Amazon S3 * Use AWS Snowball * Use AWS Server Migration Service * Use AWS Database Migration Service

*Use AWS Snowball* ----------------------------------- You could also upload the data directly to Amazon S3, but with petabytes of data that is going to take a lot of time; the quickest way is to leverage AWS Snowball. AWS Server Migration Service is an agentless service that helps coordinate, automate, schedule, and track large-scale server migrations, whereas AWS Database Migration Service is used to migrate the data of a relational database or data warehouse.

*What is the best way to get better performance for storing several files in S3?* * Create a separate folder for each file * Create separate buckets in different regions * Use a partitioning strategy for storing the files * Store a maximum of 100 files per bucket

*Use a partitioning strategy for storing the files* Creating a separate folder for each file does not improve performance. What if you need to store millions of files in these separate folders? Similarly, creating separate buckets in a different region does not improve performance, and there is no such rule as a maximum of 100 files per bucket.

*What is the best way to delete multiple objects from S3?* * Delete the files manually using a console * Use multi-object delete * Create a policy to delete multiple files * Delete all the S3 buckets to delete the files

*Use multi-object delete* Manually deleting the files from the console is going to take a lot of time. You can't create a policy to delete multiple files. Deleting buckets in order to delete files is not a recommended option. What if you need some files from the bucket?
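A minimal sketch (boto3, with placeholder bucket and key names) of a multi-object delete, which removes up to 1,000 keys in a single request:

```python
# Minimal sketch: delete several objects with one multi-object delete call.
# Bucket and key names are placeholders.
import boto3

s3 = boto3.client("s3")

keys_to_delete = ["logs/2019/01.log", "logs/2019/02.log", "logs/2019/03.log"]

s3.delete_objects(
    Bucket="example-bucket",
    Delete={"Objects": [{"Key": key} for key in keys_to_delete]},
)
```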

*When you create a new user, that user ________.* * Will only be able to log in to the console in the region in which that user was created. * Will be able to interact with AWS using their access key ID and secret access key using the API, CLI, or the AWS SDKs. * Will be able to log in to the console only after multi-factor authentication is enabled on their account. * Will be able to log in to the console anywhere in the world, using their access key ID and secret access key.

*Will be able to interact with AWS using their access key ID and secret access key using the API, CLI, or the AWS SDKs.* ----------------------------------- To access the console you use an account and password combination. To access AWS programmatically you use an access key and secret access key combination.

*What is the underlying Hypervisor for EC2?* (Choose 2) * Xen * Nitro * Hyper-V * ESX * OVM

*Xen* *Nitro* Until recently, AWS exclusively used the Xen hypervisor; it has since started making use of the Nitro hypervisor for newer instance types.

*Can a placement group be deployed across multiple Availability Zones?* * Yes. * Only in Us-East-1. * No. * Yes, but only using the AWS API.

*Yes* Technically they are called Spread or Partition placement groups. Now you can have placement groups across different hardware and multiple AZs.

*Is it possible to perform actions on an existing Amazon EBS Snapshot?* * It depends on the region. * EBS does not have snapshot functionality. * Yes, through the AWS APIs, CLI, and AWS Console. * No.

*Yes, through the AWS APIs, CLI, and AWS Console.*

*You are a security administrator working for a hotel chain. You have a new member of staff who has started as a systems administrator, and she will need full access to the AWS console. You have created the user account and generated the access key id and the secret access key. You have moved this user into the group where the other administrators are, and you have provided the new user with their secret access key and their access key id. However, when she tries to log in to the AWS console, she cannot. Why might that be?* * Your user is trying to log in from the AWS console from outside the corporate network. This is not possible. * You have not yet activated multi-factor authentication for the user, so by default they will not be able to log in. * You cannot log in to the AWS console using the Access Key ID / Secret Access Key pair. Instead, you must generate a password for the user, and supply the user with this password and your organization's unique AWS console login URL. * You have not applied the "log in from console" policy document to the user. You must apply this first so that they can log in.

*You cannot log in to the AWS console using the Access Key ID / Secret Access Key pair. Instead, you must generate a password for the user, and supply the user with this password and your organization's unique AWS console login URL.*
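A minimal sketch (boto3, with a hypothetical user name and password) of creating that console password, known as a login profile; the sign-in URL takes the form https://<account-id-or-alias>.signin.aws.amazon.com/console:

```python
# Minimal sketch: console access needs a password (a "login profile"),
# not an access key pair. User name and password are placeholders.
import boto3

iam = boto3.client("iam")

iam.create_login_profile(
    UserName="new-sysadmin",
    Password="TemporaryP@ssw0rd!",   # hypothetical; user must change it at first login
    PasswordResetRequired=True,
)
```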

*You are a solutions architect working for a large engineering company who are moving from a legacy infrastructure to AWS. You have configured the company's first AWS account and you have set up IAM. Your company is based in Andorra, but there will be a small subsidiary operating out of South Korea, so that office will need its own AWS environment. Which of the following statements is true?* * You will then need to configure Users and Policy Documents for each region respectively. * You will need to configure your users regionally, however your policy documents are global. * You will need to configure your policy documents regionally, however your users are global. * You will need to configure Users and Policy Documents only once, as these are applied globally.

*You will need to configure Users and Policy Documents only once, as these are applied globally.*

*The use of a cluster placement group is ideal _______* * When you need to distribute content on a CDN network. * Your fleet of EC2 instances requires high network throughput and low latency within a single availability zone. * When you need to deploy EC2 instances that require high disk IO. * Your fleet of EC2 Instances requires low latency and high network throughput across multiple availability zones.

*Your fleet of EC2 instances requires high network throughput and low latency within a single availability zone.* Cluster Placement Groups are primarily about keeping your compute resources within one network hop of each other on high-speed rack switches. This is only helpful for workloads whose network traffic is either very heavy or very sensitive to latency.

*Which AWS CLI command should I use to create a snapshot of an EBS volume?* * aws ec2 deploy-snapshot * aws ec2 create-snapshot * aws ec2 new-snapshot * aws ec2 fresh-snapshot

*aws ec2 create-snapshot*
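The SDK equivalent, as a minimal sketch (boto3, with a placeholder volume ID):

```python
# Minimal sketch: the boto3 counterpart of `aws ec2 create-snapshot`.
# The volume ID is a placeholder.
import boto3

ec2 = boto3.client("ec2")

snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="Nightly backup example",
)
print(snapshot["SnapshotId"])
```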

*The difference between S3 and EBS is that EBS is object based whereas S3 is block based.* * true * false

*false*

*To retrieve instance metadata or user data you will need to use the following IP Address:* * http://169.254.169.254 * http://10.0.0.1 * http://127.0.0.1 * http://192.168.0.254

*http://169.254.169.254*

*You have been asked by your company to create an S3 bucket with the name "acloudguru1234" in the EU West region. What would the URL for this bucket be?* * https://s3-eu-west-1.amazonaws.com/acloudguru1234 * https://s3.acloudguru1234.amazonaws.com/eu-west-1 * https://s3-us-east-1.amazonaws.com/acloudguru1234 * https://s3-acloudguru1234.amazonaws.com/

*https://s3-eu-west-1.amazonaws.com/acloudguru1234*

*S3 has eventual consistency for which HTTP Methods?* * UPDATES and DELETES * overwrite PUTS and DELETES * PUTS of new Objects and DELETES * PUTS of new objects and UPDATES

*overwrite PUTS and DELETES*

