Solutions Architect - Associates - BH

Disabling automated backups disables the point-in-time recovery feature. A. True B. False

A. True

*If you want to speed up the distribution of your static and dynamic web content such as HTML, CSS, image, and PHP files, which service would you consider?* * Amazon S3 * Amazon EC2 * Amazon Glacier * Amazon CloudFront

*Amazon CloudFront* ----------------------------------- Amazon S3 can be used to store objects; it can't speed up the operations. Amazon EC2 provides the compute. Amazon Glacier is the archive storage.

*In order to enable encryption at rest using EC2 and Elastic Block Store, you must ________.* * Mount the EBS volume into S3 and then encrypt the bucket using a bucket policy. * Configure encryption when creating the EBS volume * Configure encryption using the appropriate Operating System's file system * Configure encryption using X.509 certificates

*Configure encryption when creating the EBS volume* The use of encryption at rest is a default requirement for many industry compliance certifications. Using AWS managed keys to provide EBS encryption at rest is a relatively painless and reliable way to protect assets and demonstrate your professionalism in any commercial situation.
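
For illustration, a minimal boto3 sketch of creating an encrypted EBS volume; the region, Availability Zone, and size are assumptions for the example:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

# Encryption must be requested at creation time; omitting KmsKeyId falls
# back to the default AWS-managed EBS key (aws/ebs).
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",  # hypothetical AZ
    Size=100,                       # GiB, hypothetical size
    VolumeType="gp2",
    Encrypted=True,
)
print(volume["VolumeId"], "Encrypted:", volume["Encrypted"])
```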

*Which of the following are true statements?* (Choose two.) * ELB can distribute traffic across multiple regions. * ELB can distribute across multiple AZs but not across multiple regions. * ELB can distribute across multiple AZs. * ELB can distribute traffic across multiple regions but not across multiple AZs.

*ELB can distribute across multiple AZs but not across multiple regions.* *ELB can distribute across multiple AZs.* ELB can span multiple AZs within a region. It cannot span multiple regions.

*In which of the following is CloudFront content cached?* * Region * Data Center * Edge Location * Availability Zone

*Edge location* CloudFront content is cached in Edge Locations.

*You have uploaded a file to S3. Which HTTP code would indicate that the upload was successful?* * HTTP 200 * HTTP 404 * HTTP 307 * HTTP 501

*HTTP 200*
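
For illustration, a minimal boto3 sketch that uploads an object and checks the HTTP status code of the response; the bucket and key names are hypothetical:

```python
import boto3

s3 = boto3.client("s3")

# Upload an object; a successful PUT returns HTTP 200.
response = s3.put_object(
    Bucket="example-bucket",  # hypothetical bucket
    Key="hello.txt",
    Body=b"hello, world",
)
status = response["ResponseMetadata"]["HTTPStatusCode"]
assert status == 200, f"upload failed with HTTP {status}"
```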

EBS Snapshots are backed up to S3 in what manner? * Exponentially * Decreasingly * EBS snapshots are not stored on S3. * Incrementally

*Incrementally*

*To connect your corporate data center to AWS, you need at least which of the following components?* (Choose two.) * Internet gateway * Virtual private gateway * NAT gateway * Customer gateway

*Virtual private gateway* *Customer gateway* To connect to AWS from your data center, you need a customer gateway, which is the customer side of the connection, and a virtual private gateway, which is the AWS side of the connection. An Internet gateway is used to connect a VPC with the Internet, whereas a NAT gateway allows servers running in a private subnet to connect to the Internet.

*Which of the following AWS services were introduced at re:Invent 2016?* (Choose two.) * Lex * Dax * Molly * Polly

*Lex* *Polly* Amazon Lex is a service for building conversational interfaces using voice and text. Polly is a service that turns text into lifelike speech. AWS exams will test your ability to identify real vs. imaginary services. This question can be answered based on your knowledge of common services and requires no knowledge of re:Invent announcements.

*Which of the following is not a component of IAM?* * Organizational Units * Groups * Users * Roles

*Organizational Units*

*A __________ is a document that provides a formal statement of one or more permissions.* * Policy * Role * User * Group

*Policy*
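
To make the idea concrete, here is a sketch of creating such a policy document with boto3; the policy name and the S3 permission it grants are made up for the example:

```python
import json
import boto3

iam = boto3.client("iam")

# An IAM policy is a JSON document containing one or more permission statements.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-bucket/*",  # hypothetical bucket
        }
    ],
}

iam.create_policy(
    PolicyName="ExampleReadOnlyPolicy",  # hypothetical name
    PolicyDocument=json.dumps(policy_document),
)
```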

*The listener within a load balancer needs two details in order to listen to incoming traffic. What are they?* (Choose two.) * Type of the operating system * Port number * Protocol * IP address

*Port number* *Protocol* Listeners define the protocol and port on which the load balancer listens for incoming connections.
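
For illustration, a minimal boto3 sketch of creating a listener, showing that the protocol and port (plus a default action) are what define it; the ARNs are placeholders:

```python
import boto3

elbv2 = boto3.client("elbv2")

# A listener is defined by its protocol and port, plus a default action.
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012"
                    ":loadbalancer/app/example-alb/1234567890abcdef",  # placeholder
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012"
                          ":targetgroup/example-tg/fedcba0987654321",  # placeholder
    }],
)
```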

*You need to know both the private IP address and public IP address of your EC2 instance. You should ________.* * Run IPCONFIG (Windows) or IFCONFIG (Linux). * Retrieve the instance User Data from http://169.254.169.254/latest/meta-data/. * Retrieve the instance Metadata from http://169.254.169.254/latest/meta-data/. * Use the following command: AWS EC2 DisplayIP.

*Retrieve the instance Metadata from http://169.254.169.254/latest/meta-data/* Instance Metadata and User Data can be retrieved from within the instance via a special URL. Similar information can be extracted by using the API via the CLI or an SDK.
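
For illustration, a small Python sketch (standard library only) that retrieves both addresses from the metadata URL; it only works from within the instance, and the IMDSv2 token step shown is required on instances configured for token-based metadata access:

```python
import urllib.request

BASE = "http://169.254.169.254/latest"

# IMDSv2: request a short-lived session token first.
token_req = urllib.request.Request(
    f"{BASE}/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(token_req).read().decode()

# Fetch the private and public IPv4 addresses from the metadata tree.
for item in ("local-ipv4", "public-ipv4"):
    req = urllib.request.Request(
        f"{BASE}/meta-data/{item}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    print(item, "=", urllib.request.urlopen(req).read().decode())
```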

*I shut down my EC2 instance, and when I started it, I lost all my data. What could be the reason for this?* * The data was stored in the local instance store. * The data was stored in EBS but was not backed up to S3. * I used an HDD-backed EBS volume instead of an SSD-backed EBS volume. * I forgot to take a snapshot of the instance store.

*The data was stored in the local instance store.* The only possible reason is that the data was stored in a local instance store that is not persisted once the server is shut down. If the data stays in EBS, then it does not matter if you have taken the backup or not; the data will always persist. Similarly, it does not matter if it is an HDD- or SSD-backed EBS volume. You can't take a snapshot of the instance store.

*You're running a mission-critical application, and you are hosting the database for that application in RDS. Your IT team needs to access all the critical OS metrics every five seconds. What approach would you choose?* * Write a script to capture all the key metrics and schedule the script to run every five seconds using a cron job * Schedule a job every five seconds to capture the OS metrics * Use standard monitoring * Use advanced monitoring

*Use advanced monitoring* In RDS, you don't have access to the OS, so you can't run a cron job, and you can't capture OS metrics by running a database job. Standard monitoring provides metrics at one-minute granularity at best; only advanced (enhanced) monitoring can deliver OS metrics every five seconds.
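
For illustration, a boto3 sketch enabling Enhanced Monitoring at a five-second interval on an existing instance; the instance identifier and monitoring role ARN are placeholders:

```python
import boto3

rds = boto3.client("rds")

# Enhanced Monitoring streams OS-level metrics; MonitoringInterval=5
# requests five-second granularity.
rds.modify_db_instance(
    DBInstanceIdentifier="mydb",  # hypothetical instance
    MonitoringInterval=5,
    MonitoringRoleArn="arn:aws:iam::123456789012:role/rds-monitoring-role",  # placeholder
    ApplyImmediately=True,
)
```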

14. Which of the following can be accomplished through bootstrapping? A. Install the most current security updates. B. Install the current version of the application. C. Configure Operating System (OS) services. D. All of the above.

14. D. Bootstrapping runs the provided script, so anything you can accomplish in a script you can accomplish during bootstrapping.
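
For illustration, a boto3 sketch that bootstraps an instance via a user-data script covering all three tasks from the question; the AMI ID and instance type are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# The user-data script runs at first boot: patch, install, configure services.
user_data = """#!/bin/bash
yum update -y                 # install current security updates
yum install -y httpd          # install the application (a web server here)
systemctl enable --now httpd  # configure and start the OS service
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t3.micro",          # hypothetical type
    MinCount=1,
    MaxCount=1,
    UserData=user_data,               # boto3 base64-encodes this for you
)
```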

How is data stored in Amazon Simple Storage Service (Amazon S3) for high durability? 1. Data is automatically replicated to other regions. 2. Data is automatically replicated within a region. 3. Data is replicated only if versioning is enabled on the bucket. 4. Data is automatically backed up on tape and restored if needed.

B. Data is automatically replicated within a region. Replication to other regions and versioning are optional. Amazon S3 data is not backed up to tape.

Which of the following cache engines are supported by Amazon ElastiCache? (Choose 2 answers) 1. MySQL 2. Memcached 3. Redis 4. Couchbase

B, C. Amazon ElastiCache supports Memcached and Redis cache engines. MySQL is not a cache engine, and Couchbase is not supported.

2. Which AWS database service is best suited for non-relational databases? A. Amazon Redshift B. Amazon Relational Database Service (Amazon RDS) C. Amazon Glacier D. Amazon DynamoDB

2. D. Amazon DynamoDB is best suited for non-relational databases. Amazon RDS and Amazon Redshift are both structured relational databases.

What is the format of an IAM policy? 1. XML 2. Key/value pairs 3. JSON 4. Tab-delimited text

C. An IAM policy is a JSON document.

3. How many nodes can you add to an Amazon ElastiCache cluster running Memcached? A. 1 B. 5 C. 20 D. 100

3. C. The default limit is 20 nodes per cluster.

Which DNS record must all zones have by default? 1. SPF 2. TXT 3. MX 4. SOA

D. The start of a zone is defined by the SOA record; therefore, all zones must have an SOA record by default.

4. What is the maximum size IP address range that you can have in an Amazon VPC? A. /16 B. /24 C. /28 D. /30

4. A. The maximum size IP address range (CIDR block) that you can have in a VPC is /16.

8. In Amazon Simple Workflow Service (Amazon SWF), which of the following are actors? (Choose 3 answers) A. Activity workers B. Workflow starters C. Deciders D. Activity tasks

8. A, B, C. In Amazon SWF, actors can be activity workers, workflow starters, or deciders.

9. What aspect of an Amazon VPC is stateful? A. Network ACLs B. Security groups C. Amazon DynamoDB D. Amazon S3

9. B. Security groups are stateful, whereas network ACLs are stateless.

Amazon RDS DB snapshots and automated backups are stored in A. Amazon S3 B. Amazon ECS Volume C. Amazon RDS D. Amazon EMR

A. Amazon S3

How can I change the security group membership for interfaces owned by other AWS services, such as Elastic Load Balancing? A. By using the service-specific console or API/CLI commands B. None of these C. Using Amazon EC2 API/CLI D. Using all these methods

A. By using the service-specific console or API/CLI commands

What is an isolated database environment running in the cloud (Amazon RDS) called? A. DB Instance B. DB Unit C. DB Server D. DB Volume

A. DB Instance

How are the EBS snapshots saved on Amazon S3? A. Exponentially B. Incrementally C. EBS snapshots are not stored in the Amazon S3 D. Decrementally

B. Incrementally

Fill in the blanks: "To ensure failover capabilities, consider using a _____ for incoming traffic on a network interface". A. primary public IP B. secondary private IP C. secondary public IP D. add on secondary IP

B. secondary private IP

In a management network scenario, which interface on the instance handles public-facing traffic? A. Primary network interface B. Subnet interface C. Secondary network interface

C. Secondary network interface

You must assign each server to at least _____ security group A. 3 B. 2 C. 4 D. 1

D. 1

Please select the Amazon EC2 resource which can be tagged. A. Key pairs B. Elastic IP addresses C. Placement groups D. EBS snapshots

D. EBS snapshots

*How do you integrate AWS with the directories running on-premise in your organization?* * By using AWS Direct Connect * By using a VPN * By using AWS Directory Service * Directly via the Internet

*By using AWS Directory Service* ----------------------------------- AWS Direct Connect and a VPN are used to connect your corporate data center with AWS. You cannot use the Internet directly to integrate directories; you need a service to integrate your on-premise directory to AWS.

*How much data can you store on S3?* * 1 petabyte per account * 1 exabyte per account * 1 petabyte per region * 1 exabyte per region * Unlimited

*Unlimited* Since the capacity of S3 is unlimited, you can store as much data as you want there.

3. How long does Amazon CloudWatch keep metric data? A. 1 day B. 2 days C. 1 week D. 2 weeks

3. D. Amazon CloudWatch metric data is kept for 2 weeks.

*MySQL installations default to port number ________.* * 3306 * 1433 * 3389 * 80

3306

*In RDS, what is the maximum value I can set for my backup retention period?* * 35 Days * 15 Days * 45 Days * 30 Days

35 Days
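
For illustration, a boto3 sketch setting the retention period to its maximum; the instance identifier is a placeholder:

```python
import boto3

rds = boto3.client("rds")

# BackupRetentionPeriod accepts 0-35 days; 35 is the maximum.
rds.modify_db_instance(
    DBInstanceIdentifier="mydb",  # hypothetical instance
    BackupRetentionPeriod=35,
    ApplyImmediately=True,
)
```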

Which of the following is not a supported Amazon Simple Notification Service (Amazon SNS) protocol? 1. HTTPS 2. AWS Lambda 3. Email-JSON 4. Amazon DynamoDB

D. Amazon DynamoDB is not a supported Amazon SNS protocol.
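
For illustration, a boto3 sketch subscribing endpoints using a couple of the supported protocols; the topic and endpoint values are placeholders:

```python
import boto3

sns = boto3.client("sns")
topic_arn = "arn:aws:sns:us-east-1:123456789012:example-topic"  # placeholder

# Supported protocols include https, email-json, sqs, and lambda;
# DynamoDB is not among them.
sns.subscribe(TopicArn=topic_arn, Protocol="email-json",
              Endpoint="ops@example.com")  # hypothetical address
sns.subscribe(TopicArn=topic_arn, Protocol="sqs",
              Endpoint="arn:aws:sqs:us-east-1:123456789012:example-queue")
```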

*You can RDP or SSH in to an RDS instance to see what is going on with the operating system.* * True * False

False

10. Which cache engines does Amazon ElastiCache support? (Choose 2 answers) A. Memcached B. Redis C. Membase D. Couchbase

10. A, B. Amazon ElastiCache supports both Memcached and Redis. You can run self-managed installations of Membase and Couchbase using Amazon Elastic Compute Cloud (Amazon EC2).

You would like to create a mirror image of your production environment in another region for disaster recovery purposes. Which of the following AWS resources do not need to be recreated in the second region? (Choose two.) A. Route 53 Record Sets B. IAM Roles C. Elastic IP Addresses (EIP) D. EC2 Key Pairs E. Launch configurations F. Security Groups

A. Route 53 Record Sets B. IAM Roles

While performing volume status checks, if the status is 'insufficient-data', what does it mean? A. checks may still be in progress on the volume B. check has passed C. check has failed D. there is no such status

A. checks may still be in progress on the volume

Amazon RDS creates an SSL certificate and installs the certificate on the DB Instance when Amazon RDS provisions the instance. These certificates are signed by a certificate authority. The _____ is stored at https://rds.amazonaws.com/doc/rds-ssl-ca-cert.pem. A. private key B. foreign key C. public key D. protected key

C. public key

You are developing a highly available web application using stateless web servers. Which services are suitable for storing session state data? Choose 3 answers A. Amazon CloudWatch B. Amazon Relational Database Service (RDS) C. Elastic Load Balancing D. Amazon ElastiCache E. AWS Storage Gateway F. Amazon DynamoDB

B. Amazon Relational Database Service (RDS) D. Amazon ElastiCache F. Amazon DynamoDB

If I scale the storage capacity provisioned to my DB Instance mid-way through a billing month, how will I be charged? A. you will be charged for the highest storage capacity you have used B. on a proration basis C. you will be charged for the lowest storage capacity you have used

B. on a proration basis

What does a "Domain" refer to in Amazon SWF? A. A security group in which only tasks inside can communicate with each other B. A special type of worker C. A collection of related Workflows D. The DNS record for the Amazon SWF service

C. A collection of related Workflows

Is there any way to own a direct connection to Amazon Web Services? A. You can create an encrypted tunnel to VPC, but you don't own the connection. B. Yes, it's called Amazon Dedicated Connection. C. No, AWS only allows access from the public Internet. D. Yes, it's called Direct Connect

D. Yes, it's called Direct Connect

What will be the status of a snapshot until the snapshot is complete? A. running B. working C. progressing D. pending

D. pending

What is the default VPC security group limit? A. 500 B. 50 C. 5 D. There is no limit

A. 500

*With new RDS DB instances, automated backups are enabled by default?* * True * False

True

*Which AWS DB platform is most suitable for OLTP?* * ElastiCache * DynamoDB * RDS * Redshift

RDS

*How many S3 buckets can I have per account by default?* * 20 * 10 * 50 * 100

*100*

1. What origin servers are supported by Amazon CloudFront? (Choose 3 answers) A. An Amazon Route 53 Hosted Zone B. An Amazon Simple Storage Service (Amazon S3) bucket C. An HTTP server running on Amazon Elastic Compute Cloud (Amazon EC2) D. An Amazon EC2 Auto Scaling Group E. An HTTP server running on-premises

1. B, C, E. Amazon CloudFront can use an Amazon S3 bucket or any HTTP server, whether or not it is running in Amazon EC2. A Route 53 Hosted Zone is a set of DNS resource records, while an Auto Scaling Group launches or terminates Amazon EC2 instances automatically. Neither can be specified as an origin server for a distribution.

5. How many access keys may an AWS Identity and Access Management (IAM) user have active at one time? A. 0 B. 1 C. 2 D. 3

5. C. IAM permits users to have no more than two active access keys at one time.

You receive a Spot Instance at a bid of $0.05/hr. After 30 minutes, the Spot Price increases to $0.06/hr and your Spot Instance is terminated by AWS. What was the total EC2 compute cost of running your Spot Instance? A. $0.00 B. $0.02 C. $0.03 D. $0.05 E. $0.06

A. $0.00

Can a 'user' be associated with multiple AWS accounts? A. No B. Yes

A. No

*You are hosting a MySQL database on the root volume of an EC2 instance. The database is using a large number of IOPS, and you need to increase the number of IOPS available to it. What should you do?* * Migrate the database to an S3 bucket. * Use Cloud Front to cache the database. * Add 4 additional EBS SSD volumes and create a RAID 10 using these volumes. * Migrate the database to Glacier.

Add 4 additional EBS SSD volumes and create a RAID 10 using these volumes.

*Which of the following data formats does Amazon Athena support?* (Choose 3) * Apache ORC * JSON * XML * Apache Parquet

*Apache ORC, JSON, Apache Parquet* Amazon Athena is an interactive query service that makes it easy to analyse data in Amazon S3 using standard SQL commands. It works with a number of data formats, including JSON, Apache Parquet, and Apache ORC, amongst others, but XML is not a supported format.

Amazon S3 provides: A. Unlimited File Size for Objects B. Unlimited Storage C. A great place to run a No SQL database from D. The ability to act as a web server for dynamic content (i.e. can query a database)

B. Unlimited Storage

After an Amazon VPC instance is launched, can I change the VPC security groups it belongs to? A. No. You cannot. B. Yes. You can. C. Only if you are the root user D. Only if the tag "VPC_Change_Group" is true

B. Yes. You can.

Does AWS Direct Connect allow you access to all Availability Zones within a Region? A. Depends on the type of connection B. No C. Yes D. Only when there's just one availability zone in a region. If there are more than one, only one availability zone can be accessed directly.

C. Yes

Amazon RDS supports SOAP only through _____. A. HTTP or HTTPS B. TCP/IP C. HTTP D. HTTPS

D. HTTPS

What is the maximum response time for a Business level Premium Support case? A. 30 minutes B. You always get instant responses (within a few seconds). C. 10 minutes D. 1 hour

D. 1 hour

*AWS's NoSQL product offering is known as ________.* * MongoDB * RDS * MySQL * DynamoDB

DynamoDB

*You have a huge amount of data to be ingested. You don't have a very stringent SLA for it. Which product should you use?* * Kinesis Data Streams * Kinesis Data Firehose * Kinesis Data Analytics * S3

*Kinesis Data Firehose* Kinesis Data Streams is used for ingesting real-time data, and Kinesis Data Analytics is used for transformation. S3 is used to store the data.
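
For illustration, a boto3 sketch pushing a record into Firehose, which buffers and delivers it to the configured destination (for example S3) without a stringent latency SLA; the stream name and payload are made up:

```python
import json
import boto3

firehose = boto3.client("firehose")

# Firehose buffers records and delivers them in batches to the
# configured destination (e.g., S3), trading latency for simplicity.
firehose.put_record(
    DeliveryStreamName="example-ingest-stream",  # hypothetical stream
    Record={"Data": (json.dumps({"sensor": "a1", "value": 42}) + "\n").encode()},
)
```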

Will my standby RDS instance be in the same Availability Zone as my primary? A. Only for Oracle RDS types B. Yes C. Only if configured at launch D. No

D. No

CloudFront key pairs can be created only by the root account and cannot be created by IAM users. T/F

True

REST or Query requests are HTTP or HTTPS requests that use an HTTP verb (such as GET or POST) and a parameter named Action or Operation that specifies the API you are calling. T/F

True

Amazon EC2 has no Amazon Resource Names (ARNs) because you can't specify a particular Amazon EC2 resource in an IAM policy.

True. You use the instance ID to specify an EC2 instance.

*What is an Amazon VPC?* * Virtual Private Compute * Virtual Public Compute * Virtual Private Cloud * Virtual Public Cloud

*Virtual Private Cloud* VPC stands for Virtual Private Cloud.

*Which AWS CLI command should I use to create a snapshot of an EBS volume?* * aws ec2 deploy-snapshot * aws ec2 create-snapshot * aws ec2 new-snapshot * aws ec2 fresh-snapshot

*aws ec2 create-snapshot*
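
The boto3 equivalent of that CLI command, sketched with a placeholder volume ID:

```python
import boto3

ec2 = boto3.client("ec2")

# Equivalent to: aws ec2 create-snapshot --volume-id vol-... --description ...
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",  # placeholder volume
    Description="Nightly backup",
)
print(snapshot["SnapshotId"], snapshot["State"])  # State starts as 'pending'
```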

Will I be alerted when automatic failover occurs? A. Only if SNS configured B. No C. Yes D. Only if Cloudwatch configured

A. Only if SNS configured

Which of the following are true regarding encrypted Amazon Elastic Block Store (EBS) volumes? (Choose two.) A. Supported on all Amazon EBS volume types B. Snapshots are automatically encrypted C. Available to all instance types D. Existing volumes can be encrypted E. shared volumes can be encrypted

A. Supported on all Amazon EBS volume types B. Snapshots are automatically encrypted

A/An _____ acts as a firewall that controls the traffic allowed to reach one or more instances. A. security group B. ACL C. IAM D. Private IP Addresses

A. security group

You can have 1 subnet stretched across multiple availability zones. A. True B. False

B. False

Which of the following is NOT a valid SNS subscriber? A. Lambda B. SWF C. SQS D. Email E. HTTPS F. SMS

B. SWF

Which is the default region in AWS? A. eu-west-1 B. us-east-1 C. us-east-2 D. ap-southeast-1

B. us-east-1

Amazon Glacier is designed for: Choose 2 answers A. Frequently accessed data B. Active database storage C. Data archives D. Infrequently accessed data E. Cached session data

C. Data archives D. Infrequently accessed data

You are working with a customer who has 10 TB of archival data that they want to migrate to Amazon Glacier. The customer has a 1-Mbps connection to the Internet. Which service or feature provides the fastest method of getting the data into Amazon Glacier? A. Amazon Glacier multipart upload B. AWS Storage Gateway C. VM Import/Export D. AWS Import/Export

D. AWS Import/Export

While creating an EC2 snapshot using the API, which Action should I be using? A. MakeSnapShot B. FreshSnapshot C. DeploySnapshot D. CreateSnapshot

D. CreateSnapshot

Can I delete a snapshot of the root device of an EBS volume used by a registered AMI? A. Only via API B. Only via Console C. Yes D. No

D. No

Is creating a Read Replica of another Read Replica supported? A. Only in VPC B. Yes C. Only in certain regions D. No

D. No

What does specifying the mapping /dev/sdc=none do when launching an EC2 instance? A. Prevents /dev/sdc from creating the instance. B. Prevents /dev/sdc from deleting the instance. C. Set the value of /dev/sdc to 'zero'. D. Prevents /dev/sdc from attaching to the instance.

D. Prevents /dev/sdc from attaching to the instance.

You are a solutions architect who has been asked to do some consulting for a US company that produces re-useable rocket parts. They have a new web application that needs to be built and this application must be stateless. Which three services could you use to achieve this? A. AWS Storage Gateway, Elasticache & ELB B. ELB, Elasticache & RDS C. Cloudwatch, RDS & DynamoDb D. RDS, DynamoDB & Elasticache.

D. RDS, DynamoDB & Elasticache.

What does RRS stand for when talking about S3? A. Redundancy Removal System B. Relational Rights Storage C. Regional Rights Standard D. Reduced Redundancy Storage

D. Reduced Redundancy Storage

A company has a workflow that sends video files from their on-premise system to AWS for transcoding. They use EC2 worker instances that pull transcoding jobs from SQS. Why is SQS an appropriate service for this scenario? A. SQS guarantees the order of the messages. B. SQS synchronously provides transcoding output. C. SQS checks the health of the worker instances. D. SQS helps to facilitate horizontal scaling of encoding tasks.

D. SQS helps to facilitate horizontal scaling of encoding tasks.

While creating an Amazon RDS DB, your first task is to set up a DB ______ that controls what IP addresses or EC2 instances have access to your DB Instance. A. Security Pool B. Secure Zone C. Security Token Pool D. Security Group

D. Security Group

Can I use Provisioned IOPS with RDS? A. Only Oracle based RDS B. No C. Only with MSSQL based RDS D. Yes for all RDS instances

D. Yes for all RDS instances

For each DB Instance class, what is the maximum size of associated storage capacity?

For most engines, up to 6 TB; for SQL Server, up to 16 TB.

*Amazon's ElastiCache uses which two engines?* * Reddit & Memcrush * Redis & Memory * MyISAM & InnoDB * Redis & Memcached

Redis & Memcached

*An application with a 150 GB relational database runs on an EC2 instance. While the application is used infrequently with small peaks in the morning and evening, what is the MOST cost effective storage type among the options below?* * Amazon EBS provisioned IOPS SSD * Amazon EBS Throughput Optimized HDD * Amazon EBS General Purpose SSD * Amazon EFS

*Amazon EBS General Purpose SSD* Since the database is used infrequently and not throughout the day, and the question asks for the MOST cost-effective storage type, the preferred choice is EBS General Purpose SSD over EBS Provisioned IOPS SSD. ----------------------------------- The minimum volume size for Throughput Optimized HDD is 500 GB. As per our scenario, we need 150 GB only. Hence, option C, Amazon EBS General Purpose SSD, is the most cost-effective choice. NOTE: SSD-backed volumes are optimized for transactional workloads involving frequent read/write operations with small I/O size, where the dominant performance attribute is *IOPS*. The question focuses on a relational DB, where input/output operations per second matter most, so gp2 is a good option in this case. Since the question does not mention any mission-critical low-latency requirement, PIOPS is not required. HDD-backed volumes are optimized for large streaming workloads where *throughput* (measured in MiB/s) is a better performance measure than IOPS.

*There is an application which consists of EC2 Instances behind a classic ELB. An EC2 proxy is used for content management to backend instances. The application might not be able to scale properly. Which of the following can be used to scale the proxy and backend instances appropriately?* (Choose 2) * Use Auto Scaling for the proxy servers. * Use Auto Scaling for backend instances. * Replace the Classic ELB with Application ELB. * Use Application ELB for both the frontend and backend instances.

*Use Auto Scaling for the proxy servers.* *Use Auto Scaling for the backend instances.* When you see a requirement for scaling, consider the Auto Scaling service provided by AWS. This can be used to scale both the proxy servers and the backend instances.

*Which of the following is correct?* * # of Regions > # of Availability Zones > # of Edge Locations * # of Availability Zones > # of Regions > # of Edge Locations * # of Availability Zones > # of Edge Locations > # of Regions * # of Edge Locations > # of Availability Zones > # of Regions

*# of Edge Locations > # of Availability Zones > # of Regions* The number of Edge Locations is greater than the number of Availability Zones, which is greater than the number of Regions.

*What is the minimum file size that I can store on S3?* * 1MB * 0 bytes * 1 byte * 1KB

*0 bytes*

*When you create an EC2 instance, select Detailed Monitoring. At what frequency will the EBS volume automatically send metrics to Amazon CloudWatch?* * 1 * 3 * 10 * 15

*1* When Detailed Monitoring is enabled, the metrics are sent every minute, so the other options are incorrect.

*What is the maximum size of the CIDR block you can have for a VPC?* * /16 * /32 * /28 * /10

*/16* The largest CIDR block you can have for a VPC is /16, which corresponds to 65,536 IP addresses.
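
For illustration, a boto3 sketch creating a VPC with the largest allowed CIDR block; the 10.0.0.0/16 range is an arbitrary example:

```python
import boto3

ec2 = boto3.client("ec2")

# /16 is the largest CIDR block a VPC can have (65,536 addresses).
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")  # example range
print(vpc["Vpc"]["VpcId"])
```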

*What is the minimum size of a General Purpose SSD EBS Volume?* * 1MB * 1GiB * 1byte * 1GB

*1GiB* SSD volumes must be between 1 GiB - 16 TiB.

*How quickly can objects be restored from Glacier?* * 2 hours * 1 hour * 3-5 hours * 30 minutes

*3-5 hours* You can expect most restore jobs initiated via the Amazon S3 APIs or Management Console to complete in 3-5 hours. Expedited restore is available at a price.
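
For illustration, a boto3 sketch initiating a standard-tier restore of an archived object through the S3 API; the bucket and key are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Standard retrievals typically complete in 3-5 hours;
# Tier="Expedited" is faster but costs more.
s3.restore_object(
    Bucket="example-archive-bucket",  # hypothetical bucket
    Key="backups/2017/archive.tar",   # hypothetical key
    RestoreRequest={
        "Days": 7,  # how long the restored copy stays available
        "GlacierJobParameters": {"Tier": "Standard"},
    },
)
```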

*How many IP addresses are reserved by AWS for internal purposes in a CIDR block that you can't use?* * 5 * 2 * 3 * 4

*5* AWS reserves five IP addresses in each subnet for internal purposes: the first four and the last one.

*When you define a CIDR block with an IP address range, you can't use all the IP addresses for its own networking purposes. How many IP addresses does AWS reserve?* * 5 * 2 * 3 * 4

*5* AWS reserves the first four addresses and the last IP address for internal purposes. ----------------------------------- B, C and D are incorrect. AWS reserves the first four addresses and the last IP address of every subnet for internal purposes, so they can't be used by the customers.

*You have an application running in us-west-2 requiring 6 EC2 Instances running at all times. With 3 Availability Zones in the region, viz. us-west-2a, us-west-2b, and us-west-2c, which of the following deployments provides fault tolerance if any Availability Zone in us-west-2 becomes unavailable?* (Choose 2) * 2 EC2 Instances in us-west-2a, 2 EC2 Instances in us-west-2b, and 2 EC2 Instances in us-west-2c * 3 EC2 Instances in us-west-2a, 3 EC2 Instances in us-west-2b, and no EC2 Instances in us-west-2c * 4 EC2 Instances in us-west-2a, 2 EC2 Instances in us-west-2b, and 2 EC2 Instances in us-west-2c * 6 EC2 Instances in us-west-2a, 6 EC2 Instances in us-west-2b, and no EC2 Instances in us-west-2c * 3 EC2 Instances in us-west-2a, 3 EC2 Instances in us-west-2b, and 3 EC2 Instances in us-west-2c

*6 EC2 Instances in us-west-2a, 6 EC2 Instances in us-west-2b, and no EC2 Instances in us-west-2c* *3 EC2 Instances in us-west-2a, 3 EC2 Instances in us-west-2b, and 3 EC2 Instances in us-west-2c* Option D (us-west-2a: 6, us-west-2b: 6, us-west-2c: 0): if us-west-2a goes down, we still have 6 instances in us-west-2b; if us-west-2b goes down, we still have 6 instances in us-west-2a; if us-west-2c goes down, we still have 12 instances across us-west-2a and us-west-2b. Option E (us-west-2a: 3, us-west-2b: 3, us-west-2c: 3): whichever single AZ goes down, we still have 6 instances running across the other two. ----------------------------------- Option A is incorrect because, if one AZ becomes unavailable, you would only have 4 instances available, which does not meet the requirements. Option B is incorrect because, if either us-west-2a or us-west-2b becomes unavailable, you would only have 3 instances available. Option C is incorrect because, if us-west-2a becomes unavailable, you would only have 4 instances available. NOTE: In this scenario, we need 6 instances running at all times, even when one AZ is down.

*You have been tasked with creating a VPC network topology for your company. The VPC network must support both internet-facing applications and internal-facing applications accessed only over VPN. Both internet-facing and internal-facing applications must be able to leverage at least 3 AZs for high availability. How many subnets must you create within your VPC to accommodate these requirements?* * 2 * 3 * 4 * 6

*6* Since each subnet corresponds to one Availability Zone and you need 3 AZs for both the internet and intranet applications, you will need 6 subnets.
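
For illustration, a boto3 sketch creating the six subnets (one public and one private per AZ); the VPC ID, AZ names, and CIDR ranges are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")
vpc_id = "vpc-0123456789abcdef0"  # placeholder VPC

azs = ["us-west-2a", "us-west-2b", "us-west-2c"]  # hypothetical AZs

# One public and one private subnet per AZ = 6 subnets total.
for i, az in enumerate(azs):
    ec2.create_subnet(VpcId=vpc_id, AvailabilityZone=az,
                      CidrBlock=f"10.0.{i}.0/24")        # public tier
    ec2.create_subnet(VpcId=vpc_id, AvailabilityZone=az,
                      CidrBlock=f"10.0.{i + 100}.0/24")  # private tier
```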

*What is the availability of S3-OneZone-IA?* * 99.90% * 100% * 99.50% * 99.99%

*99.50%* OneZone-IA is only stored in one Zone. While it has the same Durability, it may be less available than normal S3 or S3-IA.

*What is the availability of objects stored in S3?* * 99.99% * 100% * 99.90% * 99%

*99.99%*

*For which of the following scenarios should a Solutions Architect consider using Elastic Beanstalk?* (Choose 2) * A Web application using Amazon RDS * An Enterprise Data Warehouse * A long running worker process * A Static website * A management task run once on a nightly basis

*A Web application using Amazon RDS.* *A long running worker process.* AWS Documentation clearly mentions that the Elastic Beanstalk component can be used to create Web Server environments and Worker environments.

*You are trying to establish a VPC peering connection with another VPC, and you discover that there seem to be a lot of limitations and rules when it comes to VPC peering. Which of the following are not VPC peering limitations or rules?* (Choose 2) * A cluster placement group cannot span peered VPCs * You cannot create a VPC peering connection between VPCs with matching or overlapping CIDR blocks. * You cannot have more than one VPC peering connection between the same VPCs at the same time. * You cannot create a VPC peering connection between VPCs in different regions.

*A cluster placement group cannot span peered VPCs.* *You cannot create a VPC peering connection between VPCs in different regions.* Cluster Placement Groups can span VPCs, but not AZs. In Jan 2018 AWS introduced inter-Region VPC Peering.

*What does an AWS Region consist of?* * A console that gives you quick, global picture of your cloud computing environment. * A collection of data centers that is spread evenly around a specific continent. * A collection of databases that can only be accessed from a specific geographic region. * A distinct location within a geographic area designed to provide high availability to specific geography.

*A distinct location within a geographic area designed to provide high availability to a specific geography* Each region is a separate geographic area. Each region has multiple, isolated locations known as Availability Zones.

*You have been instructed to establish a successful site-to-site VPN connection from your on-premises network to the VPC (Virtual Private Cloud). As an architect, which of the following pre-requisites should you ensure are in place for establishing the site-to-site VPN connection.* (Choose 2) * The main route table to route traffic through a NAT instance * A public IP address on the customer gateway for the on-premises network * A virtual private gateway attached to the VPC * An Elastic IP address to the Virtual Private Gateway

*A public IP address on the customer gateway for the on-premises network* *A virtual private gateway attached to the VPC* A virtual private gateway is the VPN concentrator on the Amazon side of the VPN connection. You create a virtual private gateway and attach it to the VPC from which you want to create the VPN connection. A customer gateway is a physical device or software application on your side of the VPN connection. ----------------------------------- Option A is incorrect since a NAT instance is not required to route traffic via the VPN connection. Option D is incorrect since the Virtual Private Gateway is managed by AWS.

*Your company is planning on deploying an application which will consist of a web and database tier. The database tier should not be accessible from the Internet. How would you design the networking part of the application?* (Choose 2) * A public subnet for the web tier * A private subnet for the web tier * A public subnet for the database tier * A private subnet for the database tier

*A public subnet for the web tier* *A private subnet for the database tier* ----------------------------------- Option B is incorrect since users will not be able to access the web application if it is placed in a private subnet. Option C is incorrect since the question mentions that the database should not be accessible from the internet.

*What is an AWS region?* * A region is an independent data center, located in different countries around the globe. * A region is a geographical area divided into Availability Zones. Each region contains at least two Availability Zones. * A region is a collection of Edge Locations available in specific countries. * A region is a subset of AWS technologies. For example, the Compute region consists of EC2, ECS, Lamda, etc.

*A region is a geographical area divided into Availability Zones.* Each region contains at least two Availability Zones.

*You are building an automated transcription service in which Amazon EC2 worker instances process an uploaded audio file and generate a text file. You must store both these files in the same durable storage until the text file is retrieved. You do not know what the storage capacity requirements are. Which storage option is both cost-efficient and scalable?* * Multiple Amazon EBS Volume with snapshots. * A single Amazon Glacier Vault. * A single Amazon S3 bucket. * Multiple instance stores.

*A single Amazon S3 bucket.* Amazon S3 is the perfect storage solution for audio and text files. It is a highly available and durable storage service.

*You are a developer, and you want to create, publish, maintain, monitor, and secure APIs on a massive scale. You don't want to deal with the back-end resources. What service should you use to do this?* * API Gateway * AWS Lambda * AWS Config * AWS CloudTrail

*API Gateway* Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. ----------------------------------- AWS Lambda lets you run code without provisioning or managing servers. AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. By using this service, you can record AWS Management Console actions and API calls.

*You are deploying a new mobile application and want to deploy everything serverless since you don't want to manage any infrastructure. What services should you choose?* * API Gateway, AWS Lambda, and EBS volumes * API Gateway, AWS Lambda, and S3 * API Gateway, AWS Lambda, and EC2 instances * API Gateway, AWS Lambda, and EC2 instances, and EBS

*API Gateway, AWS Lambda, and S3* API Gateway, AWS Lambda, and S3 are all serverless services. ----------------------------------- EBS volumes and EC2 instances need to be managed manually; therefore, they can't be called serverless.
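
For illustration, a minimal Lambda handler of the kind API Gateway would invoke for such a mobile backend; the event shape assumes the API Gateway Lambda proxy integration:

```python
import json

def lambda_handler(event, context):
    # Assumes the API Gateway Lambda proxy integration event format.
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")  # hypothetical query parameter
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```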

*A company wants to have a fully managed data store in AWS. It should be a compatible MySQL database, which is an application requirement. Which of the following databases engines can be used for this purpose?* * AWS RDS * AWS Aurora * AWS DynamoDB * AWS Redshift

*AWS Aurora* Amazon Aurora (Aurora) is a fully managed, MySQL- and PostgreSQL-compatible, relational database engine. It combines the speed and reliability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. It delivers up to five times the throughput of MySQL and up to three times the throughput of PostgreSQL without requiring changes to most of your existing applications. *Note:* RDS is a generic service providing relational databases; it supports six database engines: Aurora, MySQL, MariaDB, PostgreSQL, Oracle, and Microsoft SQL Server. Our task is to select a MySQL-compatible database from the options provided. Out of the options listed, *Amazon Aurora* is a MySQL- and PostgreSQL-compatible enterprise-class database. Hence Option B is the answer.

*A database is required for a Two-Tier application. The data would go through multiple schema changes. The database needs to be durable, ACID compliant and changes to the database should not result in database downtime. Which of the following is the best option for data storage?* * AWS S3 * AWS Redshift * AWS DynamoDB * AWS Aurora

*AWS Aurora* As per AWS documentation, Aurora does support schema changes. Amazon Aurora is a MySQL-compatible database that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. Amazon Aurora has taken a common data definition language (DDL) statement that typically requires hours to complete in MySQL and made it near-instantaneous, i.e., 0.15 seconds for a 100 GB table on an r3.8xlarge instance. *Note:* Amazon DynamoDB is schema-less, in that the data items in a table need not have the same attributes or even the same number of attributes. Hence it is not a solution. In Aurora, when a user issues a DDL statement, the database updates the INFORMATION_SCHEMA system table with the new schema. In addition, the database timestamps the operation, records the old schema into a new system table (Schema Version Table), and propagates this change to read replicas.

*A company wants to create a standard template for deployment of their infrastructure. These would also be used to provision resources in another region during disaster recovery scenarios. Which AWS service can be used in this regard?* * Amazon Simple Workflow Service * AWS Elastic Beanstalk * AWS CloudFormation * AWS OpsWorks

*AWS CloudFormation* AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion. You can use AWS CloudFormation's sample templates or create your own templates to describe the AWS resources, and any associated dependencies or runtime parameters, required to run your application. You don't need to figure out the order for provisioning AWS services or the subtleties of making those dependencies work. CloudFormation takes care of this for you. After the AWS resources are deployed, you can modify and update them in a controlled and predictable way, in effect applying version control to your AWS infrastructure the same way you do with your software. You can also visualize your templates as diagrams and edit them using a drag-and-drop interface in the AWS CloudFormation Designer.
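
For illustration, a boto3 sketch that deploys a deliberately tiny template (a single S3 bucket); the stack and resource names are hypothetical:

```python
import json
import boto3

cfn = boto3.client("cloudformation")

# A minimal template: CloudFormation provisions whatever Resources declares.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "DataBucket": {"Type": "AWS::S3::Bucket"},  # hypothetical resource
    },
}

cfn.create_stack(
    StackName="example-stack",  # hypothetical name
    TemplateBody=json.dumps(template),
)
```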

*You are using the Virginia region as your primary data center for hosting AWS services. You don't use any other region right now, but you want to keep yourself ready for any disaster. During a disaster, you should be able to deploy your infrastructure like your primary region in a matter of minutes. Which AWS service should you use?* * Amazon CloudWatch * Amazon EC2 AMIs with EBS snapshots * Amazon Elastic Beanstalk * AWS CloudFormation

*AWS CloudFormation* AWS CloudFormation gives you an easy way to capture your existing infrastructure as code and then deploy a replica of your existing architecture and infrastructure into a different region in a matter of minutes. ------------------------- Amazon CloudWatch is used for monitoring, and Amazon EC2 AMIs with EBS snapshots will deploy only the EC2 instance. But what about the other AWS services you have deployed? Amazon Elastic Beanstalk can't be used for the entire infrastructure in different regions.

*You work as an architect for a consulting company. The consulting company normally creates the same set of resources for their clients. They want the same way of building templates, which can be used to deploy the resources to the AWS accounts for the various clients. Which of the following service can help fulfill this requirement?* * AWS Elastic Beanstalk * AWS SQS * AWS CloudFormation * AWS SNS

*AWS CloudFormation* AWS CloudFormation is a service that helps you model and set up your Amazon Web Services resources so that you can spend less time managing those resources and more time focusing on your applications that run in AWS. You create a template that describes all the AWS resources that you want (like Amazon EC2 instances or Amazon RDS DB instances), and AWS CloudFormation takes care of provisioning and configuring those resources for you. ----------------------------------- Option A is invalid because, while Elastic Beanstalk is good for getting a certain set of defined resources up and running, it cannot be used to duplicate infrastructure as code. Option B is invalid because this is the Simple Queue Service, which is used for sending messages. Option D is invalid because this is the Simple Notification Service, which is used for sending notifications.

*A company has an entire infrastructure hosted on AWS. It wants to create code templates used to provision the same set of resources in another region in case of a disaster in the primary region. Which of the following services can help in this regard?* * AWS Beanstalk * AWS CloudFormation * AWS CodeBuild * AWS CodeDeploy

*AWS CloudFormation* AWS CloudFormation provisions your resources in a safe, repeatable manner, allowing you to build and rebuild your infrastructure and applications without having to perform manual actions or write custom scripts. CloudFormation takes care of determining the right operations to perform when managing your stack, and rolls back changes automatically if errors are detected.

*A company is planning to adopt Infrastructure as Code (IaC), since the priority from senior management is to achieve as much automation as possible. Which of the following components would help them achieve this purpose?* * AWS Beanstalk * AWS CloudFormation * AWS CodeBuild * AWS CodeDeploy

*AWS CloudFormation* CloudFormation templates express your architecture as code, which directly supports the Infrastructure as Code requirement in the question. AWS CloudFormation is a service that helps you model and set up your Amazon Web Services resources so that you can spend less time managing those resources and more time focusing on your applications that run on AWS. All you have to do is create a template that describes all the AWS resources that you want (Amazon EC2 instances or Amazon RDS DB instances), and AWS CloudFormation takes care of provisioning and configuring those resources for you.

*To audit the API calls made to your account, which AWS service should you be using?* * AWS Systems Manager * AWS Lambda * AWS CloudWatch * AWS CloudTrail

*AWS CloudTrail* AWS CloudTrail is used to audit the API calls. ----------------------------------- AWS Systems Manager gives you visibility and control of your infrastructure on AWS. AWS Lambda lets you run code without provisioning or managing servers. AWS CloudWatch is used to monitor the AWS resources such as EC2 servers; it provides various metrics in terms of CPU, memory, and so on.

*Your company is planning on launching a set of EC2 Instances for hosting their production-based web application. As an architect you have to instruct the operations department on which service they can use for the monitoring purposes. Which of the following would you recommend?* * AWS CloudTrail * AWS CloudWatch * AWS SQS * AWS SNS

*AWS CloudWatch* Amazon CloudWatch is a monitoring and management service built for developers, system operators, site reliability engineers (SRE), and IT managers. CloudWatch provides you with data and actionable insights to monitor your applications, understand and respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. CloudWatch collects monitoring and operational data in the form of logs, metrics, and events, providing you with a unified view of AWS resources, applications, and services that run on AWS and on-premises servers. ----------------------------------- Option A is incorrect since this is used for API monitoring. Option C is incorrect since this is used for working with messages in queues. Option D is incorrect since this is used for sending notifications.

*You had a code deployment over the weekend, and someone pushed something into your instance. Now the web server is not accessible from the Internet. You are looking to find out what change has been made in the system, and you want to track it. Which AWS service are you going to use for this?* * AWS CloudTrail * AWS Config * AWS Lambda * AWS CloudWatch

*AWS Config* AWS Config continuously records configuration changes of AWS resources. ------------------------- CloudTrail is a service that logs all API calls, AWS Lambda lets you run code without provisioning or managing servers, and AWS CloudWatch is used to monitor cloud resources in general.

*What is a way of connecting your data center with AWS?* * AWS Direct Connect * Optical fiber * Using an Infiniband cable * Using a popular Internet service from a vendor such as Comcast or AT&T

*AWS Direct Connect* ----------------------------------- Your colocation or MPLS provider may use an optical fiber or Infiniband cable behind the scenes. If you want to connect over the Internet, then you need a VPN.

*Your company authenticates users in a very disconnected network requiring each user to have several username/password combinations for different applications. You've been tasked to consolidate and migrate services to the cloud and reduce the number of usernames and passwords employees need to use. What is your best recommendation?* * AWS Directory Services allows users to sign in with their existing corporate credentials - reducing the need for additional credentials. * Create two Active Directories - one for the cloud and one for on-premises - reducing username/password combinations to two. * Require users to use third-party identity providers to log-in for all services. * Build out Active Directory on EC2 instances to gain more control over user profiles.

*AWS Directory Service allows users to sign in with their existing corporate credentials - reducing the need for additional credentials.* AWS Directory Service enables your end users to use their existing corporate credentials when accessing AWS applications. Once you've been able to consolidate services to AWS, you won't have to create new credentials; you'll be able to let the users use their existing username/password. ----------------------------------- Option B is incorrect; one Active Directory can be used for both on-premises and the cloud, so this isn't the best option provided. Option C is incorrect; this won't always reduce the number of username/password combinations. Option D is incorrect; this takes more effort and requires additional management compared to using a managed service.

*A company wants to have a NoSQL database hosted on the AWS Cloud, but do not have the necessary staff to manage the underlying infrastructure. Which of the following choices would be ideal for this requirement?* * AWS Aurora * AWS RDS * AWS DynamoDB * AWS Redshift

*AWS DynamoDB* Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB lets you offload the administrative burdens of operating and scaling a distributed database, so that you don't have to worry about hardware provisioning, setup and configuration, replication, software patching, or cluster scaling.

*Your company manages an application that currently allows users to upload images to an S3 bucket. These images are picked up by EC2 Instances for processing and then placed in another S3 bucket. You need an area where the metadata for these images can be stored. Which of the following would be an ideal data store for this?* * AWS Redshift * AWS Glacier * AWS DynamoDB * AWS SQS

*AWS DynamoDB* AWS DynamoDB is the best, light-weight and durable storage option for metadata. ----------------------------------- Option A is incorrect because this is normally used for petabyte-scale storage. Option B is incorrect because this is used for archive storage. Option D is incorrect because this is used for messaging purposes.

*A company wants to implement a data store AWS. The data store needs to have the following requirements.* *1) Completely managed by AWS* *2) Ability to store JSON objects efficiently* *3) Scale based on demand* *Which of the following would you use as the data store?* * AWS Redshift * AWS DynamoDB * AWS Aurora * AWS Glacier

*AWS DynamoDB* Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB lets you offload the administrative burdens of operating and scaling a distributed database, so that you don't have to worry about hardware provisioning, setup and configuration, replication, software patching, or cluster scaling. It is ideal for storing JSON based objects. ----------------------------------- Option A is incorrect since this is normally used to host a data warehousing solution. Option C is incorrect since this is used to host a MySQL database. Option D is incorrect since this is used for archive storage.

*An application sends images to S3. The metadata for these images needs to be saved in persistent storage and is required to be indexed. Which one of the following can be used for the underlying metadata storage?* * AWS Aurora * AWS S3 * AWS DynamoDB * AWS RDS

*AWS DynamoDB* The most efficient storage mechanism for just storing metadata is DynamoDB. DynamoDB is normally used in conjunction with the Simple Storage service. So, after storing the images in S3, you can store metadata in DynamoDB. You can also create secondary indexes for DynamoDB Tables.
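
For illustration, a boto3 sketch writing one image's metadata to a table; the table name, key schema, attributes, and values are made up:

```python
import boto3

# Hypothetical table with "image_id" as its partition key.
table = boto3.resource("dynamodb").Table("image-metadata")

# Store the S3 location plus whatever attributes need to be indexed/queried.
table.put_item(Item={
    "image_id": "img-0001",
    "bucket": "uploads-bucket",
    "s3_key": "2018/05/photo.jpg",
    "width": 1024,
    "height": 768,
    "uploaded_by": "user-42",
})
```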

*A company has a requirement to implement block level storage. Each storage device will store around 100 GB of data. Which of the following can be used to fulfill this requirement?* * AWS EBS Volumes * AWS S3 * AWS Glacier * AWS EFS

*AWS EBS Volumes* For block level storage, you need to consider EBS Volumes. ----------------------------------- Options B and C are incorrect since they provide object level storage. Option D is incorrect since this is file level storage.

*What options can be used to host an application that uses NGINX and is scalable at any point in time?* (Choose 2) * AWS EC2 * AWS Elastic Beanstalk * AWS SQS * AWS ELB

*AWS EC2* *AWS Elastic Beanstalk* NGINX is open source software for web serving, reverse proxying, caching, load balancing, etc. It complements the load balancing capabilities of Amazon ELB and ALB by adding support for multiple HTTP, HTTP/2, and SSL/TLS services, content-based routing rules, caching, Auto Scaling support, and traffic management policies. NGINX can be hosted on an EC2 instance through a series of clear steps: launch an EC2 instance through the console, SSH into the instance, and use the command yum install -y nginx to install NGINX. Also make sure that it is configured to restart automatically after a reboot. It can also be installed with an Elastic Beanstalk service. To enable the NGINX proxy server with your Tomcat application, you must add a configuration file to .ebextensions in the application source bundle that you upload to Elastic Beanstalk.

*Your company is planning on developing a new application. Your development team needs a quick environment setup in AWS using NGINX as the underlying web server environment. Which of the following services can be used to quickly provision such an environment?* (Please select 2 correct options.) * AWS EC2 * AWS Elastic Beanstalk * AWS SQS * AWS ELB

*AWS EC2* *AWS Elastic Beanstalk* NGINX is open source software for web serving, reverse proxying, caching, load balancing, etc. It complements the load balancing capabilities of Amazon ELB and ALB by adding support for multiple HTTP, HTTP/2, and SSL/TLS services, content-based routing rules, caching, Auto Scaling support, and traffic management policies. NGINX can be hosted on an EC2 instance through a series of clear steps: launch an EC2 instance through the console, SSH into the instance, and use the command yum install -y nginx to install NGINX. Also, make sure that it is configured to restart automatically after a reboot. It can also be installed with an Elastic Beanstalk service. To enable the NGINX proxy server with your Tomcat application, you must add a configuration file to .ebextensions in the application source bundle that you upload to Elastic Beanstalk.

*Your company is big on building container-based applications. Currently they use Kubernetes for their on-premises Docker-based orchestration. They want to move to AWS and preferably not have to manage the infrastructure for the underlying orchestration service. Which of the following could be used for this purpose?* * AWS DynamoDB * AWS ECS with Fargate * AWS EC2 with Kubernetes installed * AWS Elastic Beanstalk

*AWS ECS with Fargate* Amazon ECS has two modes: Fargate launch type and EC2 launch type. With the Fargate launch type, all you have to do is package your application in containers, specify the CPU and memory requirements, define networking and IAM policies, and launch the application. ----------------------------------- Option A is incorrect since this is a fully managed NoSQL database. Option C is incorrect since this would add maintenance overhead for the company, and the question mentions that the company does not want to manage infrastructure. Option D is incorrect since this is used to deploy applications but does not provide a managed orchestration service.

*A company has a requirement for archival of 6TB of data. There is an agreement with the stakeholders for an 8-hour agreed retrieval time. Which of the following can be used as the MOST cost-effective storage option?* * AWS S3 Standard * AWS S3 Infrequent Access * AWS Glacier * AWS EBS Volumes

*AWS Glacier* Amazon Glacier is the perfect solution for this. Since the agreed time frame for retrieval is met at 8 hours, this will be the most cost effective option.

*You are hosting your application on EC2 servers and using an EBS volume to store the data. Your security team needs all the data in the disk to be encrypted. What should you use to achieve this?* * IAM Access Key * AWS Certificate Manager * AWS KMS API * AWS STS

*AWS KMS API* AWS KMS helps you create and control the encryption keys used to encrypt your data. ----------------------------------- An IAM access key is used for programmatic access to AWS APIs, AWS Certificate Manager is used to generate SSL certificates, and the Security Token Service is used to generate temporary credentials.
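
For illustration, a minimal boto3 sketch of creating a KMS-encrypted EBS volume; the region and size are placeholder assumptions, and omitting KmsKeyId uses the account's default aws/ebs key:

```python
# Sketch: create an EBS volume encrypted at rest with a KMS-managed key.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,                 # GiB, placeholder size
    VolumeType="gp2",
    Encrypted=True,           # encryption at rest via AWS KMS
    # KmsKeyId="arn:aws:kms:us-east-1:111122223333:key/placeholder",
)
print(volume["VolumeId"])
```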

*The security policy of an organization requires an application to encrypt data before writing to the disk. Which solution should the organization use to meet this requirement?* * AWS KMS API * AWS Certificate Manager * API Gateway with STS * IAM Access Key

*AWS KMS API* AWS Key Management Service (AWS KMS) is a managed service that makes it easy for you to create and control the encryption keys used to encrypt your data. AWS KMS is integrated with other AWS services, including Amazon Elastic Block Store (Amazon EBS), Amazon Simple Storage Service (Amazon S3), Amazon Redshift, Amazon Elastic Transcoder, Amazon WorkMail, Amazon Relational Database Service (Amazon RDS), and others, to make it simple to encrypt your data with encryption keys that you manage. ----------------------------------- Option B is incorrect - AWS Certificate Manager can be used to generate SSL certificates that encrypt traffic in transit, but not at rest. Option C is incorrect - it is used for issuing tokens while using the API Gateway for traffic in transit. Option D is used for secure access to EC2 Instances.

*A company has a set of EC2 Instances that store critical data on EBS Volumes. The IT security team has now mandated that the data on the disks needs to be encrypted. Which of the following can be used to achieve this purpose?* * AWS KMS API * AWS Certificate Manager * API Gateway with STS * IAM Access Key

*AWS KMS API* AWS Key Management Service (AWS KMS) is a managed service that makes it easy to create and control the encryption keys used to encrypt your data. AWS KMS is integrated with other AWS services, including Amazon Elastic Block Store (Amazon EBS), Amazon Simple Storage Service (Amazon S3), Amazon Redshift, Amazon Elastic Transcoder, Amazon WorkMail, Amazon Relational Database Service (Amazon RDS), and others, to make it simple to encrypt your data with encryption keys that you manage. ------------------------------ Option B is incorrect. AWS Certificate Manager can be used to generate SSL certificates that encrypt traffic in transit, but not at rest. Option C is incorrect. This is used for issuing tokens while using the API Gateway for traffic in transit. Option D is incorrect. This is used for secure access to EC2 Instances.

*You are starting a delivery service and planning to use 5,000 cars in the first phase. You are going to leverage an AWS IoT solution for collecting all the data from the cars. Which AWS service can you use to ingest all the data from IoT in real time?* * AWS Kinesis * AWS Lambda * AWS API Gateway * AWS OpsWorks

*AWS Kinesis* AWS Kinesis allows you to ingest data in real time. ------------------------- AWS Lambda lets you run code without provisioning or managing servers. API Gateway is a managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet.
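
A minimal sketch of such ingestion with boto3, assuming a hypothetical Kinesis stream named iot-car-data already exists; using the car ID as the partition key keeps each car's records ordered within a shard:

```python
# Sketch: push one telemetry record from a car into a Kinesis stream.
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

record = {"car_id": "car-0042", "lat": 47.61, "lon": -122.33, "speed_kmh": 54}

kinesis.put_record(
    StreamName="iot-car-data",                    # placeholder stream name
    Data=json.dumps(record).encode("utf-8"),
    PartitionKey=record["car_id"],                # same car -> same shard, ordered
)
```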

*A company is planning on building an application using the services available on AWS. This application will be stateless in nature, and the service must have the ability to scale according to the demand. Which of the following would be an ideal compute service to use in this scenario?* * AWS DynamoDB * AWS Lambda * AWS S3 * AWS SQS

*AWS Lambda* A stateless application is an application that needs no knowledge of previous interactions and stores no session information. Such an example could be an application that, given the same input, provides the same response to any end user. A stateless application can scale horizontally since any request can be serviced by any of the available compute resources (e.g. EC2 instances, AWS Lambda functions).

*A development team wants to deploy a complete serverless application on the AWS Cloud. This application will be invoked by users across the globe. Which of the following services would be an ideal component in such an architecture?* (Choose 2) * AWS Lambda * API Gateway * AWS RDS * AWS EC2

*AWS Lambda* *API Gateway* AWS Lambda is the serverless compute component provided by AWS. You can easily run your code on this service without provisioning servers, and API Gateway can then be used as the invocation point for the AWS Lambda function.
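
As a small illustration, a Lambda handler written for the API Gateway proxy integration; the greeting logic is an assumption, but the event and response shapes follow the proxy-integration contract:

```python
# Sketch: Lambda handler invoked through API Gateway (proxy integration).
import json

def lambda_handler(event, context):
    # API Gateway passes query string parameters inside the event
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```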

*A company wants to build a brand new application on the AWS Cloud. They want to ensure that this application follows the Microservices architecture. Which of the following services can be used to build this sort of architecture?* (Choose 3) * AWS Lambda * AWS ECS * AWS API Gateway * AWS SQS

*AWS Lambda* *AWS ECS* *AWS API Gateway* AWS Lambda is a serverless compute service that allows you to build independent services. The Elastic Container Service (ECS) can be used to manage containers. The API Gateway is a serverless component for managing access to APIs.

*You want to deploy your applications in AWS, but you don't want to host them on any servers. Which service would you choose for doing this?* (Choose two.) * Amazon ElastiCache * AWS Lambda * Amazon API Gateway * Amazon EC2

*AWS Lambda* *Amazon API Gateway* ----------------------------------- Amazon ElastiCache is used to deploy Redis or Memcached protocol-compliant server nodes in the cloud, and Amazon EC2 is a server.

*You manage the IT users for a large organization that is moving many services to AWS, and you want a seamless way for your employees to log in and use the cloud services. You also want to use AWS Managed Microsoft AD and have been asked if users can access services in the on-premises environment. What do you respond with?* * AWS Managed Microsoft AD requires data synchronization and replication to work properly. * AWS Managed Microsoft AD can only be used for cloud or on-premises environments, not both. * AWS Managed Microsoft AD can be used as the Active Directory over VPN or Direct Connect. * AWS Managed Microsoft AD is 100% the same as Active Directory running on a separate EC2 instance.

*AWS Managed Microsoft AD can be used as the Active Directory over VPN or Direct Connect* Because you want to use AWS Managed Microsoft AD, you want to be certain that your users can use the AWS cloud resources as well as services in your on-premises environment. Once you implement VPN or Direct Connect, your AWS Managed Microsoft AD can be used for both cloud services and on-premises services. ----------------------------------- Option A is incorrect; while data can be synchronized from on-premises to the cloud, it is not required. Option B is incorrect; AWS Managed Microsoft AD can be used for both, it is not one or the other. Option D is incorrect; AWS Managed Microsoft AD, being a managed service, limits some capabilities versus running Active Directory by itself on EC2 instances.

*Using which AWS orchestration service can you implement Puppet recipes?* * AWS OpsWorks * AWS Elastic Beanstalk * AWS Elastic Load Balancing * Amazon Redshift

*AWS OpsWorks* AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers. ------------------------- AWS Elastic Beanstalk is a service for deploying applications that orchestrates various AWS services, including EC2, S3, Simple Notification Service, and so on. Elastic Load Balancing is used to balance the load across multiple EC2 servers, and Amazon Redshift is the data warehouse offering of AWS.

*You have been hired as a consultant for a company to implement their CI/CD processes. They currently use an on-premises deployment of Chef for their configuration management on servers. You need to advise them on what they can use on AWS to leverage their existing capabilities. Which of the following services would you recommend?* * Amazon Simple Workflow Service * AWS Elastic Beanstalk * AWS CloudFormation * AWS OpsWorks

*AWS OpsWorks* AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers. OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed, and managed across your Amazon EC2 instances or on-premises compute environments. OpsWorks has three offerings: AWS OpsWorks for Chef Automate, AWS OpsWorks for Puppet Enterprise, and AWS OpsWorks Stacks. All of the other options are incorrect since the only tool which works effectively with the Chef configuration management tool is AWS OpsWorks.

*Your company has more than 50 accounts on AWS, and you are finding it extremely difficult to manage the cost for each account manually. You want to centralize the billing and cost management for all the accounts across the company. Which service would you choose for this?* * Trusted Advisor * AWS Organizations * Billing Console * Centralized Billing

*AWS Organizations* AWS Organizations can help you manage all those accounts with consolidated billing. ----------------------------------- Trusted Advisor gives advice on how to control cost; it does not help with central billing management. Each account has its own billing console; therefore, with 50 accounts, there would be 50 billing consoles to manage. There is no AWS service called Centralized Billing.

*You have several AWS accounts within your company. Every business unit has its own AWS account, and because of that, you are having extreme difficulty managing all of them. You want to centrally control the use of AWS services down to the API level across multiple accounts. Which AWS service can you use for this?* * AWS Centralized Billing * Trusted Advisor * AWS Organizations * Amazon QuickSight

*AWS Organizations* Using AWS Organizations, you can centrally control the use of AWS services down to the API level across multiple accounts. ------------------------- There is no service known as AWS Centralized Billing. Trusted Advisor provides real-time guidance to help you provision your resources following AWS best practices. Amazon QuickSight is the business analytics service of AWS.
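
For illustration, a minimal boto3 sketch of setting up an organization; the member account ID is a placeholder assumption, and FeatureSet="ALL" enables consolidated billing plus policy-based (API-level) controls:

```python
# Sketch: create an organization from the management (payer) account and
# invite an existing account into it.
import boto3

org = boto3.client("organizations")

org.create_organization(FeatureSet="ALL")  # consolidated billing + SCP controls

org.invite_account_to_organization(
    Target={"Id": "111122223333", "Type": "ACCOUNT"}  # placeholder account ID
)
```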

*A company has a requirement to store 100TB of data to AWS. This data will be exported using AWS Snowball and needs to then reside in a database layer. The database should have the facility to be queried from a business intelligence application. Each item is roughly 500KB in size. Which of the following is an ideal storage mechanism for the underlying data layer?* * AWS DynamoDB * AWS Aurora * AWS RDS * AWS Redshift

*AWS Redshift* For data of this size, the ideal storage option is AWS Redshift. Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud. You can start with just a few hundred gigabytes of data and scale to a petabyte or more. This enables you to use your data to acquire new insights for your business and customers. The first step to create a data warehouse is to launch a set of nodes, called an Amazon Redshift cluster. After you provision your cluster, you can upload your data set and then perform data analysis queries. Regardless of the size of the data set, Amazon Redshift offers fast query performance using the same SQL-based tools and business intelligence applications that you use today. ----------------------------------- Option A is incorrect because the maximum item size in DynamoDB is 400KB. Option B is incorrect because Aurora supports up to 64TB of data. Option C is incorrect because we can create MySQL, MariaDB, SQL Server, PostgreSQL, and Oracle RDS DB instances with up to 16 TB of storage.
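
A hedged boto3 sketch of provisioning such a cluster; node type, node count, and credentials are placeholder assumptions:

```python
# Sketch: launch a multi-node Redshift cluster for the imported data set.
import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

redshift.create_cluster(
    ClusterIdentifier="analytics-cluster",
    NodeType="ds2.8xlarge",                # dense-storage nodes, placeholder choice
    ClusterType="multi-node",
    NumberOfNodes=8,                       # placeholder sizing for ~100 TB
    MasterUsername="admin",
    MasterUserPassword="Placeholder123!",  # placeholder; store secrets securely
    DBName="analytics",
)
```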

*Below are the requirements for a data store in AWS:* *a) Fully managed* *b) Integration with existing business intelligence tools* *c) High concurrency workload that generally involves reading and writing all columns for a small number of records at a time* *Which of the following would be an ideal data store for the above requirements?* (Choose 2) * AWS Redshift * AWS DynamoDB * AWS Aurora * AWS S3

*AWS Redshift* *AWS DynamoDB* The question asks for: a) Fully managed b) Integration with existing business intelligence tools - AWS Redshift suits these requirements. c) High concurrency workload that generally involves reading and writing all columns for a small number of records at a time - AWS DynamoDB suits this requirement. ----------------------------------- Option C is incorrect; AWS Aurora is a database and is not suitable for reading and writing a small number of records at a time. Option D is incorrect; AWS S3 cannot be integrated with business intelligence tools.

*There is a requirement for 500 messages to be sent and processed in order. Which service can be used in this regard?* * AWS SQS FIFO * AWS SNS * AWS Config * AWS ELB

*AWS SQS FIFO* Amazon SQS is a reliable and highly scalable managed message queue service for storing messages in transit between application components. FIFO queues complement the existing Amazon SQS standard queues, which offer high throughput, best-effort ordering, and at-least-once delivery. FIFO queues have essentially the same features as standard queues, but provide the added benefits of supporting ordering and exactly-once processing. FIFO queues provide additional features that help prevent unintentional duplicates from being sent by message producers or from being received by message consumers. Additionally, message groups allow multiple separate ordered message streams within the same queue. ----------------------------------- NOTE: Yes, SNS is used to send out messages. SNS is a web service that coordinates and manages the delivery or sending of messages to subscribing endpoints or clients. In Amazon SNS, there are two types of clients-publishers and subscribers-also referred to as producers and consumers. Publishers communicate asynchronously with subscribers by producing and sending a message to a topic, which is a logical access point and communication channel. Subscribers (i.e. web servers, email addresses, Amazon SQS queues, AWS Lambda functions) consume or receive the message or notification over one of the supported protocols (i.e. Amazon SQS, HTTP/S, email, SMS, Lambda) when they are subscribed to the topic. However, there is no mechanism in SNS to maintain message order. The question states that "There is a requirement for 500 messages to be sent and *processed in order*". With SNS, all messages are sent at the same time to all the subscribers.

*An application needs to have a messaging system in AWS. It is of the utmost importance that the order of the messages is preserved and duplicate messages are not sent. Which of the following services can help fulfill this requirement?* * AWS SQS FIFO * AWS SNS * AWS Config * AWS ELB

*AWS SQS FIFO* Amazon SQS is a reliable and highly scalable managed message queue service for storing messages in transit between application components. FIFO queues complement the existing Amazon SQS standard queues, which offer high throughput, best-effort ordering, and at-least-once delivery. FIFO queues have essentially the same features as standard queues, but provide the added benefits of supporting ordering and exactly-once processing. FIFO queues provide additional features that help prevent unintentional duplicates from being sent by message producers or from being received by message consumers. Additionally, message groups allow multiple separate ordered message streams within the same queue. *Note:* As per AWS, SQS FIFO queues will ensure delivery of the message only once, and it will be delivered in sequential order (i.e. First-In-First-Out), whereas SNS cannot guarantee delivery of the message only once. As per the AWS SNS FAQ: *Q: How many times will a subscriber receive each message?* Although most of the time each message will be delivered to your application exactly once, the *distributed nature of Amazon SNS and transient network conditions could result in occasional, duplicate messages at the subscriber end.* Developers should design their applications such that processing a message more than once does not create any errors or inconsistencies. The FIFO queue FAQs state: *High Throughput:* By default, FIFO queues support up to 300 messages per second (300 send, receive, or delete operations per second). When you batch 10 messages per operation (maximum), FIFO queues can support up to 3,000 messages per second. To request a limit increase, file a support request. *Exactly-Once Processing:* A message is delivered once and remains available until a consumer processes and deletes it. Duplicates aren't introduced into the queue. *First-In-First-Out Delivery:* The order in which messages are sent and received is strictly preserved (i.e. First-In-First-Out). Using SQS FIFO queues will satisfy both requirements stated in the question, i.e. messages will not be duplicated and the order of the messages will be preserved.
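
A minimal boto3 sketch of the FIFO behavior described above; the queue name is a placeholder assumption (FIFO queue names must end in .fifo):

```python
# Sketch: create a FIFO queue and send ordered, deduplicated messages.
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")

queue = sqs.create_queue(
    QueueName="orders.fifo",               # FIFO names must end in .fifo
    Attributes={"FifoQueue": "true"},
)

for i in range(500):
    sqs.send_message(
        QueueUrl=queue["QueueUrl"],
        MessageBody=f"message-{i}",
        MessageGroupId="order-stream",      # one group -> strict ordering
        MessageDeduplicationId=f"msg-{i}",  # prevents duplicates (5-minute window)
    )
```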

*A company has a set of Hyper-V and VMware virtual machines. They are now planning on migrating these instances to the AWS Cloud. Which of the following can be used to move these resources to the AWS Cloud?* * DB Migration utility * AWS Server Migration Service * Use AWS Migration Tools * Use AWS Config Tools

*AWS Server Migration Service* AWS Server Migration Service (SMS) is an agentless service which makes it easier and faster for you to migrate thousands of on-premises workloads to AWS. AWS SMS allows you to automate, schedule, and track incremental replication of live server volumes, making it easier for you to coordinate large-scale server migrations.

*You are designing the following application in AWS. Users will use the application to upload videos and images. The files will then be picked up by a worker process for further processing. Which of the below services should be used in the design of the application?* (Choose 2) * AWS Simple Storage Service for storing the videos and images * AWS Glacier for storing the videos and images * AWS SNS for distributed processing of messages by the worker process * AWS SQS for distributed processing of messages by the worker process

*AWS Simple Storage Service for storing the videos and images* *AWS SQS for distributed processing of messages by the worker process* Amazon Simple Storage Service is storage for the Internet. It is designed to make web-scale computing easier for developers. Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. SQS eliminates the complexity and overhead associated with managing and operating message-oriented middleware, and empowers developers to focus on differentiating work. Using SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available. ----------------------------------- Option B is incorrect since this is used for archive storage. Option C is incorrect since this is used as a notification service.
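
For illustration, a minimal boto3 sketch of this decoupled pattern; bucket, file, and queue names are placeholder assumptions:

```python
# Sketch: upload a media file to S3, then queue a message for the worker.
import json
import boto3

s3 = boto3.client("s3")
sqs = boto3.client("sqs")

bucket, key = "media-uploads-bucket", "videos/clip-001.mp4"   # placeholders

s3.upload_file("clip-001.mp4", bucket, key)   # local file, placeholder name

sqs.send_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/111122223333/media-jobs",
    MessageBody=json.dumps({"bucket": bucket, "key": key}),
)
```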

*An application running on EC2 Instances processes sensitive information stored on Amazon S3. This information is accessed over the Internet. The security team is concerned that the Internet connectivity to Amazon S3 could be a security risk. Which solution will resolve the security concerns?* * Access the data through an Internet Gateway. * Access the data through a VPN connection. * Access the data through a NAT Gateway. * Access the data through a VPC endpoint for Amazon S3.

*Access the data through a VPC endpoint for Amazon S3.* A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network. ----------------------------------- Option A is incorrect; an Internet Gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between instances in your VPC and the Internet. Option B is incorrect; a VPN, or Virtual Private Network, allows you to create a secure connection to another network over the Internet. Option C is incorrect; you can use a network address translation (NAT) gateway to enable instances in a private subnet to connect to the Internet or other AWS services, but prevent the Internet from initiating a connection with those instances.
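
A minimal boto3 sketch of creating such a gateway endpoint; the VPC and route table IDs are placeholder assumptions:

```python
# Sketch: gateway VPC endpoint so S3 traffic stays on the AWS network.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",                 # placeholder VPC ID
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],       # routes to S3 bypass the Internet
)
```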

*Power User Access allows ________.* * Full Access to all AWS services and resources. * Users to inspect the source code of the AWS platform * Access to all AWS services except the management of groups and users within IAM. * Read Only access to all AWS services and resources.

*Access to all AWS services except the management of groups and users within IAM.*

*An IAM policy contains which of the following?* (Choose two.) * Username * Action * Service name * AZ

*Action* *Service name* A policy is not location specific and is not limited to a user.

*If an administrator who has root access leaves the company, what should you do to protect your account?* (Choose two.) * Add MFA to root * Delete all the IAM accounts * Change the passwords for all the IAM accounts and rotate keys * Delete all the EC2 instances created by the administrator

*Add MFA to root* *Change the passwords for all the IAM accounts and rotate keys* Deleting all the IAM accounts would be a much more painful task; you would lose all the users. Similarly, you can't delete all the EC2 instances; they may be running critical applications or something meaningful.

*Your Operations department is using an incident-based application hosted on a set of EC2 Instances. These instances are placed behind an Auto Scaling group to ensure the right number of instances are in place to support the application. The Operations department has expressed dissatisfaction with regard to poor application performance at 9:00 AM each day. However, it is also noted that the system performance returns to optimal at 9:45 AM.* *What can be done to ensure that this issue gets fixed?* * Create another Dynamic Scaling Policy to ensure that the scaling happens at 9:00 AM. * Add another Auto Scaling group to support the current one. * Change the Cool Down Timers for the existing Auto Scaling Group. * Add a Scheduled Scaling Policy at 8:30 AM.

*Add a Scheduled Scaling Policy at 8:30 AM* Scheduled scaling can be used to ensure that capacity is raised before 9:00 AM each day. Scaling based on a schedule allows you to scale your application in response to predictable load changes. For example, every week the traffic to your web application starts to increase on Wednesday, remains high on Thursday, and starts to decrease on Friday. You can plan your scaling activities based on the predictable traffic patterns of your web application.
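
For illustration, a hedged boto3 sketch of such a scheduled action; the group name and capacity values are placeholder assumptions, and Recurrence uses cron syntax in UTC:

```python
# Sketch: recurring scheduled action that raises capacity at 08:30 each day.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="incident-app-asg",       # placeholder group name
    ScheduledActionName="scale-out-before-9am",
    Recurrence="30 8 * * *",                       # every day at 08:30 (UTC)
    MinSize=4,
    MaxSize=10,
    DesiredCapacity=6,
)
```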

*You have created an S3 bucket for your application and immediately receive more than 10,000 PUT per second. What should you do to ensure optimal performance?* * There is no need to do anything; S3 will automatically handle this. * Create each file in a separate folder. * Use S3 infrequent access. * Add a random prefix to the key names.

*Add a random prefix to the key names.* Also, when you are "putting" a large number of files, uploads should be optimized with multipart uploads, where the original file is split into multiple parts on the sending side, uploaded in parallel, and composed back into a single object on the receiving side. ----------------------------------- A, B, and C are incorrect. If some optimization gets you better performance, then why not do it? If you create a separate folder for each file, it will be a management nightmare. S3 IA won't give you better performance.
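
A minimal sketch of the key-naming approach described above, with a placeholder bucket name:

```python
# Sketch: spread object keys across partitions with a short random prefix.
import uuid
import boto3

s3 = boto3.client("s3")

def put_with_random_prefix(body: bytes, name: str) -> str:
    key = f"{uuid.uuid4().hex[:8]}/{name}"   # e.g. "3f9c2a1b/report.csv"
    s3.put_object(Bucket="high-throughput-bucket", Key=key, Body=body)
    return key

put_with_random_prefix(b"hello", "report.csv")
```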

*After setting up a VPC peering connection between your VPC and that of your client, the client requests to be able to send traffic between instances in the peered VPCs using private IP addresses. What must you do to make this possible?* * Establish a private peering connection * Add your instance and the client's instance to a Placement Group * Add a route to a Route Table that's associated with your VPC. * Use an IPSec tunnel.

*Add a route to a Route Table that's associated with your VPC.* If a route is added to your Route Table, your client will have access to your instance via private IP address.
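
For illustration, a minimal boto3 sketch of adding that route; the route table ID, peering connection ID, and peer CIDR are placeholder assumptions (the client adds a mirror-image route on their side):

```python
# Sketch: route traffic for the peer VPC's CIDR over the peering connection.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",            # placeholder route table
    DestinationCidrBlock="10.1.0.0/16",              # placeholder peer VPC CIDR
    VpcPeeringConnectionId="pcx-0123456789abcdef0",  # placeholder peering ID
)
```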

*An application consists of the following architecture:* *a) EC2 Instances in a single AZ behind an ELB* *b) A NAT Instance which is used to ensure that instances can download updates from the Internet* *Which of the following can be used to ensure better fault tolerance in this setup?* (Choose 2) * Add more instances in the existing Availability Zone. * Add an Auto Scaling Group to the setup * Add more instances in another Availability Zone. * Add another ELB for more fault tolerance.

*Add an Auto Scaling Group to the setup.* *Add more instances in another Availability Zone.* Adding Auto Scaling to your application architecture is one way to maximize the benefits of the AWS Cloud. When you use Auto Scaling, your applications gain the following benefits: Better fault tolerance. Auto Scaling can detect when an instance is unhealthy, terminate it, and launch an instance to replace it. You can also configure Auto Scaling to use multiple Availability Zones. If one Availability Zone becomes unavailable, Auto Scaling can launch instances in another one to compensate. Better availability. Auto Scaling can help you ensure that your application always has the right amount of capacity to handle the current traffic demands.

*An infrastructure is being hosted in AWS using the following resources:* *a) A couple of EC2 instances serving a Web-Based application* *b) An Elastic Load Balancer in front of the EC2 instances.* *c) An AWS RDS which has Multi-AZ enabled* *Which of the following can be added to the setup to ensure scalability?* * Add another ELB to the setup. * Add more EC2 Instances to the setup. * Enable Read Replicas for the AWS RDS. * Add an Auto Scaling Group to the setup.

*Add an Auto Scaling Group to the setup.* AWS Auto Scaling enables you to configure automatic scaling for the scalable AWS resources for your application in a matter of minutes. AWS Auto Scaling uses the Auto Scaling and Application Auto Scaling services to configure scaling policies for your scalable AWS resources.

*The application workload changes constantly, and to meet that, you keep on changing the hardware type for the application server. Because of this, you constantly need to update the web server with the new IP address. How can you fix this problem?* * Add a load balancer * Add an IPv6 IP address * Add an EIP to it * Use a reserved EC2 instance

*Add an EIP to it* Even if you reserve the instance, you still need to remap the IP address. Even with IPv6 you need to remap the IP addresses. The load balancer won't help because the load balancer also needs to be remapped with the new IP addresses.
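
A minimal boto3 sketch of allocating and associating an EIP; the instance ID is a placeholder assumption:

```python
# Sketch: allocate an Elastic IP once and re-associate it after each
# hardware change, so the web server's connection string never changes.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

eip = ec2.allocate_address(Domain="vpc")

ec2.associate_address(
    InstanceId="i-0123456789abcdef0",      # placeholder instance ID
    AllocationId=eip["AllocationId"],
)
print(eip["PublicIp"])  # this address survives instance replacement
```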

*You are working as an AWS Architect for a start-up company. They have a production website which is two-tier, with web servers in the front end and database servers in the back end. All these database servers are spread across multiple Availability Zones and are stateful instances. You have configured an Auto Scaling group for these servers with a minimum of 2 instances and a maximum of 6 instances. During scale-in of these instances post peak hours, you are observing data loss from these database servers. What feature needs to be configured additionally to avoid data loss and copy data before instance termination?* * Modify the cooldown period to complete custom actions before instance termination. * Add lifecycle hooks to the Auto Scaling group. * Customise the termination policy to complete data copy before termination. * Suspend the termination process to avoid data loss.

*Add lifecycle hooks to the Auto Scaling group.* Adding lifecycle hooks to the Auto Scaling group puts instances into a wait state before termination. During this wait state, you can perform custom activities to retrieve critical operational data from a stateful instance. The default wait period is 1 hour. ----------------------------------- Option A is incorrect, as the cooldown period will not help to copy data from the instance before termination. Option C is incorrect, as a termination policy is used to specify which instances to terminate first during scale-in; configuring a termination policy for the Auto Scaling group will not copy data before instance termination. Option D is incorrect, as suspending the Terminate process will not prevent data loss; it disrupts other processes and prevents scale-in.
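
For illustration, a hedged boto3 sketch of such a termination lifecycle hook; the hook and group names are placeholder assumptions:

```python
# Sketch: hold terminating instances in a wait state so a data-copy job
# can run before scale-in completes.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.put_lifecycle_hook(
    LifecycleHookName="copy-data-before-terminate",   # placeholder name
    AutoScalingGroupName="db-tier-asg",               # placeholder group
    LifecycleTransition="autoscaling:EC2_INSTANCE_TERMINATING",
    HeartbeatTimeout=3600,      # seconds in Terminating:Wait (default is 1 hour)
    DefaultResult="CONTINUE",   # proceed with termination if no response arrives
)
```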

*You currently have the following architecture in AWS:* *a. A couple of EC2 instances located in us-west-2a* *b. The EC2 Instances are launched via an Auto Scaling group.* *c. The EC2 Instances sit behind a Classic ELB.* *Which of the following additional steps should be taken to ensure the above architecture conforms to a well-architected framework?* * Convert the Classic ELB to an Application ELB. * Add an additional Auto Scaling Group. * Add additional EC2 Instances to us-west-2a. * Add or spread existing instances across multiple Availability Zones.

*Add or spread existing instances across multiple Availability Zones.* Balancing resources across Availability Zones is a best practice for well-architected applications, as this greatly increases aggregate system availability. Auto Scaling automatically balances EC2 instances across zones when you configure multiple zones in your Auto Scaling group settings. Auto Scaling always launches new instances such that they are balanced between zones as evenly as possible across the entire fleet.

*You have the following architecture deployed in AWS:* *a) A set of EC2 Instances which sit behind an ELB* *b) A database hosted in AWS RDS* *Of late, the performance of the database has been slacking due to a high number of read requests. Which of the following can be added to the architecture to alleviate the performance issue?* (Choose 2) * Add a read replica to the primary database to offload read traffic. * Use ElastiCache in front of the database. * Use AWS CloudFront in front of the database. * Use DynamoDB to offload all the reads. Populate the common items in a separate table.

*Add a read replica to the primary database to offload read traffic.* *Use ElastiCache in front of the database.* AWS says "Amazon RDS Read Replicas provide enhanced performance and durability for database (DB) instances. *This feature makes it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads.* You can create one or more replicas of a given source DB Instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput." Amazon ElastiCache is an in-memory cache which can be used to cache common read requests. ----------------------------------- Option C is incorrect because CloudFront is a valuable component for scaling a website, especially for geo-location workloads and queries, but is more advanced than this architecture requires. Option D is incorrect; it would add latency and require additional application changes as well.
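
A minimal boto3 sketch of adding a read replica; the DB identifiers are placeholder assumptions:

```python
# Sketch: add a read replica so read-only queries can be sent to its endpoint.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_instance_read_replica(
    DBInstanceIdentifier="app-db-replica-1",        # placeholder replica name
    SourceDBInstanceIdentifier="app-db-primary",    # placeholder source name
)
```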

*You're an architect for the company. Your IT admin staff needs access to newly created EC2 Instances for administrative purposes. Which of the following needs to be done to ensure that the IT admin staff can successfully connect via port 22 to the EC2 Instances?* * Adjust Security Group to permit egress traffic over TCP port 443 from your IP. * Configure the IAM role to permit changes to security group settings. * Modify the instance security group to allow ingress of ICMP packets from your IP. * Adjust the instance's Security Group to permit ingress traffic over port 22. * Apply the most recently released Operating System security patches.

*Adjust the instance's Security Group to permit ingress traffic over port 22.* A security group acts as a virtual firewall that controls the traffic for one or more instances. When you launch an instance, you associate one or more security groups with the instance. You add rules to each security group that allow traffic to or from its associated instances. For connecting via SSH to EC2, you need to ensure that port 22 is open on the security group for the EC2 instance. ----------------------------------- Option A is wrong because port 443 is for HTTPS, not for SSH. Option B is wrong because an IAM role is not pertinent to security groups. Option C is wrong because it is relevant to ICMP, not SSH. Option E is wrong because it does not matter what patches are on the system.
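
For illustration, a minimal boto3 sketch of the security group rule; the group ID and the admin CIDR are placeholder assumptions (in practice, restrict the source to the admin IP range rather than 0.0.0.0/0):

```python
# Sketch: open SSH (TCP 22) to the IT admin staff's address range.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",   # placeholder security group ID
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "IT admin IPs"}],
    }],
)
```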

*What level of access does the "root" account have?* * No Access * Read Only Access * Power User Access * Administrator Access

*Administrator Access*

*In AWS Route 53, which of the following are true?* (Choose two.) * Alias Records provide a Route 53-specific extension to DNS functionality. * R53 Alias Records allow fast response to AWS initiated environmental changes. * Route53 allows you to create a CNAME at the top node of a DNS namespace (zone apex) * A CNAME record assigns an Alias name to an IP address. * A CNAME record assigns an Alias name to a Canonical Name. * Alias Records can point at any resource with a Canonical Name.

*Alias Records provide a Route 53-specific extension to DNS functionality.* *A CNAME record assigns an Alias name to a Canonical Name.* These are normal DNS capabilities as defined in the RFCs. By design, CNAMEs are not intended to point at IP addresses.

*If you encrypt a database running in RDS, what objects are going to be encrypted?* * The entire database * The database backups and snapshot * The database log files * All of the above

*All of the above* When you encrypt a database, everything gets encrypted, including the database, backups, logs, read replicas, snapshots, and so on.

*You have a web server and an app server running. You often reboot your app server for maintenance activities. Every time you reboot the app server, you need to update the connect string for the web server since the IP address of the app server changes. How do you fix this issue?* * Allocate an IPv6 IP address to the app server * Allocate an Elastic Network Interface to the app server * Allocate an elastic IP address to the app server * Run a script to change the connection

*Allocate an elastic IP address to the app server* Allocating an IPv6 IP address won't be of any use because whenever the server comes back, it is going to get assigned another new IPv6 IP address. Also, if your VPC doesn't support IPv6 and if you did not select the IPv6 option while creating the instance, you may not be able to allocate one. The Elastic Network Interface helps you add multiple network interfaces but won't get you a static IP address. You can run a script to change the connection, but unfortunately you have to run it every time you are done with any maintenance activities. You can even automate the running of the script, but why add so much complexity when you can solve the problem simply by allocating an EIP?

*You work as an architect for a company. An application is going to be deployed on a set of EC2 instances in a VPC. The Instances will be hosting a web application. You need to design the security group to ensure that users have the ability to connect from the Internet via HTTPS. Which of the following needs to be configured for the security group?* * Allow Inbound access on port 443 for 0.0.0.0/0 * Allow Outbound access on port 443 for 0.0.0.0/0 * Allow Inbound access on port 80 for 0.0.0.0/0 * Allow Outbound access on port 80 for 0.0.0.0/0

*Allow Inbound access on port 443 for 0.0.0.0/0* A *security group* acts as a virtual firewall for your instance to control inbound and outbound traffic. When you launch an instance in a VPC, you can assign up to five security groups to the instance. Security groups act at the instance level, not the subnet level. Therefore, each instance in a subnet in your VPC could be assigned to a different set of security groups. ----------------------------------- Option B is incorrect: since security groups are stateful, you don't need to define a rule for outbound traffic. Options C and D are incorrect: since you only need to ensure access for HTTPS, you should not configure rules for port 80.

*You have created a VPC with a CIDR block of 200.0.0.0/16. You launched an EC2 instance in the public subnet, and you are hosting your web site from that EC2. You have already configured the security groups correctly. What do you need to do from network ACLs so that the web site is accessible from your home network of 192.168.1.0/24?* * Allow inbound traffic from source 192.168.1.0/24 on port 443. * Allow inbound traffic from 192.168.1.0/24 on port 80 and outbound traffic to destination 192.168.1.0/24 on an ephemeral port. * Allow inbound traffic from 192.168.1.0/24 on port 443 and outbound traffic to destination 192.168.1.0/24 on 443. * Allow inbound traffic from 192.168.1.0/24 on port 80.

*Allow inbound traffic from 192.168.1.0/24 on port 80 and outbound traffic to destination 192.168.1.0/24 on an ephemeral port.* You need to allow both inbound and outbound traffic from and to your network. Since you need to access the web site from your home, you need to provide access to port 80. The return traffic to your home network uses an ephemeral port chosen by the client; thus, you need to allow outbound traffic on ephemeral ports. ----------------------------------- A and D are incorrect because they do not have an option for outbound traffic. C is incorrect because the outbound traffic is restricted to port 443.
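
A minimal boto3 sketch of the two stateless NACL rules; the NACL ID is a placeholder assumption, and protocol "6" is TCP:

```python
# Sketch: NACLs are stateless, so inbound HTTP and outbound ephemeral-port
# return traffic must both be allowed explicitly.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_network_acl_entry(            # inbound: home network -> port 80
    NetworkAclId="acl-0123456789abcdef0",
    RuleNumber=100, Protocol="6", RuleAction="allow", Egress=False,
    CidrBlock="192.168.1.0/24", PortRange={"From": 80, "To": 80},
)

ec2.create_network_acl_entry(            # outbound: replies on ephemeral ports
    NetworkAclId="acl-0123456789abcdef0",
    RuleNumber=100, Protocol="6", RuleAction="allow", Egress=True,
    CidrBlock="192.168.1.0/24", PortRange={"From": 1024, "To": 65535},
)
```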

*You have created a VPC using the VPC wizard with a CIDR block of 100.0.0.0/16. You selected a private subnet and a VPN connection using the VPC wizard and launched an EC2 instance in the private subnet. Now you need to connect to the EC2 instance via SSH. What do you need to connect to the EC2 instance?* * Allow inbound traffic on port 22 on your network. * Allow inbound traffic on ports 80 and 22 to the private subnet. * Connect to the instance on a private subnet using NAT instance. * Create a public subnet and from there connect to the EC2 instance.

*Allow inbound traffic on port 22 on your network.* SSH runs on port 22; therefore, you need to allow inbound access on port 22. ----------------------------------- Since you already created a VPN while creating the VPC, the VPC is already connected with your network. Therefore, you can reach the private subnet directly from your network.

*A company is migrating an on-premises 5 TB MySQL database to AWS and expects its database size to increase steadily. Which Amazon RDS engine meets these requirements?* * MySQL * Microsoft SQL Server * Oracle * Amazon Aurora

*Amazon Aurora* Amazon Aurora (Aurora) is a fully managed, MySQL- and PostgreSQL-compatible, relational database engine. It combines the speed and reliability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. It delivers up to five times the throughput of MySQL and up to three times the throughput of PostgreSQL without requiring changes to most of your existing applications. All Aurora Replicas return the same data for query results with minimal replica lag, usually much less than 100 milliseconds after the primary instance has written an update. *Note:* On a MySQL DB instance, avoid letting tables in your database grow too large. Provisioned storage limits restrict the maximum size of a MySQL table file to 16 TB. However, based on database usage, your Amazon Aurora storage will automatically grow, from the minimum of 10 GB up to 64 TB, in 10 GB increments, with no impact on database performance.

*A company is migrating an on-premises 10 TB MySQL database. With a business requirement that the replica lag be under 100 milliseconds, the company expects this database to quadruple in size. Which Amazon RDS engine meets the above requirements?* * MySQL * Microsoft SQL Server * Oracle * Amazon Aurora

*Amazon Aurora* Amazon Aurora (Aurora) is a fully managed, MySQL- and PostgreSQL-compatible, relational database engine. It combines the speed and reliability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. It delivers up to five times the throughput of MySQL and up to three times the throughput of PostgreSQL without requiring changes to most of your existing applications. All Aurora Replicas return the same data for query results with minimal replica lag, usually much less than 100 milliseconds after the primary instance has written an update. The company expects the database to quadruple in size, and the business requirement is that replica lag must be kept under 100 milliseconds. An Aurora cluster can grow up to 64 TB in size, and replica lag is usually less than 100 milliseconds after the primary instance has written an update.

*You are planning to migrate your on-premises MySQL database to AWS. The size of the database is 12TB. You are expecting that the database size will become three times that by the end of the year. Your business is also looking for a replica lag of less than 120 milliseconds. Which Amazon RDS engine meets these requirements?* * MySQL * Oracle * Amazon Aurora * Microsoft SQL Server

*Amazon Aurora* Amazon Aurora offers the least replication lag since six copies of the data are mirrored across three AZs. Moreover, Aurora has the largest size limit (64TB) compared to any other RDS engine. ------------------------- MySQL won't be able to support the 36TB database size in RDS. Oracle and SQL Server don't offer read replicas.

*You are planning to run a mission-critical online order-processing system on AWS, and to run that application, you need a database. The database must be highly available and high-performing, and you can't lose any data. Which database meets these criteria?* * Use an Oracle database hosted in EC2 * Use Amazon Aurora * Use Redshift * Use RDS MySQL

*Amazon Aurora* Amazon Aurora stores six copies of the data across three AZs. It provides five times more performance than RDS MySQL. ------------------------- If you host an Oracle database on EC2 servers, it will be much more expensive compared to Amazon Aurora, and you need to manage it manually. Amazon Redshift is a solution for a data warehouse.

*An application requires a highly available relational database with an initial storage capacity of 8 TB. This database will grow by 8GB every day. To support the expected traffic, at least eight read replicas will be required to handle the database reads. Which of the below options meets these requirements?* * DynamoDB * Amazon S3 * Amazon Aurora * Amazon Redshift

*Amazon Aurora* Aurora Replicas are independent endpoints in an Aurora DB cluster, best used for scaling read operations and increasing availability. Up to 15 Aurora Replicas can be distributed across the Availability Zones that a DB cluster spans within an AWS Region. The DB cluster volume is made up of multiple copies of the data for the DB cluster. However, the data in the cluster volume is represented as a single, logical volume to the primary instance and to Aurora Replicas in the DB cluster. As a result, all Aurora Replicas return the same data for query results with minimal replica lag, usually much less than 100 milliseconds after the primary instance has written an update. Replica lag varies depending on the rate of database change. That is, during periods where a large amount of write operations occur for the database, you might see an increase in replica lag. Aurora Replicas work well for read scaling because they are fully dedicated to read operations on your cluster volume. Write operations are managed by the primary instance. Because the cluster volume is shared among all DB instances in your DB cluster, minimal additional work is required to replicate a copy of the data for each Aurora Replica. To increase availability, you can use Aurora Replicas as failover targets. That is, if the primary instance fails, an Aurora Replica is promoted to the primary instance. There is a brief interruption during which read and write requests made to the primary instance fail with an exception, and the Aurora Replicas are rebooted. If your Aurora DB cluster doesn't include any Aurora Replicas, then your DB cluster will be unavailable for the duration it takes your DB instance to recover from the failure event. However, promoting an Aurora Replica is much faster than recreating the primary instance. For high-availability scenarios, we recommend that you create one or more Aurora Replicas. These should be the same DB instance class as the primary instance and in different Availability Zones for your Aurora DB cluster. *Note:* You can't create an encrypted Aurora Replica for an unencrypted Aurora DB cluster. You can't create an unencrypted Aurora Replica for an encrypted Aurora DB cluster. For details on how to create an Aurora Replica, see Adding Aurora Replicas to a DB Cluster. *Replication with Aurora MySQL* In addition to Aurora Replicas, you have the following options for replication with Aurora MySQL: Two Aurora MySQL DB clusters in different AWS Regions, by creating an Aurora Read Replica of an Aurora MySQL DB cluster in a different AWS Region. Two Aurora MySQL DB clusters in the same region, by using MySQL binary log (binlog) replication. An Amazon RDS MySQL DB instance as the master and an Aurora MySQL DB cluster, by creating an Aurora Read Replica of an Amazon RDS MySQL DB instance. Typically, this approach is used for migration to Aurora MySQL, rather than for ongoing replication.

*An application needs to have a Data store hosted in AWS. The following requirements are in place for the Data store: a) An initial storage capacity of 8 TB b) The ability to accommodate a database growth of 8 GB per day c) The ability to have 4 Read Replicas Which of the following Data stores would you choose for these requirements?* * DynamoDB * Amazon S3 * Amazon Aurora * SQL Server

*Amazon Aurora* Aurora can have a storage limit of 64TB and can easily accommodate the initial 8TB plus database growth of 8GB/day for a period of nearly 20+ years. It can have up to 15 Aurora Replicas that can be distributed across the Availability Zones that a DB cluster spans within an AWS Region. Aurora Replicas work well for read scaling because they are fully dedicated to read operations on your cluster volume. Write operations are managed by the primary instance. Because the cluster volume is shared among all DB instances in your DB cluster, no additional work is required to replicate a copy of the data for each Aurora Replica. *Note:* Our DB choice needs to fulfill 3 criteria. 1. Initial storage capacity of 8 TB 2. Daily DB growth of 8GB/day 3. Need 4 Read Replicas DynamoDB, alongside DynamoDB Accelerator *(DAX)*, can support up to 9 read replicas in its primary cluster. However, we have to choose the most suitable option from those listed in the question. We also have Aurora listed in the options, which is fully dedicated to read operations in the cluster. *NOTE:* Yes, the first line of the question does not mention anything about the database, but the requirements mention it, and you were also asked about read replicas. Also, in the real exam, Amazon asks these types of questions to check your understanding under stress, hence we try to replicate them for you to be prepared for the exam. *DynamoDB also fulfills all 3 criteria mentioned above. But when we think about the "Read Replicas", Aurora is fully dedicated to read operations in the cluster.* For this question, we have to choose only one option, so Aurora is the best option here.

*You are responsible for deploying a critical application to AWS. It is required to ensure that the controls for this application meet PCI compliance. Also, there is a need to monitor web application logs to identify activity. Which of the following services can be used to fulfill this requirement?* (Choose 2) * Amazon CloudWatch Logs * Amazon VPC Flow Logs * Amazon AWS Config * Amazon CloudTrail

*Amazon CloudWatch Logs* *Amazon CloudTrail* AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting. You can use Amazon CloudWatch Logs to monitor, store, and access your log files from Amazon Elastic Compute Cloud (Amazon EC2) instances, AWS CloudTrail, Amazon Route 53, and other sources. You can then retrieve the associated log data from CloudWatch Logs.

*There is an urgent requirement to monitor some database metrics for a database hosted on AWS and send notifications. Which AWS services can accomplish this?* (Choose 2) * Amazon Simple Email Service * Amazon CloudWatch * Amazon Simple Queue Service * Amazon Route53 * Amazon Simple Notification Service

*Amazon CloudWatch* *Amazon Simple Notification Service* Amazon CloudWatch will be used to monitor the IOPS metrics from the RDS Instance and Amazon Simple Notification Service will be used to send the notification if any alarm is triggered.
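
For illustration, a hedged boto3 sketch of an alarm on an RDS metric that notifies an SNS topic; the DB identifier, threshold, and topic ARN are placeholder assumptions:

```python
# Sketch: CloudWatch alarm on an RDS metric that publishes to an SNS topic.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="rds-high-read-iops",
    Namespace="AWS/RDS",
    MetricName="ReadIOPS",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "app-db-primary"}],
    Statistic="Average",
    Period=300,                                   # evaluate 5-minute averages
    EvaluationPeriods=2,
    Threshold=1000.0,                             # placeholder threshold
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:db-alerts"],  # placeholder
)
```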

*I want to store JSON objects. Which database should I choose?* * Amazon Aurora for MySQL * Oracle hosted on EC2 * Amazon Aurora for PostgreSQL * Amazon DynamoDB

*Amazon DynamoDB* JSON objects are best stored in a NoSQL database such as DynamoDB. Amazon Aurora for MySQL, Amazon Aurora for PostgreSQL, and Oracle are relational databases.
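
A minimal boto3 sketch of storing a nested JSON document in DynamoDB, assuming a hypothetical table named Documents with partition key doc_id:

```python
# Sketch: a nested JSON document maps directly onto a DynamoDB item.
import boto3

table = boto3.resource("dynamodb", region_name="us-east-1").Table("Documents")

table.put_item(Item={
    "doc_id": "order-1001",
    "customer": {"name": "Alice", "tier": "gold"},       # nested map
    "items": [{"sku": "A1", "qty": 2}, {"sku": "B7", "qty": 1}],  # list of maps
})
```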

*An application with a 150 GB relational database runs on an EC2 Instance. This application will be used frequently, with a high volume of database read and write requests. What is the most cost-effective storage type for this application?* * Amazon EBS Provisioned IOPS SSD * Amazon EBS Throughput Optimized HDD * Amazon EBS General Purpose SSD * Amazon EFS

*Amazon EBS Provisioned IOPS SSD* The question is focusing on the most cost-effective storage option for the application. Provisioned IOPS (SSD) volumes are used for applications that require high input/output operations per second and are mainly used for large databases such as MongoDB, Cassandra, Microsoft SQL Server, MySQL, PostgreSQL, and Oracle. Throughput Optimized HDD, although cheaper than Provisioned IOPS, is designed for throughput-intensive workloads such as big data and log processing, typically in data warehouse scenarios. Provisioned IOPS SSD (io1) volumes are designed to meet the needs of I/O-intensive workloads, particularly database workloads that are sensitive to storage performance and consistency. *Note:* As a Solutions Architect, we need to understand the nature of the application and its requirements. The question says that "An application with a 150 GB relational database runs on an EC2 Instance. This application will be used frequently with a lot of database reads and writes." Since it requires high reads and writes, in order to satisfy the application's needs, we need to go with Provisioned IOPS. Because the application will be frequently used for heavy read and write operations, General Purpose SSD won't be able to handle that workload. Hence Option A is the right choice.
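
For illustration, a minimal boto3 sketch of creating such an io1 volume; the size and IOPS figures are placeholder assumptions:

```python
# Sketch: Provisioned IOPS (io1) volume for a read/write-heavy database.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=200,               # GiB, with headroom over the 150 GB database
    VolumeType="io1",
    Iops=8000,              # provisioned IOPS for consistent performance
)
print(volume["VolumeId"])
```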

*You have an application for which you are thinking of using EC2 to host the Oracle database. The size of the database is 100GB. Since the application needs operating system access in the database tier, you can't use RDS. The application will be used infrequently, though sometimes it will be used in the morning and during the evening. What is the most cost-effective way to design the storage layer?* * Amazon S3 * Amazon EBS General Purpose SSD * Amazon EBS Provisioned IOPS SSD * Amazon EBS Throughput Optimized HDD

*Amazon EBS Throughput Optimized HDD* Since the application will be used infrequently and the goal is to select a cost-optimized storage option, Throughput Optimized HDD is the best choice. ----------------------------------- A, B and C are incorrect. Amazon S3 is an object store, so it can't be used to host a database. General Purpose SSD and Provisioned IOPS SSD are going to cost a lot more than the Throughput Optimized HDD.

*You want to deploy an application in AWS. The Application comes with an Oracle database that needs to be installed on a separate server. The application requires that certain files be installed on the database server. You are also looking at faster performance for the database. What solution should you choose to deploy the database?* * Amazon RDS for Oracle * Amazon EC2 with magnetic EBS volumes * Amazon EC2 with SSD-based EBS volumes * Migrate Oracle Database to DynamoDB

*Amazon EC2 with SSD-based EBS volumes* RDS does not provide operating system access; therefore, with RDS, you won't be able to install application-specific files on the RDS instance. It has to be EC2. Since you are looking for faster performance and SSD provides better performance than magnetic volumes, you need to choose EC2 with SSD. ------------------------- Since you need the database to be fast, a magnetic-based EBS volume won't provide the performance you are looking for. Oracle Database is a relational database, whereas DynamoDB is a NoSQL database, so migrating from one to the other is going to be a Herculean task. In some cases, it may not even be possible, and you may have to rewrite the entire application.

*In which of the following services can you have root-level access to the operating system?* (Choose two.) * Amazon EC2 * Amazon RDS * Amazon EMR * Amazon DynamoDB

*Amazon EC2* *Amazon EMR* Only Amazon EC2 and EMR provide root-level access. ----------------------------------- Amazon RDS and Amazon DynamoDB are managed services where you don't have root-level access.

*You maintain an application which needs to store files in a file system which has the ability to be mounted on various Linux EC2 Instances. Which of the following would be an ideal storage solution?* * Amazon EBS * Amazon S3 * Amazon EC2 Instance store * Amazon EFS

*Amazon EFS* Amazon EFS provides scalable file storage for use with Amazon EC2. You can create an EFS file system and configure your instances to mount the file system. You can use an EFS file system as a common data source for workloads and applications running on multiple instances.

*You are running a highly available application in AWS. The business needs a very performant shared file system that can be shared across EC2 servers (web servers). Which AWS service can solve this problem?* * Amazon EFS * Amazon EBS * Amazon EC2 instance store * Amazon S3

*Amazon EFS* Only EFS can be mounted across several EC2 instances at the same time. It provides the shared file system capability. ------------------------- EBS can be mounted to one EC2 instance at any point in time; it can't be mounted to multiple EC2 instances. An EC2 instance store is the local storage within the EC2 server, also known as ephemeral storage. This can't be mounted to any other EC2 server. Amazon S3 is an object store, not a file system, and can't be mounted to an EC2 server as a file system.

*Which AWS service allows you to manage Docker containers on a cluster of Amazon EC2 servers?* * Amazon Elastic Container Service * Amazon Elastic Docker Service * Amazon Elastic Container Service for Kubernetes * Amazon Elastic Beanstalk

*Amazon Elastic Container Service* Amazon Elastic Container Service (ECS) is a container management service that allows you to manage Docker containers on a cluster of Amazon EC2 servers. ----------------------------------- There is no service called Amazon Elastic Docker Service. Amazon Elastic Container Service for Kubernetes is used for deploying Kubernetes. Amazon Elastic Beanstalk is used for deploying web applications.

*You are deploying an application to track the GPS coordinates of delivery trucks in the United States. Coordinates are transmitted from each delivery truck once every three seconds. You need to design an architecture that will enable real-time processing of these coordinates from multiple consumers. Which service should you use to implement data ingestion?* * Amazon Kinesis * AWS Data Pipeline * Amazon AppStream * Amazon Simple Queue Service

*Amazon Kinesis* Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. Amazon Kinesis offers key capabilities to cost-effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application. With Amazon Kinesis, you can ingest real-time data such as video, audio, application logs, website clickstreams, and IoT telemetry data for machine learning, analytics, and other applications. Amazon Kinesis enables you to process and analyze data as it arrives and respond instantly instead of having to wait until all your data is collected before processing can begin.
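
To make this concrete, here is a minimal boto3 sketch of ingesting GPS readings into a Kinesis data stream. The stream name `truck-coordinates`, the region, and the payload fields are assumptions for illustration, not part of the question.

```python
import json
import boto3

# Assumes a Kinesis data stream named "truck-coordinates" already exists
# and that AWS credentials are available in the environment.
kinesis = boto3.client("kinesis", region_name="us-east-1")

def send_coordinate(truck_id: str, lat: float, lon: float) -> None:
    """Push one GPS reading into the stream for real-time consumers."""
    kinesis.put_record(
        StreamName="truck-coordinates",
        Data=json.dumps({"truck_id": truck_id, "lat": lat, "lon": lon}).encode(),
        # Partitioning by truck keeps each truck's readings ordered per shard.
        PartitionKey=truck_id,
    )

send_coordinate("truck-042", 38.8977, -77.0365)
```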

*Which EC2 offering provides the information needed to launch an instance?* * Spot Fleet * Amazon Machine Image * CloudWatch * Bootstrap Action

*Amazon Machine Image* An AMI provides all the information required to launch an instance. You need to specify an AMI when launching an instance, and you can launch as many instances as you need from the AMI. ------------------------- Using Spot Fleet, you can launch multiple EC2 Spot Instances, but internally they also need an AMI. CloudWatch can trigger the launch of EC2 instances for you, but that launch also needs an AMI. A Bootstrap Action is used while launching the instance.

*You have an RDS database that has moderate I/O requirements. Which storage medium would be best to accommodate these requirements?* * Amazon RDS Magnetic Storage * Amazon RDS Cold Storage * Amazon RDS Elastic Storage * Amazon RDS General Purpose (SSD) Storage

*Amazon RDS General Purpose (SSD) Storage* Amazon RDS General Purpose (SSD) Storage would be the most suitable. It offers cost-effective storage that is ideal for a broad range of workloads.

*If you want to run your relational database in the AWS cloud, which service would you choose?* * Amazon DynamoDB * Amazon Redshift * Amazon RDS * Amazon ElastiCache

*Amazon RDS* ----------------------------------- Amazon DynamoDB is a NoSQL offering, Amazon Redshift is a data warehouse offering, and Amazon ElastiCache is used to deploy Redis or Memcached protocol-compliant server nodes in the cloud.

*You are designing a highly scalable and available web application, and you are using EC2 instances along with Auto Scaling to host the web application. You want to store the session state data in such a way that it should not impact Auto Scaling. What service could you use to store the session state data?* (Choose three.) * Amazon EC2 * Amazon RDS * Amazon ElastiCache * Amazon DynamoDB

*Amazon RDS* *Amazon ElastiCache* *Amazon DynamoDB* You can choose an Amazon RDS instance, Amazon DynamoDB, or Amazon ElastiCache to store the session state data. ------------------------- If you store the session information on EC2 servers, then you need to wait for all the users to log off before shutting down an instance. In that case, Auto Scaling won't be able to terminate the instance even if only one user is connected. If you use RDS, DynamoDB, or ElastiCache to store the session state information instead, EC2 instances can be started and stopped by Auto Scaling per the scaling policies.

*A company is generating large datasets with millions of rows to be summarized column-wise. To build daily reports from these data sets, Business Intelligence tools would be used. Which storage service meets these requirements?* * Amazon Redshift * Amazon RDS * ElastiCache * DynamoDB

*Amazon Redshift* Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud. You can start with just a few hundred gigabytes of data and scale to a petabyte or more. This enables you to use your data to acquire new insights for your business and customers. Columnar storage for database tables is an important factor in optimizing analytic query performance because it drastically reduces the overall disk I/O requirements and reduces the amount of data you need to load from disk. Amazon Redshift uses a block size of 1 MB, which is more efficient and further reduces the number of I/O requests needed to perform any database loading or other operations that are part of query execution.

*I have to run my analytics, and to optimize I want to store all the data in columnar format. Which database serves my need?* * Amazon Aurora for MySQL * Amazon Redshift * Amazon DynamoDB * Amazon Aurora for Postgres

*Amazon Redshift* Amazon Redshift stores all the data in columnar format. Amazon Aurora for MySQL and PostgreSQL store the database in row format, and Amazon DynamoDB is a NoSQL database.

*A company has an application that stores images and thumbnails on S3. The thumbnails need to be available for download immediately. Additionally, both the images and thumbnail images are not accessed frequently. Which is the most cost-efficient storage option that meets the above-mentioned requirements?* * Amazon Glacier with Expedited Retrievals. * Amazon S3 Standard Infrequent Access. * Amazon EFS. * Amazon S3 Standard.

*Amazon S3 Standard Infrequent Access.* Amazon S3 Standard-Infrequent Access is perfect if you want to store data that is not frequently accessed but must be available immediately when requested. It is more cost-effective than Option D (Amazon S3 Standard). If you choose Amazon Glacier with Expedited Retrievals, you defeat the whole purpose of the requirement because of its increased cost.
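
For illustration, here is a minimal boto3 sketch of uploading an object directly into the Standard-IA storage class; the bucket name, key, and local file are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Upload a thumbnail directly into the Standard-IA storage class; the object
# is still available for immediate download, but storage costs less than
# S3 Standard. Bucket and key names here are only placeholders.
with open("image-001-thumb.jpg", "rb") as thumb:
    s3.put_object(
        Bucket="my-media-bucket",
        Key="thumbnails/image-001.jpg",
        Body=thumb,
        StorageClass="STANDARD_IA",
    )
```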

*Which two features of Amazon S3 and Glacier provide unmatched durability, availability, and scalability?* (Choose two.) * Amazon S3 Standard, Amazon S3 Standard IA, Amazon S3 One Zone IA, and Glacier automatically store objects across a minimum of three AZs. * Amazon S3 Standard, Amazon S3 Standard IA, and Glacier automatically store objects across a minimum of three AZs. * Amazon S3 automatically replicates the content of the bucket to a second region for HA. * Amazon S3's cross-region replication automatically replicates every object uploaded to Amazon S3 to a bucket in a different region.

*Amazon S3 Standard, Amazon S3 Standard IA, and Glacier automatically store objects across a minimum of three AZs.* *Amazon S3's cross-region replication automatically replicates every object uploaded to Amazon S3 to a bucket in a different region.* Amazon S3 Standard, Amazon S3 Standard IA, and Glacier automatically store objects across a minimum of three AZs, and by using the cross-region replication feature, you can automatically replicate any object uploaded to a bucket to a different region. ----------------------------------- A and C are incorrect. S3 One Zone IA replicates the data within one AZ but not outside that AZ. By default, data never leaves a region. This is a fundamental principle with any AWS service. The customer needs to choose when and how to move the data outside a region.

*A company needs to have its object-based data stored on AWS. The initial size of the data would be around 500 GB, with overall growth expected to reach 80 TB over the next couple of months. The solution must also be durable. Which of the following would be an ideal storage option for such a requirement?* * DynamoDB * Amazon S3 * Amazon Aurora * Amazon Redshift

*Amazon S3* Amazon S3 is object storage built to store and retrieve any amount of data from anywhere - web sites and mobile apps, corporate applications, and data from IoT sensors or devices. It is designed to deliver 99.999999999% durability, and stores data for millions of applications used by market leaders in every industry. S3 provides comprehensive security and compliance capabilities that meet even the most stringent regulatory requirements. It gives customers flexibility in the way they manage data for cost optimization, access control, and compliance. S3 provides query-in-place functionality, allowing you to run powerful analytics directly on your data at rest in S3.

*You are designing a storage solution for a publishing company. The publishing company stores lots of documents in Microsoft Word and in PDF format. The solution you are designing should be able to provide document-sharing capabilities so that anyone who is authorized can access the file and versioning capability so that the application can maintain several versions of the same file at any point in time. Which AWS service meets this requirement?* * Amazon S3 * Amazon EFS * Amazon EBS * Amazon RDS

*Amazon S3* Amazon S3 provides sharing as well as version control capabilities. ------------------------- Amazon EFS and EBS are not correct answers because if you decide to store the documents in EFS or EBS, your cost will go up, and you need to write an application on top that provides the versioning and sharing capabilities. Though EFS is a shared file system, in this case you are looking for the sharing capability in the application and not in the file system. Amazon RDS is a relational database offering, and storing documents in a relational database is not the right solution.
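
As a sketch of how the versioning side of this works in practice, the following boto3 snippet enables versioning on a bucket and uploads a new version of a document. The bucket and key names are assumptions for illustration:

```python
import boto3

s3 = boto3.client("s3")

# Turn on versioning for the document bucket (bucket name is a placeholder).
s3.put_bucket_versioning(
    Bucket="publisher-documents",
    VersioningConfiguration={"Status": "Enabled"},
)

# Every subsequent upload of the same key now creates a new version instead
# of overwriting the previous one.
resp = s3.put_object(Bucket="publisher-documents", Key="manuscript.docx", Body=b"v2")
print("New version:", resp["VersionId"])
```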

*You are building a custom application to process the data for your business needs. You have chosen Kinesis Data Streams to ingest the data. What are the various destinations where your data can go?* (Choose 3) * Amazon S3 * Amazon Elastic File System * Amazon EMR * Amazon Redshift * Amazon Glacier

*Amazon S3* *Amazon EMR* *Amazon Redshift* The destinations for Kinesis Data Streams are services such as Amazon Simple Storage Service (Amazon S3), Amazon Redshift, Amazon EMR, and AWS Lambda. ------------------------- Amazon EFS is not a Kinesis Data Streams destination, and you can't send files to Amazon Glacier directly.

*You want to be notified for any failure happening in the cloud. Which service would you leverage for receiving the notifications?* * Amazon SNS * Amazon SQS * Amazon CloudWatch * AWS Config

*Amazon SNS* ----------------------------------- Amazon SQS is the queue service; Amazon CloudWatch is used to monitor cloud resources; and AWS Config is used to assess, audit, and evaluate the configurations of your AWS resources.

*You are developing a small application via which users can register for events. As part of the registration process, you want to send a one-time text message confirming registration. Which AWS service should you be using to do this?* * Amazon SNS * Amazon SQS * AWS STS * API Gateway

*Amazon SNS* You can use Amazon SNS to send text messages, or SMS messages, to SMS-enabled devices. You can send a message directly to a phone number. You can also send a message to multiple phone numbers at the same time by subscribing those phone numbers to a topic and sending your message to the topic. ----------------------------------- Amazon SQS is a fully managed message queuing service. The AWS Security Token Service (STS) is a web service that enables you to request temporary limited-privilege credentials for AWS IAM users or for users who you authenticate via federation. API Gateway is a managed service that is used to create APIs.
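
A minimal boto3 sketch of the direct-to-phone-number publish described above; the phone number and message are placeholders:

```python
import boto3

sns = boto3.client("sns", region_name="us-east-1")

# Send a one-time SMS directly to a phone number -- no topic or subscription
# is needed for direct publishing. The number below is a placeholder.
sns.publish(
    PhoneNumber="+15555550100",
    Message="Thanks for registering! See you at the event.",
)
```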

*Your company has an application that takes care of uploading, processing and publishing videos posted by users. The current architecture for this application includes the following:* *a) A set of EC2 Instances to transfer user uploaded videos to S3 buckets.* *b) A set of EC2 worker processes to process and publish the videos.* *c) An Auto Scaling Group for the EC2 worker processes* *Which of the following can be added to the architecture to make it more reliable?* * Amazon SQS * Amazon SNS * Amazon CloudFront * Amazon SES

*Amazon SQS* Amazon SQS is used to decouple systems. It can store requests to process videos to be picked up by the worker processes. Amazon Simple Queue Service (Amazon SQS) offers a reliable, highly scalable hosted queue for storing messages as they travel between applications or microservices. It moves data between distributed application components and helps you decouple these components.
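
To sketch the decoupling pattern, the snippet below shows a producer enqueuing a pointer to an uploaded video and a worker polling, processing, and deleting the message. The queue name and object path are assumptions for illustration:

```python
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="video-processing")["QueueUrl"]

# Producer side: the upload tier enqueues a pointer to the uploaded video.
sqs.send_message(QueueUrl=queue_url, MessageBody="s3://uploads/video-123.mp4")

# Consumer side: a worker polls the queue and deletes the message only after
# successful processing, so a failed worker simply leaves it to be retried.
resp = sqs.receive_message(
    QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10
)
for msg in resp.get("Messages", []):
    print("processing", msg["Body"])  # stand-in for the real transcode step
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```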

*An application is going to be developed using AWS. The application needs a storage layer to store important documents. Which of the following options is incorrect for fulfilling this requirement?* * Amazon S3 * Amazon EBS * Amazon EFS * Amazon Storage Gateway VTL

*Amazon Storage Gateway VTL* A virtual tape library (VTL) is used to take data backups to the cloud. ----------------------------------- *NOTE:* The question asks which of the options is *incorrect* for storing *important* documents in the cloud, and that is Option D. The question is asking about storing documents, not about data archival, so Option D is not suited to the requirement.

*In Amazon RDS, who is responsible for patching the database?* * Customer. * Amazon. * In RDS you don't have to patch the database. * RDS does not come under the shared security model.

*Amazon.* RDS does come under a shared security model. Since it is a managed service, the patching of the database is taken care of by Amazon.

*What is Amazon Glacier?* * A highly secure firewall designed to keep everything out. * A tool that allows you to "freeze" an EBS volume. * An AWS service designed for long term data archival. * It is a tool used to resurrect deleted EC2 snapshots.

*An AWS service designed for long term data archival.*

*You have suggested moving your company's web servers to AWS, but your supervisor is concerned about cost. Which of the following deployments will give you the most scalable and cost-effective solution?* * A hybrid solution that leverages on-premise resources * An EC2 auto-scaling group that will expand and contract with demand * A solution that's built to run 24/7 at 100% capacity, using a fixed number of T2 Micro instances * None of these options

*An EC2 auto-scaling group that will expand and contract with demand.* An Auto Scaling group of EC2 instances will exactly match the demand placed on your servers, allowing you to pay only for the compute capacity you actually need.

*Your company is designing an app that requires it to store data in DynamoDB and has registered the app with identity providers so users can sign in using third parties like Google and Facebook. What must be in place so that the app can obtain temporary credentials to access DynamoDB?* * Multi-factor authentication must be used to access DynamoDB * AWS CloudTrail needs to be enabled to audit usage. * An IAM role allowing the app to have access to DynamoDB * The user must additionally log into the AWS console to gain database access.

*An IAM role allowing the app to have access to DynamoDB* The user will have to assume a role that has permissions to interact with DynamoDB. ----------------------------------- Option A is incorrect; multi-factor authentication is available, but not required. Option B is incorrect; CloudTrail is recommended for auditing but is not required. Option D is incorrect; a second log-in event to the management console is not required.

*Your development team has created a web application that needs to be tested on a VPC. You need to advise the IT admin team on how they should implement the VPC to ensure the application can be accessed from the Internet. Which of the following would be part of the design?* (Choose 3) * An Internet Gateway attached to the VPC * A NAT gateway attached to the VPC * Route table entry added for the Internet gateway * All instances launched with a public IP

*An Internet Gateway attached to the VPC* *Route table entry added for the Internet Gateway* *All instances launched with a public IP* ----------------------------------- Option B is incorrect since this should be used for communication of instances in the private subnet to the Internet.

*Your supervisor asks you to create a decoupled application whose process includes dependencies on EC2 instances and servers located in your company's on-premise data center. Which of the following would you include in the architecture?* * An SQS queue as the messaging component between the Instances and servers * An SNS topic as the messaging component between the Instances and servers * An Elastic Load balancer to distribute requests to your EC2 Instance * Route 53 resource records to route requests based on failure

*An SQS queue as the messaging component between the Instances and servers* Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. SQS eliminates the complexity and overhead associated with managing and operating message-oriented middleware, and empowers developers to focus on differentiating work. Using SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available. ----------------------------------- Option B is incorrect since this is a notification service. Option C is incorrect since there is no mention in the question of adding any fault tolerance. Option D is incorrect since there is no mention in the question of adding any failure detection.

*You are planning on hosting a static website on an EC2 Instance. You need to ensure that the environment is highly available and scalable to meet demand. Which of the below aspects can be used to create a highly available environment?* (Choose 3) * An auto scaling group to recover from EC2 instance failures. * Elastic Load Balancer * An SQS queue * Multiple Availability Zones

*An auto scaling group to recover from EC2 instance failures.* *Elastic Load Balancer* *Multiple Availability Zones* An ELB placed in front of the users helps direct the traffic to the EC2 Instances, which are placed as part of an Auto Scaling Group, and you have multiple subnets mapped to multiple Availability Zones. ----------------------------------- For a static web site, SQS is not required to build such an environment. If you have a system such as an order processing system, which has that sort of queuing of requests, then that could be a candidate for using SQS queues.

*What happens if you delete an IAM role that is associated with a running EC2 instance?* * Any application running on the instance that is using the role will be denied access immediately. * The application continues to use that role until the EC2 server is shut down. * The application will have the access until the session is alive. * The application will continue to have access.

*Any application running on the instance that is using the role will be denied access immediately.* The application will be denied access.

*Your Development team wants to start making use of EC2 Instances to host their Application and Web servers. In the space of automation, they want the Instances to always download the latest version of the Web and Application servers when they are launched. As an architect, what would you recommend for this scenario?* * Ask the Development team to create scripts which can be added to the User Data section when the instance is launched. * Ask the D

*Ask the Development team to create scripts which can be added to the User Data section when the instance is launched.* When you launch an instance in Amazon EC2, you have the option of passing user data to the instance that can be used to perform common automated configuration tasks and even run scripts after the instance starts. You can pass two types of user data to Amazon EC2: shell scripts and cloud-init directives. You can also pass this data into the launch wizard as plain text, as a file (this is useful for launching instances using the command line tools), or as base64-encoded text (for API calls).
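
A hedged sketch of this approach with boto3: the user data is a shell script that runs at first boot and pulls the latest build. The AMI ID, instance type, and S3 release path are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# A shell script passed as user data; it runs once at first boot and pulls
# the latest application build. The S3 path is a placeholder.
user_data = """#!/bin/bash
yum update -y
aws s3 cp s3://my-release-bucket/latest/webserver.tar.gz /opt/
mkdir -p /opt/webserver && tar -xzf /opt/webserver.tar.gz -C /opt/webserver
systemctl start webserver
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,  # boto3 base64-encodes this for the API call
)
```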

*You are deploying an application on Amazon EC2, which must call AWS APIs. What method should you use to securely pass credentials to the application?* * Pass API credentials to the instance using Instance userdata. * Store API credentials as an object in Amazon S3. * Embed the API credentials into your application. * Assign IAM roles to the EC2 Instances.

*Assign IAM roles to the EC2 Instances* You can use roles to delegate access to users, applications, or services that don't normally have access to your AWS resources. It is not a good practice to embed IAM credentials in a production application. A good practice, however, is to use IAM roles.

*You are in the process of deploying an application on an EC2 instance. The application must call AWS APIs. What is the secure way of passing the credentials to the application?* * Use DynamoDB to store the API credentials. * Assign IAM roles to the EC2 instance. * Keep API credentials as an object in Amazon S3. * Pass the API credentials to the instance using the instance user data.

*Assign IAM roles to the EC2 instance.* You need to use IAM roles to pass the credentials to the application in a secure way. ----------------------------------- If you store the API credentials in DynamoDB or S3, how will the EC2 instance connect to DynamoDB or S3 in the first place? That access itself has to come via IAM roles. Passing the API credentials to the instance using the instance user data is not a solution because API credentials may change or you may change EC2 servers.

*Your development team just finished developing an application on AWS. This application is created in .NET and is hosted on an EC2 instance. The application currently accesses a DynamoDB table and is now going to be deployed to production. Which of the following is the ideal and most secure way for the application to access the DynamoDB table?* * Pass API credentials to the instance using instance user data. * Store API credentials as an object in Amazon S3. * Embed the API credentials into your JAR files. * Assign IAM roles to the EC2 instances.

*Assign IAM roles to the EC2 instances.* You can use roles to delegate access to users, applications, or services that don't normally have access to your AWS resources. It is not a best practice to embed IAM credentials in any production application. It is always a good practice to use IAM roles.
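
To see why roles are the clean solution, note that application code written against a role contains no credentials at all. A minimal sketch, assuming an instance profile with DynamoDB permissions is already attached to the instance and using a hypothetical table name:

```python
import boto3

# No access key or secret key appears anywhere in the code. When this runs
# on an EC2 instance with an attached IAM role, boto3 automatically fetches
# temporary credentials from the instance metadata service and rotates them.
dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
table = dynamodb.Table("orders")  # table name is a placeholder

item = table.get_item(Key={"order_id": "1001"}).get("Item")
print(item)
```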

*You want to have a static public IP address for your EC2 instance running in a public subnet. How do you achieve this?* * Use a public IP address. * Attach an EIP to the instance. * Use a private IP address. * Attach an elastic load balancer with the EC2 instance and provide the ELB address.

*Attach an EIP to the instance.* An EIP is going to provide you with a static IP address. ------------------------- Even if you use a public IP address, it is dynamic and is going to change whenever the instance is stopped. You can't use a private IP address since the requirement is a static public IP address; a private IP address is not reachable from the Internet. Even if you put an elastic load balancer in front and shut down the instance, a new public IP address will be issued to the instance when it starts again. You need an EIP.
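
A minimal boto3 sketch of allocating and associating an EIP; the instance ID is a placeholder:

```python
import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP and bind it to the instance. The same public IP
# survives instance stop/start because it belongs to your account, not to
# the instance itself.
allocation = ec2.allocate_address(Domain="vpc")
ec2.associate_address(
    InstanceId="i-0123456789abcdef0",  # placeholder instance ID
    AllocationId=allocation["AllocationId"],
)
print("Static public IP:", allocation["PublicIp"])
```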

*You have an EC2 instance placed inside a subnet. You have created the VPC from scratch and added the EC2 Instance to the subnet. It is required to ensure that this EC2 Instance has complete access to the Internet, since it will be used by users on the Internet. Which of the following options would help accomplish this?* * Launch a NAT Gateway and add routes for 0.0.0.0/0 * Attach a VPC Endpoint and add routes for 0.0.0.0/0 * Attach an Internet Gateway and add routes for 0.0.0.0/0 * Deploy NAT Instances in a public subnet and add routes for 0.0.0.0/0

*Attach an Internet Gateway and add routes for 0.0.0.0/0* An Internet Gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between instances in your VPC and the Internet. It therefore imposes no availability risks or bandwidth constraints on your network traffic.

*You plan on creating a VPC from scratch and launching EC2 Instances in the subnet. What should be done to ensure that the EC2 Instances are accessible from the Internet?* * Attach an Internet Gateway to the VPC and add a route for 0.0.0.0/0 to the Route table. * Attach a NAT Gateway to the VPC and add a route for 0.0.0.0/0 to the Route table. * Attach a NAT Gateway to the VPC and add a route for 0.0.0.0/32 to the Route table. * Attach an Internet Gateway to the VPC and add a route for 0.0.0.0/32 to the Route table.

*Attach an Internet Gateway to the VPC and add a route for 0.0.0.0/0 to the Route table.* When you create a VPC from scratch, there is no default Internet gateway attached, hence you need to create and attach one.

*A company has assigned two web server instances to an Elastic Load Balancer inside a custom VPC. However, the instances are not accessible via the elastic load balancer's URL, which serves the web app data from the EC2 instances. How might you resolve the issue so that your instances are serving the web app data to the public internet?* * Attach an Internet Gateway to the VPC and route it to the subnet. * Add an Elastic IP address to the instance. * Use Amazon Elastic Load Balancer to serve requests to your instances located in the internal subnet. * None of the above.

*Attach an Internet Gateway to the VPC and route it to the subnet.* If the Internet Gateway is not attached to the VPC, which is a prerequisite for the instances to be accessed from the Internet, the instances will not be reachable.

*A data processing application in AWS must pull data from an Internet service. A Solutions Architect is to design a highly available solution to access this data without placing bandwidth constraints on the application traffic. Which solution meets these requirements?* * Launch a NAT gateway and add routes for 0.0.0.0/0 * Attach a VPC endpoint and add routes for 0.0.0.0/0 * Attach an Internet gateway and add routes for 0.0.0.0/0 * Deploy NAT instances in a public subnet and add routes for 0.0.0.0/0

*Attach an Internet gateway and add routes for 0.0.0.0/0* An Internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between instances in your VPC and the Internet. It therefore imposes no availability risks or bandwidth constraints on your network traffic. ----------------------------------- *NOTE:* A NAT gateway is also a highly available architecture and is used to enable instances in a private subnet to connect to the internet or other AWS services while preventing the internet from initiating a connection with those instances, but it can only scale up to 45 Gbps. NAT instances' bandwidth capability depends upon the instance type. VPC endpoints are used to enable private connectivity to services hosted in AWS from within your VPC, without using an Internet Gateway, VPN, Network Address Translation (NAT) devices, or firewall proxies, so they cannot be used to connect to the internet. *NOTE:* A Network Address Translation (NAT) gateway is recommended for instances in a private subnet to connect to the internet or other AWS services. As we don't have any instructions for applications in a private subnet, and the question is talking about the entire application traffic rather than a specific instance (inside a private subnet), NAT can't be the answer to this question.

*You have been tasked to create a public subnet for a VPC. What should you do to make sure the subnet is able to communicate to the Internet?* (Choose two.) * Attach a customer gateway. * In the AWS console, right-click the subnet and then select the Make Public option. * Attach an Internet gateway to the VPC. * Create a route in the route table of the subnet allowing a route out of the Internet gateway.

*Attach an Internet gateway to the VPC.* *Create a route in the route table of the subnet allowing a route out of the Internet gateway.* A subnet becomes public when the VPC has an Internet gateway attached and the subnet's route table has a route to that gateway, so traffic can flow in and out of the subnet. ------------------------- A customer gateway is used for a VPN connection, and there is no concept of right-clicking a subnet to make it public.
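
Both steps together, as a boto3 sketch; the VPC and route table IDs are placeholders for an existing VPC and subnet:

```python
import boto3

ec2 = boto3.client("ec2")

# Step 1: create and attach the Internet gateway.
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId="vpc-0123456789abcdef0")

# Step 2: without this route the gateway is attached but no traffic flows;
# the subnet's route table must send non-local traffic out via the gateway.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId=igw_id,
)
```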

*You have created a custom subnet, but you forgot to add a route for Internet connectivity. As a result, all the web servers running in that subnet don't have any Internet access. How will you make sure all the web servers can access the Internet?* * Attach a virtual private gateway to the subnet for destination 0.0.0.0/0 * Attach an Internet gateway to the subnet for destination 0.0.0.0/0 * Attach an Internet gateway to the security group of the EC2 instance for destination 0.0.0.0/0 * Attach a VPC endpoint to the subnet

*Attach an Internet gateway to the subnet for destination 0.0.0.0/0* You need to attach an Internet gateway and add a route for it so that the subnet can talk with the Internet. ------------------------- A virtual private gateway is used to create a VPN connection. You cannot attach an Internet gateway to an EC2 instance; it has to be at the VPC/subnet level. A VPC endpoint is used so S3 or DynamoDB can communicate with Amazon VPC, bypassing the Internet.

*When you create an Auto Scaling mechanism for a server, which two things are mandatory?* (Choose two.) * Elastic Load Balancing * Auto Scaling group * DNS resolution * Launch configuration

*Auto Scaling group* *Launch configuration* The launch configuration and the Auto Scaling group are mandatory.

*You are serving a multiple-language web site in AWS via CloudFront. The language is specified in an HTTP request, as shown here: http://abc111xyz.cloudfront.net/main.html?language=en http://abc111xyz.cloudfront.net/main.html?language=es http://abc111xyz.cloudfront.net/main.html?language=fr How should you configure CloudFront to deliver cached data in the correct language?* * Serve dynamic content. * Cache an object at the origin. * Forward cookies to the origin. * Base it on the query string parameter.

*Base it on the query string parameter.* In this example, if the main page for your web site is main.html, the three requests above will cause CloudFront to cache main.html three times, once for each value of the language query string parameter. ----------------------------------- If you serve dynamic content, cache objects at the origin, or forward cookies to the origin, then you won't provide the best user experience to your end user.

*An organization hosts a multi-language website on AWS, which is served using CloudFront. Language is specified in the HTTP request as shown below: http://d11111f8.cloudfront.net/main.html?language=de http://d11111f8.cloudfront.net/main.html?language=en http://d11111f8.cloudfront.net/main.html?language=es How should AWS CloudFront be configured to deliver cached data in the correct language?* * Forward cookies to the origin * Based on query string parameters * Cache objects at the origin * Serve dynamic content

*Based on query string parameters* Since the language is specified in the query string parameters, CloudFront should be configured to cache based on them.

*You are about to delete the second snapshot of an EBS volume which had 10 GiB of data at the time the very first snapshot was taken. 6 GiB of that data changed before the second snapshot was created. An additional 2 GiB of data was added before the third and final snapshot was taken. Which of the following statements about this is correct?* * Each EBS volume snapshot is a full backup of the complete data and independent of other snapshots. You can go ahead and delete the second snapshot to save costs. After that, you are charged for only 22 GiB of data for the two remaining snapshots. * After deletion, the total storage required for the two remaining snapshots is 12 GiB; 10 GiB for the first and 2 GiB for the last snapshot. * Snapshots are incremental backups, which means that only the blocks on the device that have changed after your most recent snapshot are saved. Therefore, you can only delete them in reverse chronological order, i.e. starting with the third snapshot and then the second one. * Before deletion, the total storage required for the three snapshots was 18 GiB, of which the second one had 6 GiB of data. After the deletion of that second snapshot, you are still charged for storing 18 GiB of data: 10 GiB from the very first snapshot and 8 GiB (6 + 2) of data from the last snapshot.

*Before deletion, the total storage required for the three snapshots was 18 GiB, of which the second one had 6 GiB of data. After the deletion of that second snapshot, you are still charged for storing 18 GiB of data: 10 GiB from the very first snapshot and 8 GiB (6 + 2) of data from the last snapshot.* When you delete a snapshot, only the data unique to that snapshot is removed. Each snapshot contains or references all of the information needed to restore your data (from the moment when the snapshot was taken) to a new EBS volume.

*What is the range of CIDR blocks that can be used inside a VPC?* * Between /16 and /28 * Between /16 and /30 * Between /14 and /24 * Between /18 and /24

*Between /16 and /28* The range of CIDR blocks in AWS is between /16 and /28. ------------------------- The allowed block size is between a /16 netmask (65,536 IP addresses) and a /28 netmask (16 IP addresses).

*Amazon's EBS volumes are ________.* * Block based storage * Encrypted by default * Object based storage * Not suitable for databases

*Block based storage* Amazon EBS volumes provide block-based storage; they are presented to the instance as raw block devices. They are not encrypted by default, are not object-based storage (that is S3), and are very well suited for databases.

*A new VPC with CIDR range 10.10.0.0/16 has been set up with a public and private subnet; an Internet Gateway and a custom route table have been created, and a route has been added with the 'Destination' as '0.0.0.0/0' and the 'Target' as the Internet Gateway (igw-id). A new Linux EC2 instance has been launched in the public subnet with the auto-assign public IP option enabled, but when trying to SSH into the machine, the connection fails. What could be the reason?* * Elastic IP is not assigned. * Both subnets are associated with the main route table; no subnet is explicitly associated with the custom route table which has the internet gateway route. * Public IP address is not assigned. * None of the above.

*Both subnets are associated with the main route table; no subnet is explicitly associated with the custom route table which has the internet gateway route.* Option B: whenever a subnet is created, by default it is associated with the main route table. We need to explicitly associate the subnet with the custom route table if routes different from the main route table are required. ----------------------------------- Option A: An Elastic IP address is a public IPv4 address with which you can mask the failure of an instance or software by rapidly remapping the address to another instance in your account. If your instance does not have a public IPv4 address, you can associate an Elastic IP address with your instance to enable communication with the internet; for example, to connect to your instance from your local computer. For this problem statement, the EC2 instance is launched with auto-assign public IP enabled, so since a public IP is available, an Elastic IP is not a necessity to connect from the internet. Option C: the problem statement clearly states that the EC2 instance is launched with auto-assign public IP enabled, so this option cannot be true.

*You are a solutions architect who works with a large digital media company. The company has decided that they want to operate within the Japanese region and they need a bucket called "testbucket" set up immediately to test their web application on. You log in to the AWS console and try to create this bucket in the Japanese region however you are told that the bucket name is already taken. What should you do to resolve this?* * Bucket names are global, not regional. This is a popular bucket name and is already taken. You should choose another bucket name. * Run a WHO IS request on the bucket name and get the registered owners email address. Contact the owner and ask if you can purchase the rights to the bucket. * Raise a ticket with AWS and ask them to release the name "testbucket" to you. * Change your region to Korea and then create the bucket "testbucket".

*Bucket names are global, not regional.* This is a popular bucket name and is already taken. You should choose another bucket name.

*Amazon Web Services offers 4 different levels of support. Which of the following are valid support levels?* * Corporate * Business * Free Tier * Developer * Enterprise

*Business* *Developer* *Enterprise* The correct answers are Enterprise, Business, and Developer. The 4th level is Basic. ----------------------------------- Remember that the Free Tier is a billing rebate, not an account type or support level.

*How can you create a cluster of EC2 instances?* * By creating all the instances within a VPC * By creating all the instances in a public subnet * By creating all the instances in a private subnet * By creating a placement group

*By creating a placement group* You can create the placement group within the VPC or within the private or public subnet.

*You have created a web server in the public subnet, and now anyone can access the web server from the Internet. You want to change this behavior and just have the load balancer talk with the web server and no one else. How do you achieve this?* * By removing the Internet gateway * By adding the load balancer in the route table * By allowing the load balancer access in the NACL of the public subnet * By modifying the security group of the instance and just having the load balancer talk with the web server

*By modifying the security group of the instance and just having the load balancer talk with the web server* ------------------------- By removing the Internet gateway, a web connection via the load balancer won't be able to reach the instance either. Adding a route for the load balancer in the route table does not restrict who can reach the web server. An NACL can allow or block certain traffic, but in this scenario you won't be able to use an NACL to allow only the load balancer, since NACL rules are based on CIDR ranges rather than on other resources' security groups.
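
A sketch of the security group change with boto3: the ingress rule references the load balancer's security group instead of a CIDR range, so only the ELB can reach port 80. Both group IDs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Allow HTTP traffic into the web server *only* from the load balancer's
# security group, rather than from 0.0.0.0/0.
ec2.authorize_security_group_ingress(
    GroupId="sg-0aaa1111bbb2222cc",  # web server security group (placeholder)
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 80,
        "ToPort": 80,
        "UserIdGroupPairs": [
            {"GroupId": "sg-0ddd3333eee4444ff"},  # ELB security group (placeholder)
        ],
    }],
)
```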

*How can you get visibility of user activity by recording the API calls made to your account?* * By using Amazon API Gateway * By using Amazon CloudWatch * By using AWS CloudTrail * By using Amazon Inspector

*By using AWS CloudTrail* ----------------------------------- Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. Amazon CloudWatch is used to monitor cloud resources. AWS Config is used to assess, audit, and evaluate the configurations of your AWS resources, and Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS.

*How can you have a shared file system across multiple Amazon EC2 instances?* * By using Amazon S3 * By mounting Elastic Block Storage across multiple Amazon EC2 servers * By using Amazon EFS * By using Amazon Glacier

*By using Amazon EFS* ----------------------------------- Amazon S3 is an object store, Amazon EBS can't be mounted across multiple servers, and Amazon Glacier is an extension of Amazon S3.

*What are the various ways you can control access to the data stored in S3?* (Choose all that apply.) * By using IAM policy * By creating ACLs * By encrypting the files in a bucket * By making all the files public * By creating a separate folder for the secure files

*By using IAM policy* *By creating ACLs* By encrypting the files in the bucket, you can make them secure, but encryption does not help in controlling access. By making the files public, you are providing universal access to everyone. Creating a separate folder for secure files won't help because, again, you need to control access to that separate folder.

*You have created a VPC with two subnets. The web servers are running in a public subnet, and the database server is running in a private subnet. You need to download an operating system patch to update the database server. How are you going to download the patch?* * By attaching the Internet Gateway to the private subnet temporarily * By using a NAT gateway * By using peering to another VPC * By changing the security group of the database server and allowing Internet access

*By using a NAT gateway* The database server is running in a private subnet. Anything running in a private subnet should never face the Internet directly. ----------------------------------- Even if you peer to another VPC, you can't really connect to the Internet without using a NAT instance or a NAT gateway. Even if you change the security group of the database server and allow all incoming traffic, it still won't be able to connect to the Internet because the database server is running in the private subnet and the private subnet is not attached to the Internet gateway.

*How can your VPC talk with DynamoDB directly?* * By using a direct connection * By using a VPN connection * By using a VPC endpoint * By using an instance in the public subnet

*By using a VPC endpoint* A VPC endpoint lets your VPC talk with DynamoDB directly, bypassing the Internet. ------------------------- Direct Connect and VPN are used to connect your corporate data center to AWS; DynamoDB is a service running in AWS. Even if you use an instance in a public subnet to connect with DynamoDB, the traffic is still going to use the Internet, so you won't be able to reach DynamoDB while bypassing the Internet.

*You want to explicitly "deny" certain traffic to the instance running in your VPC. How do you achieve this?* * By using a security group * By adding an entry in the route table * By putting the instance in the private subnet * By using a network access control list

*By using a network access control list* By using a security group, you can allow and disallow certain traffic, but you can't explicitly deny traffic since the deny option does not exist for security groups. There is no option for denying particular traffic via a route table. By putting an instance in the private subnet, you are just removing the Internet accessibility of this instance, which is not going to deny any particular traffic.
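
A boto3 sketch of such an explicit deny rule; the NACL ID and CIDR block are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Explicitly DENY inbound SSH from a specific CIDR block. NACL rules are
# evaluated in ascending rule-number order, so this deny fires before any
# higher-numbered allow rule.
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",  # placeholder NACL ID
    RuleNumber=90,
    Protocol="6",          # TCP
    RuleAction="deny",
    Egress=False,          # inbound rule
    CidrBlock="203.0.113.0/24",
    PortRange={"From": 22, "To": 22},
)
```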

*You are running a sensitive application in the cloud, and you want to deny everyone access to the EC2 server hosting the application except a few authorized people. How can you do this?* * By using a security group * By removing the Internet gateway from the VPC * By removing all the entries from the route table * By using a network access control list

*By using a network access control list* Only via a network access control list can you explicitly deny certain traffic to the instance running in your VPC. ------------------------- Using a security group, you can't explicitly deny. By removing the Internet gateway from the VPC, you will stop Internet access from the VPC, but that is not going to deny everyone access to the EC2 server running inside the VPC. If you remove all the routes from the route table, then no one will be able to connect to the instance, including the authorized people.

*You want EC2 instances to access S3 buckets without any username or password. What is the easiest way of doing this?* * By using a VPC S3 endpoint * By using a signed URL * By using roles * By sharing the keys between S3 and EC2

*By using roles* ------------------------- A VPC S3 endpoint creates a network path between the EC2 instance and the Amazon S3 bucket, but it does not by itself grant access without credentials. A signed URL won't help EC2 instances access S3 buckets in general. You cannot share keys between S3 and EC2.

*Your large scientific organization needs to use a fleet of EC2 instances to perform high-performance, CPU-intensive calculations. Your boss asks you to choose an instance type that would best suit the needs of your organization. Which of the following instance types should you recommend?* * C4 * D2 * R3 * M3

*C4* C-class instances are recommended for high-performance front-end fleets, web servers, batch processing, distributed analytics, high-performance science and engineering applications, ad serving, MMO gaming, and video encoding. The best answer would be to use a C4 instance.

*You are running your company's web site on AWS. You are using a fleet of EC2 servers along with ELB and Auto Scaling and S3 to host the web site. Your web site hosts a lot of photos and videos as well as PDF files. Every time a user looks at a video, photo, or PDF file, it is served from S3 or from the EC2 servers, and sometimes this causes performance problems. What can you do to improve the user experience for the end users?* * Cache all the static content using CloudFront. * Put all the content in S3 One Zone. * Use multiple S3 buckets. Don't put more than 1,000 objects in one S3 bucket. * Add more EC2 instances for better performance.

*Cache all the static content using CloudFront* In this case, caching the static content with CloudFront is going to provide the best performance and better user experience. ------------------------- S3 One Zone is going to provide less durability. Using the multiple buckets is not going to solve performance problem. Even if you use more EC2 servers, the content will still be served from either S3 or EC2 servers, which is not going to solve the performance problem.

*A website runs on EC2 Instances behind an Application Load Balancer. The instances run in an Auto Scaling Group across multiple Availability Zones and deliver large files that are stored on a shared Amazon EFS file system. The company needs to avoid serving the files from EC2 Instances every time a user requests these digital assets. What should the company do to improve the user experience of their website?* * Move the digital assets to Amazon Glacier. * Cache static content using CloudFront. * Resize the images so that they are smaller. * Use reserved EC2 Instances.

*Cache static content using CloudFront.* Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content such as .html, .css, .js, and image files to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations. When a user requests content that you're serving with CloudFront, the user is routed to the edge location that provides the lowest latency (time delay), so that content is delivered with the best possible performance. If the content is already in the edge location with the lowest latency, CloudFront delivers it immediately. ----------------------------------- Glacier is not used for frequent retrievals, so Option A is not a good solution. The scenarios in Options C and D will also not help in this situation.

*You have a fleet of EC2 instances hosting your application and running on an on-demand model that is billed hourly. For running your application, you have enabled Auto Scaling for scaling up and down per the workload, and while reviewing the Auto Scaling events, you notice that the application is scaling up and down multiple times in the same hour. What would you do to optimize the cost while preserving elasticity at the same time?* (Choose two.) * Use a reserved instance instead of an on-demand instance to optimize cost. * Terminate the longest-running EC2 instance by modifying the Auto Scaling group termination policy. * Change the Auto Scaling group cool-down timers. * Terminate the newest running EC2 instance by modifying the Auto Scaling group termination policy. * Change the CloudWatch alarm period that triggers an Auto Scaling scale-down policy.

*Change the Auto Scaling group cool-down timers.* *Change the CloudWatch alarm period that triggers an Auto Scaling scale-down policy.* Since the EC2 instances are billed on an hourly basis, if the instances are going down a couple of times an hour, you have to pay for a full hour every time one goes down. So, if an instance goes down four times during an hour, you will have to pay for four hours; therefore, you should change the Auto Scaling cool-down time so that the EC2 instance remains up for at least an hour. Similarly, you should change the CloudWatch alarm period so that it does not trigger the Auto Scaling policy before an hour is up. ----------------------------------- A, B, and D are incorrect. In this case, you can't use a reserved instance since you don't know the optimal number of EC2 instances that you will always be running. B is incorrect because even if you terminate the longest-running EC2 instance, it won't help you to optimize cost. D is incorrect because by terminating the newest running EC2 instance, you won't save additional cost.
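
Both remedies can be applied with boto3 along these lines; the group name, alarm thresholds, and timings are placeholders, and wiring the alarm to a scaling policy via AlarmActions is omitted for brevity:

```python
import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

# Lengthen the cool-down so the group does not scale again within the same
# billing hour.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",  # placeholder group name
    DefaultCooldown=3600,            # seconds
)

# Stretch the scale-down alarm so it only fires after an hour of low CPU:
# 12 evaluation periods of 5 minutes each.
cloudwatch.put_metric_alarm(
    AlarmName="web-asg-scale-down",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Statistic="Average",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
    Period=300,
    EvaluationPeriods=12,
    Threshold=20.0,
    ComparisonOperator="LessThanThreshold",
)
```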

*You have been asked to advise on a scaling concern. The client has an elegant solution that works well. As the information base grows, they use CloudFormation to spin up another stack made up of an S3 bucket and supporting compute instances. The trigger for creating a new stack is when the PUT rate approaches 100 PUTs per second. The problem is that as the business grows, the number of buckets is growing into the hundreds and will soon be in the thousands. You have been asked what can be done to reduce the number of buckets without changing the basic architecture.* * Change the trigger level to around 3000 as S3 can now accommodate much higher PUT and GET levels. * Refine the key hashing to randomise the name Key to achieve the potential of 300 PUTs per second. * Set up multiple accounts so that the per account hard limit on S3 buckets is avoided. * Upgrade all buckets to S3 provisioned IOPS to achieve better performance.

*Change the trigger level to around 3000 as S3 can now accommodate much higher PUT and GET levels.* Until 2018 there was a hard limit on S3 of 100 PUTs per second. To achieve this, care needed to be taken with the structure of the name Key to ensure parallel processing. As of July 2018 the limit was raised to 3500, and the need for the Key design was basically eliminated. Disk IOPS is not the issue with this problem, and neither is the account limit.

*Currently, you're helping design and architect a highly available application. After building the initial environment, you discover that a part of your application does not work correctly until port 443 is added to the security group. After adding port 443 to the appropriate security group, how much time will it take before the changes are applied and the application begins working correctly?* * Generally, it takes 2-5 minutes in order for the rules to propagate. * Immediately after a reboot of the EC2 Instances belonging to that security group. * Changes apply instantly to the security group, and the application should be able to respond to 443 requests. * It will take 60 seconds for the rules to apply to all Availability Zones within the region.

*Changes apply instantly to the security group, and the application should be able to respond to 443 requests.* Some systems for setting up firewalls let you filter on source ports. Security groups let you filter only on destination ports. When you add or remove rules, they are automatically applied to all instances associated with the security group.

*Which two automation platforms are supported by AWS OpsWorks?* * Chef * Puppet * Slack * Ansible

*Chef* *Puppet* AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. AWS OpsWorks for Chef Automate is a fully managed configuration management service that hosts Chef Automate, a suite of automation tools from Chef for configuration management, compliance and security, and continuous deployment. AWS OpsWorks for Puppet Enterprise is a fully managed configuration management service that hosts Puppet Enterprise, a set of automation tools from Puppet for infrastructure and application management. ------------------------- OpsWorks does not support Slack or Ansible.

*What are the two configuration management services that AWS OpsWorks support?* (Choose 2) * Chef * Ansible * Puppet * Java

*Chef* *Puppet* AWS OpsWorks supports Chef and Puppet.

*You know that you need 24 CPUs for your production server. You also know that your compute capacity is going to remain fixed until next year, so you need to keep the production server up and running during that time. What pricing option would you go with?* * Choose the spot instance * Choose the on-demand instance * Choose the three-year reserved instance * Choose the one-year reserved instance

*Choose the one-year reserved instance* You won't choose a spot instance because a spot instance can be taken away at any time with only short notice. On-demand won't give you the best pricing since you know you will be running the server all the time for the next year. Since you know the compute requirement is only for one year, you should not go with a three-year reserved instance; rather, you should go for a one-year reserved instance to get the maximum benefit.

*A consulting firm repeatedly builds large architectures for their customers using AWS resources from several AWS services including IAM, Amazon EC2, Amazon RDS, DynamoDB and Amazon VPC. The consultants have architecture diagrams for each of their architectures, and they are frustrated that they cannot use them to automatically create their resources. Which service should provide immediate benefits to the organization?* * AWS Beanstalk * AWS CloudFormation * AWS CodeBuild * AWS CodeDeploy

*AWS CloudFormation* AWS CloudFormation satisfies the requirement in the question and enables the consultants to turn their architecture diagrams into CloudFormation templates. AWS CloudFormation is a service that helps you model and set up your Amazon Web Services resources so that you can spend less time managing those resources and more time focusing on your applications that run in AWS. You create a template that describes all the AWS resources that you want (like Amazon EC2 instances or Amazon RDS DB instances), and AWS CloudFormation takes care of provisioning and configuring those resources for you. When you are building a large architecture repeatedly, you can use a CloudFormation template, or create or modify an existing one. A template describes all of your resources and their properties. When you use that template to create an AWS CloudFormation stack, AWS CloudFormation provisions the Auto Scaling group, load balancer, and database for you. After the stack has been successfully created, your AWS resources are up and running. You can delete the stack just as easily, which deletes all the resources in the stack. By using AWS CloudFormation, you can easily manage a collection of resources as a single unit; whenever you are working with a larger number of AWS resources together, CloudFormation is the best option. ----------------------------------- AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js etc. You simply upload your code, and Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, and auto scaling to application health monitoring. The question, however, states that the consulting firm repeatedly builds large architectures using resources from several AWS services including IAM, Amazon EC2, Amazon RDS, DynamoDB and Amazon VPC, which goes beyond what Elastic Beanstalk manages.

*You have deployed all your applications in the California region, and you are planning to use the Virginia region for DR purposes. You have heard about infrastructure as code and are planning to use that for deploying all the applications in the DR region. Which AWS service will help you achieve this?* * CodeDeploy * Elastic Beanstalk * OpsWorks * CloudFormation

*CloudFormation* AWS provides a common language for you to describe and provision all the infrastructure resources in your cloud environment. CloudFormation allows you to use a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all regions and accounts. The text file where all the information is stored is also known as infrastructure as code. ----------------------------------- AWS CodeDeploy is a service that automates software deployments to a variety of compute services including Amazon EC2, AWS Lambda, and instances running on-premises. AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS. AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet.

*Which of the following options allows users to have secure access to private files located in S3?* (Choose 3) * CloudFront Signed URLs * CloudFront Origin Access Identity * Public S3 buckets * CloudFront Signed Cookies

*CloudFront Signed URLs* *CloudFront Origin Access Identity* *CloudFront Signed Cookies* There are three options in the question which can be used to secure access to files stored in S3 and can therefore be considered correct. Signed URLs and Signed Cookies are different ways to ensure that users attempting to access files in an S3 bucket can be authorised. One method generates URLs and the other generates special cookies, but both require the creation of an application and policy to generate and control these items. An Origin Access Identity, on the other hand, is a virtual user identity that is used to give the CloudFront distribution permission to fetch a private object from an S3 bucket. Public S3 buckets should never be used unless you are using the bucket to host a public website, and therefore this is an incorrect option.

*One of your developers has made an API call that has changed the state of your resource that was running fine earlier. Your security team wants to do an audit now. How do you track who created the mistake?* * CloudWatch * CloudTrail * SNS * STS

*CloudTrail* CloudTrail records all the API calls made in your account, including the identity of the caller, so it can be used for the audit. ------------------------- CloudWatch does not monitor API calls, SNS is a notification service, and STS is used for gaining temporary credentials.
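A minimal sketch of such an audit query (assuming boto3; the event name is illustrative), looking up who made a given API call in the last 24 hours:

```python
from datetime import datetime, timedelta

import boto3

cloudtrail = boto3.client("cloudtrail")

# Look up recent calls to one API; "ModifyInstanceAttribute"
# is an illustrative event name.
response = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventName", "AttributeValue": "ModifyInstanceAttribute"}
    ],
    StartTime=datetime.utcnow() - timedelta(days=1),
    EndTime=datetime.utcnow(),
)

for event in response["Events"]:
    # Username identifies the IAM identity that made the call.
    print(event["EventTime"], event.get("Username"), event["EventName"])
```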

*You have configured a rule that whenever the CPU utilization of your EC2 goes up, Auto Scaling is going to start a new server for you. Which tool is Auto Scaling using to monitor the CPU utilization?* * CloudWatch metrics. * Output of the top command. * The ELB health check metric. * It depends on the operating system. Auto Scaling uses the OS-native tool to capture the CPU utilization.

*CloudWatch metrics.* Auto Scaling relies on the CloudWatch metrics to find the CPU utilization. Using the top command or the native OS tools, you should be able to identify the CPU utilization, but Auto Scaling does not use that.

*You have implemented AWS Cognito services to require users to sign in and sign up to your app through social identity providers like Facebook, Google, etc. Your marketing department wants users to try out the app anonymously and thinks the current log-in requirement is excessive and will reduce demand for products and services offered through the app. What can you offer the marketing department in this regard?* * It's too much of a security risk to allow unauthenticated users access to the app. * Cognito Identity supports guest users for the ability to enter the app and have limited access. * A second version of the app will need to be offered for unauthenticated users. * This is possible only if we remove the authentication from everyone.

*Cognito Identity supports guest users for the ability to enter the app and have limited access.* Amazon Cognito Identity Pools can support unauthenticated identities by providing a unique identifier and AWS credentials for users who do not authenticate with an identity provider. Unauthenticated users can be associated with a role that has limited access to resources as compared to a role for authenticated users. ----------------------------------- Option A is incorrect: Cognito can allow unauthenticated users without being a security risk. Option C is incorrect: this is not necessary, as unauthenticated users are allowed using Cognito. Option D is incorrect: Cognito supports both authenticated and unauthenticated users.

*You want to run a mapreduce job (a part of the big data workload) for a noncritical task. Your main goal is to process it in the most cost-effective way. The task is throughput sensitive but not at all mission critical and can take a longer time. Which type of storage would you choose?* * Throughput Optimized HDD (st1) * Cold HDD (sc1) * General-Purpose SSD (gp2) * Provisioned IOPS (io1)

*Cold HDD (sc1)* Since the workload is not critical and you want to process it in the most cost-effective way, you should choose Cold HDD. Though the workload is throughput sensitive, it is not critical and is low priority; therefore, you should not choose st1. gp2 and io1 are more expensive than other options like st1.

*An application hosted on EC2 Instances has its promotional campaign due to start in 2 weeks. There is a mandate from the management to ensure that no performance problems are encountered due to traffic growth during this time. Which of the following must be done to the Auto Scaling Group to ensure this requirement can be fulfilled?* * Configure Step scaling for the Auto Scaling Group. * Configure Dynamic Scaling and use Target tracking scaling Policy. * Configure Scheduled scaling for the Auto Scaling Group. * Configure Static scaling for the Auto Scaling Group.

*Configure Dynamic Scaling and use Target tracking scaling Policy.* If you are scaling based on a utilization metric that increases or decreases proportionally to the number of instances in the Auto Scaling group, we recommend that you use a target tracking scaling policy. In target tracking scaling policies you select a predefined metric or configure a customized metric, and set a target value. EC2 Auto Scaling creates and manages the CloudWatch alarms that trigger the scaling policy and calculates the scaling adjustment based on the metric and the target value. The scaling policy adds or removes capacity as required to keep the metric at, or close to, the specified target value. Scheduled scaling works better when you can predict the load changes and also when you know how long you need to run. Here in our scenario we only know that there will be heavy traffic during the campaign period (the period is not specified) but are not sure about the actual traffic, and there is no history to predict it either.

*An application is hosted on EC2 Instances. There is a promotional campaign due to start in two weeks for the application. There is a mandate from the management to ensure that no performance problems are encountered due to traffic growth during this time. What action must be taken on the Auto Scaling Group to ensure this requirement can be fulfilled?* * Configure step scaling for the Auto Scaling Group. * Configure Dynamic scaling for the Auto Scaling Group. * Configure Scheduled scaling for the Auto Scaling Group. * Configure Static scaling for the Auto Scaling Group.

*Configure Dynamic scaling for the Auto Scaling Group.* If you are scaling based on a utilization metric that increases or decreases proportionally to the number of instances in the Auto Scaling group, we recommend that you use a target tracking scaling policy. In target tracking scaling policies you select a predefined metric or configure a customized metric, and set a target value. EC2 Auto Scaling creates and manages the CloudWatch alarms that trigger the scaling policy and calculates the scaling adjustment based on the metric and the target value. The scaling policy adds or removes capacity as required to keep the metric at, or close to, the specified target value. Scheduled scaling works better when you can predict the load changes and also when you know how long you need to run. Here in our scenario we only know that there will be heavy traffic during the campaign period (the period is not specified) but are not sure about the actual traffic, and there is no history to predict it either. ----------------------------------- *NOTE:* In this particular question, *Dynamic Scaling* is a more appropriate solution than *Scheduled Scaling*. The question mentions that a marketing campaign will start in 2 weeks but not how long it will run, so with scheduled scaling we cannot specify the start time or end time. Moreover, scheduled scaling works better when you can predict the load changes; here we only know that there will be heavy traffic during the campaign period, with no history to predict the actual traffic. *But if we go for Dynamic Scaling and use the Target tracking scaling policy type*, it increases or decreases the current capacity of the group based on a target value for a specific metric. This is similar to the way your thermostat maintains the temperature of your home: you select a temperature and the thermostat does the rest.

*There is a requirement for an iSCSI device and the legacy application needs local storage. Which of the following can be used to meet the demands of the application?* * Configure the Simple Storage Service. * Configure Storage Gateway Cached volume. * Configure Storage Gateway Stored volume. * Configure Amazon Glacier.

*Configure Storage Gateway Stored Volume.* If you need low-latency access to your entire dataset, first configure your on-premises gateway to store all your data locally. Then asynchronously back up point-in-time snapshots of this data to Amazon S3. This configuration provides durable and inexpensive offsite backups that you can recover to your local data center or Amazon EC2. For example, if you need replacement capacity for disaster recovery, you can recover the backups to Amazon EC2. S3 and Glacier are not used for this purpose. The Volume Gateway provides an iSCSI target, which enables you to create volumes and mount them as iSCSI devices from your on-premises or EC2 application servers. The volume gateway runs in either a cached or stored mode. * In the cached mode, your primary data is written to S3, while your frequently accessed data is retained locally in a cache for low-latency access. * In the stored mode, your primary data is stored locally and your entire dataset is available for low-latency access while being asynchronously backed up to AWS.

*An architecture consists of the following:* *a) An active/passive infrastructure hosted in AWS* *b) Both infrastructures comprise ELB, Auto Scaling, and EC2 resources* *How should Route 53 be configured to ensure proper failover in case the primary infrastructure were to go down?* * Configure a primary routing policy. * Configure a weighted routing policy. * Configure a Multi-Answer routing policy. * Configure a failover routing policy.

*Configure a failover routing policy.* You can create an active-passive failover configuration by using failover records. Create a primary and secondary failover record that have the same name and type, and associate a health check with each. Various Route 53 routing policies are as follows: *Simple routing policy* - Use for a single resource that performs a given function for the domain, for example a web server that serves content for the example.com website. *Failover routing policy* - Use when you want to configure active-passive failover. *Geolocation routing policy* - Use when you want to route traffic based on the location of your users. *Geoproximity routing policy* - Use when you want to route traffic based on the location of your resources and, optionally, shift traffic from resources in one location to resources in another. *Latency routing policy* - Use when you have resources in multiple locations and you want to route traffic to the resource that provides the best latency. *Multivalue answer routing policy* - Use when you want Route 53 to respond to DNS queries with up to eight healthy records selected at random. *Weighted routing policy* - Use to route traffic to multiple resources in proportions that you specify.
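A hedged sketch of the primary half of such a configuration (assuming boto3; the hosted zone ID, domain, ELB DNS name, and the ELB's hosted zone ID are illustrative placeholders; a matching record with `Failover="SECONDARY"` would point at the passive ELB):

```python
import boto3

route53 = boto3.client("route53")

# Primary failover record: an alias to the active ELB, with target
# health evaluation so Route 53 fails over when it becomes unhealthy.
route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",  # illustrative hosted zone ID
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "A",
                    "SetIdentifier": "primary",
                    "Failover": "PRIMARY",
                    "AliasTarget": {
                        # Illustrative: the ELB's own hosted zone ID and DNS name.
                        "HostedZoneId": "Z2ELBEXAMPLE",
                        "DNSName": "primary-elb-123456.us-east-1.elb.amazonaws.com",
                        "EvaluateTargetHealth": True,
                    },
                },
            }
        ]
    },
)
```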

*You are running your application on a fleet of EC2 instances across three AZs in the California region. You are also using elastic load balancing along with Auto Scaling to scale up and down your fleet of EC2 servers. While doing the benchmarks before deploying the application, you have found out that if the CPU utilization is around 60 percent, your application performs the best. How can you make sure that your EC2 servers are always running with 60 percent CPU utilization?* * Configure simple scaling with a step policy in Auto Scaling. * Limit the CPU utilization to 60 percent at the EC2 server level. * Configure a target tracking scaling policy. * Since the CPU load is dynamically generated by the application, you can't control it.

*Configure a target tracking scaling policy.* A target tracking scaling policy can keep the average aggregate CPU utilization of your Auto Scaling group at 60 percent. ----------------------------------- A simple scaling policy with steps requires you to define the CloudWatch alarms and scaling adjustments yourself; in this case you need to track just one metric against one target, which target tracking handles automatically. If you limit the CPU utilization to 60 percent at the EC2 server level, you are adding additional maintenance overhead.
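A minimal sketch (assuming boto3; the group and policy names are illustrative) showing how the 60 percent target from the question maps onto a target tracking policy:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Keep the group's average CPU utilization at (or near) 60 percent.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-fleet",  # illustrative group name
    PolicyName="keep-cpu-at-60",       # illustrative policy name
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,
    },
)
```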

*You want to host an application on EC2 and want to use the EBS volume for storing the data. One of the requirements is to encrypt the data at rest. What is the cheapest way to achieve this?* * Make sure when the data is written to the application that it is encrypted. * Configure encryption when creating the EBS volume. That way, data will be encrypted at rest. * Copy the data from EBS to S3 and encrypt the S3 buckets. * Use a third-party tool to encrypt the data.

*Configure encryption when creating the EBS volume. That way, data will be encrypted at rest.* You can configure the encryption option for EBS volumes while creating them. When data is written to an encrypted volume, it will be encrypted. ------------------------------ The requirement is that the data needs to be encrypted at rest, and when the application is writing the data, it is not at rest. If you copy the data from EBS to S3, then only the data residing in S3 will be encrypted; the use case is to encrypt the data residing in the EBS volume. Since EBS provides the option to encrypt the data, why spend money on a third-party tool?
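A minimal sketch (assuming boto3; the Availability Zone and size are illustrative) of creating an EBS volume with encryption at rest enabled:

```python
import boto3

ec2 = boto3.client("ec2")

# Encryption is configured at volume-creation time; data written to the
# volume (and snapshots taken from it) are then encrypted at rest.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",  # illustrative AZ
    Size=100,                       # GiB, illustrative
    VolumeType="gp2",
    Encrypted=True,                 # uses the AWS managed EBS key by default
)
print(volume["VolumeId"])
```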

*A company is planning to use AWS Simple Storage Service for hosting their project documents. At the end of the project, the documents need to be moved to archival storage. Which of the following implementation steps would ensure the documents are managed accordingly?* * Adding a bucket policy on the S3 bucket * Configuring lifecycle configuration rules on the S3 bucket * Creating an IAM policy for the S3 bucket * Enabling CORS on the S3 bucket

*Configuring lifecycle configuration rules on the S3 bucket* Lifecycle configuration enables you to specify the lifecycle management of objects in a bucket. The configuration is a set of one or more rules, where each rule defines an action for Amazon S3 to apply to a group of objects. These actions can be classified as follows: Transition actions - In which you define when objects transition to another storage class. For example, you may choose to transition objects to the STANDARD_IA (IA, for infrequent access) storage class 30 days after creation, or archive objects to the GLACIER storage class one year after creation. Expiration actions - In which you specify when the objects expire. Then, Amazon S3 deletes the expired objects on your behalf.
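A minimal sketch of such a rule (assuming boto3; the bucket name, prefix, and 365-day threshold are illustrative) that archives project documents to Glacier:

```python
import boto3

s3 = boto3.client("s3")

# Transition objects under the "projects/" prefix to Glacier
# one year after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="project-documents",  # illustrative bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-project-docs",
                "Filter": {"Prefix": "projects/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 365, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```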

*Development teams in your organization use S3 buckets to store log files for various applications hosted in AWS development environments. The developers intend to keep the logs for a month for troubleshooting purposes, and subsequently purge the logs. What feature will enable this requirement?* * Adding a bucket policy on the S3 bucket. * Configuring lifecycle configuration rules on the S3 bucket. * Creating an IAM policy for the S3 bucket. * Enabling CORS on the S3 bucket.

*Configuring lifecycle configuration rules on the S3 bucket.* Lifecycle configuration enables you to specify the lifecycle management of objects in a bucket. The configuration is a set of one or more rules, where each rule defines an action for Amazon S3 to apply to a group of objects. These actions can be classified as follows: *Transition actions* - In which you define when objects transition to another storage class. For example, you may choose to transition objects to the STANDARD_IA (IA, for infrequent access) storage class 30 days after creation, or archive objects to the GLACIER storage class one year after creation. *Expiration actions* - In which you specify when the objects expire. Then, Amazon S3 deletes the expired objects on your behalf. ----------------------------------- Option D is incorrect: CORS (Cross-Origin Resource Sharing) is for sharing resources across domains, not for managing object lifecycles.
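Complementing the transition example above, this scenario calls for an expiration action; a minimal sketch (assuming boto3; names are illustrative) that purges logs 30 days after creation:

```python
import boto3

s3 = boto3.client("s3")

# Delete objects under "logs/" 30 days after they are created.
s3.put_bucket_lifecycle_configuration(
    Bucket="dev-app-logs",  # illustrative bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "purge-logs-after-30-days",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Expiration": {"Days": 30},
            }
        ]
    },
)
```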

*A company is running three production server reserved EC2 Instances with EBS-backed root volumes. These instances have a consistent CPU load of 80%. Traffic is being distributed to these instances by an Elastic Load Balancer. They also have production and development Multi-AZ RDS MySQL databases. What recommendation would you make to reduce cost in this environment without affecting availability of mission-critical systems?* * Consider using On-demand instances instead of Reserved EC2 instances. * Consider not using a Multi-AZ RDS deployment database. * Consider using Spot instances instead of Reserved EC2 instances. * Consider removing the Elastic Load Balancer.

*Consider not using a Multi-AZ RDS deployment for the development database.* Multi-AZ databases are better suited for production environments than for development environments, so you can reduce costs by not using them for development. Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. In case of infrastructure failure, Amazon RDS performs an automatic failover to the standby (or to a read replica in the case of Amazon Aurora), so that you can resume database operations as soon as the failover is complete. Since the endpoint for your DB Instance remains the same after a failover, your application can resume database operations without the need for manual administrative intervention. *Note:* "Mission-critical systems" refers to the production instances and databases. However, notice that they also run a Multi-AZ RDS deployment in the development environment, which is not necessary; management only needs the production environment to be highly available. To reduce cost, we can disable Multi-AZ RDS for the development environment and keep it only for production.
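As a hedged sketch (assuming boto3; the instance identifier is illustrative), converting the development database to Single-AZ is a single modification call:

```python
import boto3

rds = boto3.client("rds")

# Convert the development DB instance from Multi-AZ to Single-AZ.
# Production instances are left untouched, so availability of
# mission-critical systems is unaffected.
rds.modify_db_instance(
    DBInstanceIdentifier="dev-mysql-db",  # illustrative identifier
    MultiAZ=False,
    ApplyImmediately=True,
)
```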

*A company is planning on using the AWS Redshift service. The Redshift service and data on it would be used continuously for the next 3 years as per the current business plan. Which of the following would be the most cost-effective solution in this scenario?* * Consider using On-demand instances for the Redshift Cluster * Enable Automated backup * Consider using Reserved Instances for the Redshift Cluster * Consider not using a cluster for the Redshift nodes

*Consider using Reserved Instances for the Redshift Cluster.* If you intend to keep your Amazon Redshift cluster running continuously for a prolonged period, you should consider purchasing reserved node offerings. These offerings provide significant savings over on-demand pricing, but they require you to reserve compute nodes and commit to paying for those nodes for either a one-year or three-year duration.

*Instances in your private subnet hosted in AWS need access to important documents in S3. Due to the confidential nature of these documents, you have to ensure that this traffic does not traverse the internet. As an architect, how would you implement this solution?* * Consider using a VPC Endpoint. * Consider using an EC2 Endpoint. * Move the instance to a public subnet. * Create a VPN connection and access the S3 resources from the EC2 Instance.

*Consider using a VPC Endpoint.* A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and other services does not leave the Amazon network.
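A minimal sketch (assuming boto3; the VPC ID, route table ID, and the region in the service name are illustrative) of creating a gateway endpoint for S3:

```python
import boto3

ec2 = boto3.client("ec2")

# Gateway endpoint for S3: the route table entry it adds keeps
# S3 traffic from the private subnet on the Amazon network.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",             # illustrative VPC ID
    ServiceName="com.amazonaws.us-east-1.s3",  # match your region
    RouteTableIds=["rtb-0123456789abcdef0"],   # private subnet's route table
)
```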

*You have a requirement to host a web based application. You need to enable high availability for the application, so you create an Elastic Load Balancer and place the EC2 Instances behind the Elastic Load Balancer. You need to ensure that users only access the application via the DNS name of the load balancer. How would you design the network part of the application?* (Choose 2) * Create 2 public subnets for the Elastic Load Balancer * Create 2 private subnets for the Elastic Load Balancer * Create 2 public subnets for the backend instances * Create 2 private subnets for the backend instances

*Create 2 public subnets for the Elastic Load Balancer* *Create 2 private subnets for the backend instances* You must create public subnets in the same Availability Zones as the private subnets that are used by your private instances. Then associate these public subnets with the internet-facing load balancer. ----------------------------------- Option B is incorrect since the ELB needs to be placed in public subnets to allow access from the Internet. Option C is incorrect for security reasons: private subnets give the backend instances better protection from attacks.

*A CloudFront distribution is being used to distribute content from an S3 bucket. It is required that only a particular set of users get access to certain content. How can this be accomplished?* * Create IAM Users for each user and then provide access to the S3 bucket content. * Create IAM Groups for each set of users and then provide access to the S3 bucket content. * Create CloudFront signed URLs and then distribute these URLs to the users. * Use IAM Policies for the underlying S3 buckets to restrict content.

*Create CloudFront signed URLs and then distribute these URLs to the users.* Many companies that distribute content via the internet want to restrict access to documents, business data, media streams, or content that is intended for selected users, for example, users who have paid a fee. To securely serve this private content using CloudFront, you can do the following: - Require that your users access your private content by using special CloudFront signed URLs or signed cookies. - Require that your users access your Amazon S3 content using CloudFront URLs, not Amazon S3 URLs. Requiring CloudFront URLs isn't mandatory, but we recommend it to prevent users from bypassing the restrictions that you specify in signed URLs or signed cookies.
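A hedged sketch of generating a signed URL (assuming boto3/botocore plus the third-party `rsa` package, a CloudFront key pair whose public key is registered with the distribution, and illustrative names throughout):

```python
from datetime import datetime, timedelta

import rsa  # third-party package used here to sign with the private key
from botocore.signers import CloudFrontSigner


def rsa_signer(message):
    # Sign the policy with the private key matching the CloudFront key pair.
    with open("cloudfront_private_key.pem", "rb") as f:  # illustrative path
        private_key = rsa.PrivateKey.load_pkcs1(f.read())
    return rsa.sign(message, private_key, "SHA-1")


# "KEYPAIRID123" is an illustrative CloudFront key pair ID.
signer = CloudFrontSigner("KEYPAIRID123", rsa_signer)

# URL valid for one hour; after that CloudFront rejects the request.
url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/private/report.pdf",
    date_less_than=datetime.utcnow() + timedelta(hours=1),
)
print(url)
```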

*Your company has a set of EC2 Instances hosted on the AWS Cloud. As an architect you have been told to ensure that if the status of any instance indicates a failure, then the instance is automatically restarted. How can you achieve this in the MOST efficient way possible?* * Create CloudWatch alarms that stop and start the instance based on status check alarms. * Write a script that queries the EC2 API for each instance status check. * Write a script that periodically shuts down and starts instances based on certain stats. * Implement a third-party monitoring tool.

*Create CloudWatch alarms that stop and start the instance based on status check alarms.* Using Amazon CloudWatch alarm actions, you can create alarms that automatically stop, terminate, reboot, or recover your EC2 instances. You can use the stop or terminate actions to help you save money when you no longer need an instance to be running. You can use the reboot and recover actions to automatically reboot those instances or recover them onto new hardware if a system impairment occurs. All other options are possible but would just add extra maintenance overhead.
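A minimal sketch (assuming boto3; the instance ID and the region in the action ARN are illustrative) of an alarm that recovers an instance when its system status check fails:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Recover the instance onto new hardware when the system status
# check fails for two consecutive one-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName="recover-web-server",
    Namespace="AWS/EC2",
    MetricName="StatusCheckFailed_System",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=2,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    # Built-in recover action; the region must match the instance's region.
    AlarmActions=["arn:aws:automate:us-east-1:ec2:recover"],
)
```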

*You have a set of EC2 Instances that support an application. They are currently hosted in the US Region. In the event of a disaster, you need a way to ensure that you can quickly provision the resources in another region. How could this be accomplished?* (Choose 2) * Copy the underlying EBS Volumes to the destination region. * Create EBS Snapshots and then copy them to the destination region. * Create AMIs for the underlying instances. * Copy the metadata for the EC2 Instances to S3.

*Create EBS Snapshots and then copy them to the destination region.* *Create AMIs for the underlying instances.* AMIs can be used to create a snapshot or template of the underlying instance. You can copy the AMI to another region. You can also make snapshots of the volumes and then copy them to the destination region.
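A minimal sketch (assuming boto3; the AMI ID and regions are illustrative) of copying an AMI to the DR region:

```python
import boto3

# copy_image is called in the destination (DR) region.
ec2_dr = boto3.client("ec2", region_name="us-west-2")  # illustrative DR region

copy = ec2_dr.copy_image(
    Name="app-server-dr",
    SourceImageId="ami-0123456789abcdef0",  # illustrative AMI ID
    SourceRegion="us-east-1",               # illustrative source region
)
print(copy["ImageId"])  # new AMI ID in the DR region
```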

*You are working as an architect in your organization. You have peered VPC A as a requestor and VPC B as accepter and both VPCs can communicate with each other. Now you want resources in both the VPCs to reach out to the internet but anyone on the internet should not be able to reach resources within both the VPCs. Which of the following statements is true?* * Create a NAT Gateway on Requestor VPC (VPC A) and configure a route in Route table with NAT Gateway. VPC B can route to internet through VPC A NAT Gateway. * Create an Internet Gateway on Requestor VPC (VPC A) and configure a route in Route table with Internet Gateway. VPC B can route to internet through VPC A Internet Gateway. * Create NAT Gateways on both VPCs and configure routes in respective route tables with NAT Gateways. * Create a NAT instance on Requestor VPC (VPC A). VPC B can route to internet through VPC A NAT Instance.

*Create NAT Gateways on both VPCs and configure routes in respective route tables with NAT Gateways.* ----------------------------------- For Option A, when a NAT Gateway is created and configured in VPC A, the resources within VPC A can reach out to the internet. But VPC B's resources cannot reach the internet through the NAT Gateway created in VPC A, even though the two VPCs are peered. This would require transitive routing, which is not supported in AWS. For Option B, Internet Gateways allow two-way traffic. But the requirement is only for resources to reach out to the internet; inbound traffic from the internet should not be allowed, so an Internet Gateway is not the correct choice. For Option D, similar to Option A, this would require transitive routing and hence is not supported. NOTE: AWS recommends using a NAT Gateway over a NAT instance.

*Currently a company makes use of the EBS snapshots to back up their EBS Volumes. As a part of the business continuity requirement, these snapshots need to be made available in another region. How can this be achieved?* * Directly create the snapshot in the other region. * Create Snapshot and copy the snapshot to a new region. * Copy the snapshot to an S3 bucket and then enable Cross-Region Replication for the bucket. * Copy the EBS Snapshot to an EC2 instance in another region.

*Create Snapshot and copy the snapshot to a new region.* A snapshot is constrained to the region where it was created. After you create a snapshot of an EBS volume, you can use it to create a new volume in the same region. For more information, follow the link on Restoring an Amazon EBS Volume from a Snapshot below. You can also copy snapshots across regions, making it possible to use multiple regions for geographical expansion, data center migration, and disaster recovery. ----------------------------------- Option C is incorrect because the snapshots we take from EBS are stored in AWS-managed S3; we don't have the option to see the snapshots in S3. Hence, Option C can't be the correct answer.
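A minimal sketch (assuming boto3; the snapshot ID and regions are illustrative) of copying a snapshot to another region:

```python
import boto3

# copy_snapshot is called in the destination region.
ec2_dest = boto3.client("ec2", region_name="eu-west-1")  # illustrative region

copy = ec2_dest.copy_snapshot(
    SourceRegion="us-east-1",                   # illustrative source region
    SourceSnapshotId="snap-0123456789abcdef0",  # illustrative snapshot ID
    Description="BC/DR copy of EBS backup",
)
print(copy["SnapshotId"])  # new snapshot ID in the destination region
```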

*You are developing a new mobile application which is expected to be used by thousands of customers. You are considering storing user preferences in AWS and need a data store to save them. Each data item is expected to be 20KB in size. The solution needs to be cost-effective, highly available, scalable and secure. How would you design the data layer?* * Create a new AWS MySQL RDS instance and store the user data there. * Create a DynamoDB table with the required Read and Write capacity and use it as the data layer. * Use Amazon Glacier to store the data. * Use an Amazon Redshift Cluster for managing the user preferences.

*Create a DynamoDB table with the required Read and Write capacity and use it as the data layer.* In this case, since each data item is 20KB, and given the fact that DynamoDB is an ideal data layer for storing user preferences, this would be an ideal choice. Also, DynamoDB is a highly scalable and available service.
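A minimal sketch (assuming boto3; the table name, key schema, and capacity figures are illustrative and would be sized from the expected request rates) of creating such a table:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# 20KB preference items keyed by user ID.
dynamodb.create_table(
    TableName="UserPreferences",  # illustrative table name
    AttributeDefinitions=[{"AttributeName": "UserId", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "UserId", "KeyType": "HASH"}],
    ProvisionedThroughput={"ReadCapacityUnits": 50, "WriteCapacityUnits": 50},
)
```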

*A company currently hosts its architecture in the US region. They now need to duplicate this architecture to the Europe region and extend the application hosted on this architecture to the new region. In order to ensure that users across the globe get the same seamless experience from either setup, what among the following needs to be done?* * Create a Classic Elastic Load Balancer setup to route traffic to both locations. * Create a weighted Route 53 policy to route the policy based on the weightage for each location. * Create an Application Elastic Load Balancer setup to route traffic to both locations. * Create a Geolocation Route 53 Policy to route the traffic based on the location.

*Create a Geolocation Route 53 Policy to route the traffic based on the location.* Geolocation routing lets you choose the resources that serve your traffic based on the geographic location of your users, meaning the location that DNS queries originate from.

*Currently, you have a NAT Gateway defined for your private instances. You need to make the NAT Gateway highly available. How can this be accomplished?* * Create another NAT Gateway and place it behind an ELB. * Create a NAT Gateway in another Availability Zone. * Create a NAT Gateway in another region. * Use Auto Scaling groups to scale the NAT Gateway.

*Create a NAT Gateway in another Availability Zone.* If you have resources in multiple Availability Zones and they share one NAT Gateway, in the event that the NAT Gateway's Availability Zone is down, resources in other Availability Zones lose internet access. To create an Availability Zone-independent architecture, create a NAT Gateway in each Availability Zone and configure your routing to ensure that resources use the NAT Gateway in the same Availability Zone.

*An EC2 instance in a private subnet needs access to an S3 bucket placed in the same region as the EC2 instance. The EC2 instance needs to upload and download large files to the S3 bucket frequently. As an AWS solutions architect, what quick and cost-effective solution would you suggest to your customers? You need to consider the fact that since the EC2 instances are in a private subnet, the customers do not want their data to be exposed over the internet.* * Place the S3 bucket in another public subnet of the same region and create a VPC peering connection to the private subnet where the EC2 instance is placed. The traffic to upload and download files will go through Amazon's secure private network. * The quick and cost-effective solution would be to create an IAM role having access to the S3 service and assign it to the EC2 instance. * Create a VPC endpoint for S3, and use your route tables to control which instances can access resources in Amazon S3 via the endpoint. The traffic to upload and download files will go through the Amazon private network. * A private subnet can always access the S3 bucket/service through NAT Gateways or NAT instances, so there is no need for additional setup.

*Create a VPC endpoint for S3, and use your route tables to control which instances can access resources in Amazon S3 via the endpoint. The traffic to upload and download files will go through the Amazon private network.* This is the correct option. To access the S3 service from an EC2 instance in a private subnet of a VPC in the same region, you can create a VPC endpoint and update the route entry of the route table associated with the private subnet. This is a quick solution as well as cost-effective, as it uses Amazon's own private network and hence won't expose the data over the internet. ----------------------------------- Option A is incorrect because S3 is a regional service that lives outside your VPC; you cannot place an S3 bucket in a subnet, so VPC peering does not apply here. Option B is incorrect: it is indeed a quick solution, but without a VPC endpoint the traffic would still flow to the public S3 endpoint over the internet, exposing the data, and the requests would also carry a cost. Option D is incorrect, as this is not a default setup unless we create a NAT Gateway or NAT instance, and even then it is cost-expensive and exposes the data over the internet.

*A company has an on-premises infrastructure which they want to extend to the AWS Cloud. There is a need to ensure that the communication across both environments is possible over the Internet when initiated from on-premises. What would you create in this case to fulfill this requirement?* * Create a VPC peering connection between the on-premises and AWS Environment. * Create an AWS Direct connection between the on-premises and the AWS Environment. * Create a VPN connection between the on-premises and AWS Environment. * Create a Virtual private gateway connection between the on-premises and the AWS Environment.

*Create a VPN connection between the on-premises and the AWS Environment.* One can create a virtual private network (VPN) connection to establish communication across both environments over the Internet. A VPC VPN connection utilizes IPSec to establish encrypted network connectivity between your intranet and Amazon VPC over the Internet. VPN connections can be configured in minutes and are a good solution if you have an immediate need, have low to modest bandwidth requirements, and can tolerate the inherent variability in Internet-based connectivity. ----------------------------------- Option A is invalid because a VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IPv4 or IPv6 addresses. It is not used for connections between an on-premises environment and AWS. Option D is invalid because a virtual private gateway is only the Amazon VPC side of the VPN connection. For the communication to take place between the on-premises servers and the AWS EC2 instances in the VPC, we also need to set up the customer gateway at the on-premises location. *Note:* The question says that there is a need to ensure that the communication across both environments is possible *over the Internet*. AWS Direct Connect does not involve the Internet; instead, it uses a dedicated, private network connection between your intranet and Amazon VPC.

*A company hosts a popular web application that connects to an Amazon RDS MySQL DB instance running in a default VPC private subnet created with default ACL settings. The web servers must be accessible only to customers on an SSL connection and the database must only be accessible to web servers in a public subnet. Which solution meets these requirements without impacting other running applications?* (Choose 2) * Create a network ACL on the Web Server's subnets, allow HTTPS port 443 inbound and specify the source as 0.0.0.0/0 * Create a Web Server security group that allows HTTPS port 443 inbound traffic from anywhere (0.0.0.0/0) and apply it to the Web Servers. * Create a DB Server security group that allows MySQL port 3306 inbound and specify the source as the Web Server security group. * Create a network ACL on the DB subnet, allow MySQL port 3306 inbound for Web Servers and deny all outbound traffic. * Create a DB Server security groups that allows HTTPS port 443 inbound and specify the source as a Web Server security group.

*Create a Web Server security group that allows HTTPS port 443 inbound traffic from anywhere (0.0.0.0/0) and apply it to the Web Servers.* *Create a DB Server security group that allows MySQL port 3306 inbound and specify the source as the Web Server security group.* 1) To ensure that secure traffic can flow into your web server from anywhere, you need to allow inbound traffic on port 443. 2) And then, you need to ensure that traffic can flow from the web servers to the database server via the database security group. ----------------------------------- Options A and D are invalid answers. Network ACLs are stateless, so we would need to set rules for both inbound and outbound traffic. Option E is also invalid because to communicate with the MySQL servers we need to allow traffic through port 3306, not 443.
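A minimal sketch (assuming boto3; the security group IDs are illustrative) of the two ingress rules:

```python
import boto3

ec2 = boto3.client("ec2")

web_sg = "sg-0aaa1111bbbb2222c"  # illustrative web server SG ID
db_sg = "sg-0ddd3333eeee4444f"   # illustrative DB server SG ID

# Web servers: allow HTTPS (443) from anywhere.
ec2.authorize_security_group_ingress(
    GroupId=web_sg,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# DB servers: allow MySQL (3306) only from the web server security group.
ec2.authorize_security_group_ingress(
    GroupId=db_sg,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": web_sg}],
    }],
)
```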

*Your company is looking at decreasing the amount of time it takes to build servers which are deployed as EC2 instances. These instances always have the same type of software installed as per the security standards. As an architect, what would you recommend to decrease the server build time?* * Look at creating snapshots of EBS Volumes * Create the same master copy of the EBS volume * Create a base AMI * Create a base profile

*Create a base AMI* An Amazon Machine Image (AMI) provides the information required to launch an instance, which is a virtual server in the cloud. You must specify a source AMI when you launch an instance. You can launch multiple instances from a single AMI when you need multiple instances with the same configuration. You can use different AMIs to launch instances when you need instances with different configurations. ----------------------------------- Options A and B are incorrect since these cannot be used to create a master copy of the instance. Option D is incorrect because creating a profile will not assist.
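A minimal sketch (assuming boto3; the instance ID and image name are illustrative) of baking a base AMI from an already-configured instance:

```python
import boto3

ec2 = boto3.client("ec2")

# Bake a base AMI from an instance that already has the standard
# security software installed; new servers then launch from it.
image = ec2.create_image(
    InstanceId="i-0123456789abcdef0",  # illustrative instance ID
    Name="base-hardened-server-v1",    # illustrative image name
    Description="Standard build with required security software",
)
print(image["ImageId"])
```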

*You work as an architect for a company. An application is going to be deployed on a set of EC2 instances in a private subnet of VPC. You need to ensure that IT administrators can securely administer the instances in the private subnet. How can you accomplish this?* * Create a NAT gateway, ensure SSH access is provided to the NAT gateway. Access the Instances via the NAT gateway. * Create a NAT instance in a public subnet, ensure SSH access is provided to the NAT instance. Access the Instances via the NAT instance. * Create a bastion host in the private subnet. Make IT admin staff use this as a jump server to the backend instances. * Create a bastion host in the public subnet. Make IT admin staff use this as a jump server to the backend instances.

*Create a bastion host in the public subnet. Make IT admin staff use this as a jump server to the backend instances.* A bastion host is a server whose purpose is to provide access to a private network from an external network, such as the Internet. Because of its exposure to potential attack, a bastion host must minimize the chances of penetration. For example, you can use a bastion host to mitigate the risk of allowing SSH connections from an external network to the Linux instances launched in a private subnet of your Amazon Virtual Private Cloud (VPC) ----------------------------------- Option A and B are invalid because you would not route access via the NAT instance or the NAT gateway. Option C is incorrect since the bastion host needs to be in the public subnet.

*A company plans on deploying a batch processing application in AWS. Which of the following is an ideal way to host this application?* (Choose 2) * Copy the batch processing application to an ECS Container. * Create a docker image of your batch processing application. * Deploy the image as an Amazon ECS task. * Deploy the container behind the ELB.

*Create a docker image of your batch processing application.* *Deploy the image as an Amazon ECS task.* Docker containers are particularly suited for batch job workloads. Batch jobs are often short-lived and embarrassingly parallel. You can package your batch processing application into a Docker image so that you can deploy it anywhere, such as an Amazon ECS task.
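A hedged sketch (assuming boto3, an existing ECS cluster, and an image already pushed to a registry; all names are illustrative) of registering and running the batch job as an ECS task:

```python
import boto3

ecs = boto3.client("ecs")

# Register a task definition pointing at the batch job's Docker image.
ecs.register_task_definition(
    family="batch-job",  # illustrative family name
    containerDefinitions=[{
        "name": "batch-worker",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/batch:latest",
        "memory": 512,
        "essential": True,
    }],
)

# Run the short-lived job on the cluster; raising count fans out
# embarrassingly parallel work.
ecs.run_task(cluster="batch-cluster", taskDefinition="batch-job", count=1)
```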

*You are creating a new website which will both be public facing and private to users who have signed in. Due to the sensitive nature of the website, you need to enable two-factor authentication for when users sign in. Users will type in their user name and password, and then the website will need to text the users a 4-digit code to their mobile devices using SMS. The user will then enter this code and will be able to sign in successfully. You need to do this as quickly as possible. Which AWS service will allow you to build this as quickly as possible?* * Create a new pool in Cognito and set the MFA to required when creating the pool. * Create a new group in Identity and Access Management and assign the users of the website to the new group. Edit the group settings to require MFA so that the users in the group will inherit the permissions automatically. * Enable multi-factor authentication in Identity and Access Management for your root account. Set up an SNS trigger to SMS you the code for when the users sign in. * Design an MFA solution using a combination of API Gateway, Lambda, DynamoDB and SNS.

*Create a new pool in Cognito and set the MFA to required when creating the pool.* When creating the pool you can set MFA as Required. This will also enable use of Advanced security features which require MFA to function.
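A hedged sketch (assuming boto3; the pool name and the SNS caller role ARN used for SMS delivery are illustrative) of creating a user pool with MFA required:

```python
import boto3

cognito = boto3.client("cognito-idp")

# MFA is set to required at pool-creation time; SMS delivery needs
# an IAM role that Cognito can assume to publish through SNS.
cognito.create_user_pool(
    PoolName="website-users",  # illustrative pool name
    MfaConfiguration="ON",     # "ON" makes MFA required for sign-in
    SmsConfiguration={
        # Illustrative role ARN for Cognito's SNS access.
        "SnsCallerArn": "arn:aws:iam::123456789012:role/CognitoSNSRole",
    },
)
```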

*An application hosted in AWS allows users to upload video to an S3 bucket. A user is required to be given access to upload some video for a week based on the profile. How can this be accomplished in the best way possible?* * Create an IAM bucket policy to provide access for a week's duration. * Create a pre-signed URL for each profile which will last for a week's duration. * Create an S3 bucket policy to provide access for a week's duration. * Create an IAM role to provide access for a week's duration.

*Create a pre-signed URL for each profile which will last for a week's duration.* Pre-signed URLs are the perfect solution when you want to give temporary access to users for S3 buckets. So, whenever a new profile is created, you can create a pre-signed URL to ensure that the URL lasts for a week and allows users to upload the required objects.
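A minimal sketch (assuming boto3; the bucket and key are illustrative) of generating an upload URL valid for one week (604,800 seconds):

```python
import boto3

s3 = boto3.client("s3")

# Pre-signed PUT URL: whoever holds it can upload this object
# until the URL expires one week from now.
url = s3.generate_presigned_url(
    ClientMethod="put_object",
    Params={"Bucket": "video-uploads", "Key": "profiles/user123/intro.mp4"},
    ExpiresIn=604800,  # 7 days, the maximum for SigV4 pre-signed URLs
)
print(url)
```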

Your current setup in AWS consists of the following architecture: 2 public subnets, one subnet which has web servers accessed by users across the Internet and another subnet for the database server. Which of the following changes to the architecture adds a better security boundary to the resources hosted in this setup? * Consider moving the web server to a private subnet. * Create a private subnet and move the database server to a private subnet. * Consider moving both the web and database servers to a private subnet. * Consider creating a private subnet and adding a NAT Instance to that subnet.

*Create a private subnet and move the database server to a private subnet.* The ideal setup is to host the web server in the public subnet so that it can be accessed by users on the Internet. The database server can be hosted in the private subnet.

*Your team has developed an application and now needs to deploy that application onto an EC2 Instance. This application interacts with a DynamoDB table. Which of the following is the correct and MOST SECURE way to ensure that the application interacts with the DynamoDB table?* * Create a role which has the necessary permissions and can be assumed by the EC2 instance. * Use the API credentials from an EC2 instance. Ensure the environment variables are updated with the API access keys. * Use the API credentials from a bastion host. Make the application on the EC2 Instance send requests via the bastion host. * Use the API credentials from a NAT Instance. Make the application on the EC2 Instance send requests via the NAT Instance

*Create a role which has the necessary permissions and can be assumed by the EC2 instance.* IAM roles are designed in such a way that your applications can securely make API requests from your instances, without requiring you to manage the security credentials that the applications use. ----------------------------------- Options B, C, and D are invalid because it is not secure to use API credentials from any EC2 instance. The API credentials can be tampered with and hence are not the ideal secure way to make API calls.

*You are hosting a web site on AWS. The web site has two components: the web server that is hosted on the EC2 server in a public subnet and the database server that is hosted on RDS in a public subnet. Your users must use SSL to connect to the web servers, and the web server should be able to connect to the database server. What should you do in this scenario?* (Choose two.) * Create a security group for the web server. The security group should allow HTTPS traffic on port 443 for inbound traffic from 0.0.0.0/0 (anywhere). * Create a security group for the database server. The security group should allow HTTPS traffic from port 443 for inbound traffic from the web server security group. * Create an ACL for the web server's subnet, which should allow HTTPS on port 443 from 0.0.0.0/0 for inbound traffic and deny all outgoing traffic. * Create an ACL for the database server's subnet, which should allow 1521 inbound from a web server as inbound traffic and deny all outgoing traffic. * Create a security group for the database server. The security group should allow traffic on the 1521 TCP port from the web server security group.

*Create a security group for the web server. The security group should allow HTTPS traffic on port 443 for inbound traffic from 0.0.0.0/0 (anywhere).* *Create a security group for the database server. The security group should allow traffic on the 1521 TCP port from the web server security group.* By using a security group, you can define the inbound and outbound traffic for a web server. Anyone should be able to connect to your web server from anywhere; thus, you need to allow 0.0.0.0/0. The web server is running on port 443; thus, you need to allow that port. On the database server, only the web server should be given access, and the port where the database operates is 1521. ------------------------- This question has two aspects. First, all users should be able to connect to the web server via SSL, and second, the web server should be able to connect to the database server. Since security groups are used to allow traffic both for the EC2 instance and for RDS, you should discard the options that mention ACLs, since ACLs operate at the subnet level and not at the instance level. Therefore, C and D are incorrect. In addition, the web server should be able to connect to the database server, which means the database server's port should be open to the web server. Therefore, B is incorrect, since it opens port 443 on the database server instead of the database port (1521) that actually needs to be open.

*You work for a company that has a set of EC2 Instances. There is an internal requirement to create another instance in another availability zone. One of the EBS volumes from the current instance needs to be moved from one of the older instances to the new instance. How can you achieve this?* * Detach the volume and attach to an EC2 instance in another AZ. * Create a new volume in the other AZ and specify the current volumes as the source. * Create a snapshot of the volume and then create a volume from the snapshot in the other AZ. * Create a new volume in the AZ and do a disk copy of contents from one volume to another.

*Create a snapshot of the volume and then create a volume from the snapshot in the other AZ.* In order for a volume to be available in another Availability Zone, you need to first create a snapshot from the volume. Then, when creating a volume from the snapshot, you can specify the new Availability Zone accordingly. ----------------------------------- Option A is invalid because the instance and volume have to be in the same AZ in order for the volume to be attached to the instance. Option B is invalid because there is no way to specify a volume as a source. Option D is invalid because a disk copy would just be a tedious process.
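A minimal sketch (assuming boto3; the volume ID and AZs are illustrative) of moving a volume to another AZ via a snapshot:

```python
import boto3

ec2 = boto3.client("ec2")

# 1) Snapshot the source volume.
snapshot = ec2.create_snapshot(VolumeId="vol-0123456789abcdef0")  # illustrative

# Wait until the snapshot completes before creating a volume from it.
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot["SnapshotId"]])

# 2) Create a new volume from the snapshot in the target AZ.
ec2.create_volume(
    SnapshotId=snapshot["SnapshotId"],
    AvailabilityZone="us-east-1b",  # illustrative target AZ
)
```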

*You are running your EC2 instance in the us-west-1 AZ. You are using an EBS volume along with the EC2 server to store the data. You need to move this instance to us-west-2. What is the best way to move all the data?* * Unmount or detach the EBS volume from us-west-1 and mount it to the new EC2 instance in us-west-2. * Create a snapshot of the volume in us-west-1 and create a volume from the snapshot in us-west-2 and then mount it to the EC2 server. * Copy all the data from EBS running on us-west-1 to S3 and then create a new EBS volume in us-west-2 and restore the data from S3 into it. * Create a new volume in us-west-2 and copy everything from us-west-1 to us-west-2 using a disk copy.

*Create a snapshot of the volume in us-west-1 and create a volume from the snapshot in us-west-2 and then mount it to the EC2 server.* The EBS volumes are AZ specific. Therefore, you need to create a snapshot of the volume and then use the snapshot to create a volume in a different AZ. ------------------------- The EBS volume can't be mounted across different AZs. Taking a snapshot is going to copy all the data, so you don't have to copy all the data manually or via disk copy.

*A company is planning on hosting an application in AWS. The application will consist of a web layer and database layer. Both will be hosted in a default VPC. The web server is created in a public subnet and the MySQL database in a private subnet. All subnets are created with the default ACL setting. Following are the key requirements: a) The web server must be accessible only to customers on an SSL connection. b) The database should only be accessible to web servers in a public subnet. Which solution meets these requirements without impacting other running applications?* (Choose 2) * Create a network ACL on the web server's subnets, allow HTTPS port 443 inbound and specify the source as 0.0.0.0/0. * Create a web server security group that allows HTTPS port 443 inbound traffic from anywhere (0.0.0.0/0) and apply it to the web servers. * Create a DB server security group that allows MySQL port 3306 inbound and specify the source as the web server security group. * Create a network ACL on the DB subnet, allow MYSQL port 3306 inbound for web servers and deny all outbound traffic.

*Create a web server security group that allows HTTPS port 443 inbound traffic from anywhere (0.0.0.0/0) and apply it to the web servers.* *Create a DB server security group that allows MySQL port 3306 inbound and specify the source as the web server security group.* 1) Option B: To ensure that secure traffic can flow into your web server from anywhere, you need to allow inbound traffic on port 443. 2) Option C: This ensures that traffic can flow from the web servers to the database server via the database security group.

*Your company is planning on using Route 53 as the DNS provider. There is a need to ensure that the company's domain name points to an existing CloudFront distribution. How can this be achieved?* * Create an Alias record which points to the CloudFront distribution. * Create a host record which points to the CloudFront distribution. * Create a CNAME record which points to the CloudFront distribution. * Create a Non-Alias Record which points to the CloudFront distribution.

*Create an Alias record which points to the CloudFront distribution.* While ordinary Amazon Route 53 records are standard DNS records, alias records provide a Route 53-specific extension to DNS functionality. Instead of an IP address or a domain name, an alias record contains a pointer to a CloudFront distribution, an Elastic Beanstalk environment, an ELB Classic, Application, or Network Load Balancer, an Amazon S3 bucket that is configured as a static website, or another Route 53 record in the same hosted zone. When Route 53 receives a DNS query that matches the name and type in an alias record, Route 53 follows the pointer and responds with the applicable value. *Note:* Route 53 uses an alias record to connect to CloudFront because an alias record is a Route 53 extension to DNS. An alias record is similar to a CNAME record, but the main difference is that you can create an alias record for both the root domain and sub-domains, whereas a CNAME record can be created only for a sub-domain.
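A hedged sketch (assuming boto3; the hosted zone ID, domain, and distribution domain name are illustrative; Z2FDTNDATAQYW2 is used here as CloudFront's fixed alias hosted zone ID, worth verifying against current documentation):

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",  # illustrative: the domain's hosted zone
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "A",
                "AliasTarget": {
                    # CloudFront distributions use this fixed hosted zone ID.
                    "HostedZoneId": "Z2FDTNDATAQYW2",
                    "DNSName": "d111111abcdef8.cloudfront.net",
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)
```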

*You are working as a consultant for a start-up firm. They have developed a web application for employees to enable secure file sharing with external vendors. They created an Auto Scaling group for web servers which requires two m4.large EC2 instances running at all times, scaling up to a maximum of twelve instances. After deploying this application, a huge rise in billing was observed. Due to a limited budget, the CTO has requested your advice to optimise usage of instances in Auto Scaling groups. What will be the best solution to reduce cost without any performance impact?* * Create an Auto Scaling group with t2.micro On-Demand instances. * Create an Auto Scaling group with a mix of On-Demand & Spot Instances. Select On-Demand base as 0. Above On-Demand base, select 100% On-Demand instances & 0% Spot Instances. * Create an Auto Scaling group with all Spot Instances. * Create an Auto Scaling group with a mix of On-Demand & Spot Instances. Select On-Demand base as 2. Above On-Demand base, select 20% On-Demand instances & 80% Spot Instances.

*Create an Auto Scaling group with a mix of On-Demand & Spot Instances. Select On-Demand base as 2. Above On-Demand base, select 20% On-Demand instances & 80% Spot Instances.* An Auto Scaling group supports a mix of On-Demand & Spot instances, which helps to design a cost-optimised solution without any performance impact. You can choose the percentage of On-Demand & Spot instances based upon application requirements. With Option D, the Auto Scaling group will keep the initial 2 instances as On-Demand instances, while the remaining instances will be launched in a ratio of 20% On-Demand instances & 80% Spot Instances. ----------------------------------- Option A is incorrect: although t2.micro would reduce cost, it would impact the performance of the application. Option B is incorrect, as there would not be any cost reduction with all On-Demand instances. Option C is incorrect: although this would reduce cost, all Spot instances in an Auto Scaling group may cause inconsistencies in the application & lead to poor performance.
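A hedged sketch (assuming boto3 and an existing launch template; all names and subnet IDs are illustrative) of the Option D distribution:

```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-file-sharing",  # illustrative name
    MinSize=2,
    MaxSize=12,
    VPCZoneIdentifier="subnet-0aaa1111,subnet-0bbb2222",  # illustrative subnets
    MixedInstancesPolicy={
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateName": "web-server",  # illustrative template
                "Version": "$Latest",
            }
        },
        "InstancesDistribution": {
            # First 2 instances are always On-Demand...
            "OnDemandBaseCapacity": 2,
            # ...and above that base, 20% On-Demand / 80% Spot.
            "OnDemandPercentageAboveBaseCapacity": 20,
        },
    },
)
```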

*Your company has a set of applications that make use of Docker containers used by the Development team. There is a need to move these containers to AWS. Which of the following methods could be used to set up these Docker containers in a separate environment in AWS?* * Create EC2 Instances, install Docker and then upload the containers. * Create EC2 Container registries, install Docker and then upload the containers. * Create an Elastic Beanstalk environment with the necessary Docker containers. * Create EBS Optimized EC2 Instances, install Docker and then upload the containers.

*Create an Elastic Beanstalk environment with the necessary Docker containers.* Elastic Beanstalk supports the deployment of web applications from Docker containers. With Docker containers, you can define your own runtime environment. You can choose your own platform, programming language, and any application dependencies (such as package managers or tools) that aren't supported by other platforms. Docker containers are self-contained and include all the configuration information and software your web application requires to run. ----------------------------------- Option A could be partially correct, as we would need to install Docker on the EC2 instances. In addition to this, you would need to create an ECS task definition which details the Docker image to use for the containers, how many containers to run, and the resource allocation for each container. But with Option C, we have the added advantage that if a Docker container running in an Elastic Beanstalk environment crashes or is killed for any reason, Elastic Beanstalk restarts it automatically. Since the question asks about the best method to set up Docker containers, Option C is the most appropriate.

*You are running an application on EC2 instances, and you want to add a new functionality to your application. To add the functionality, your EC2 instance needs to write data in an S3 bucket. Your EC2 instance is already running, and you can't stop/reboot/terminate it to add the new functionality. How will you achieve this?* (Choose two.) * Create a new user who has access to EC2 and S3. * Launch a new EC2 instance with an IAM role that can access the S3 bucket. * Create an IAM role that allows write access to S3 buckets. * Attach the IAM role that allows write access to S3 buckets to the running EC2 instance.

*Create an IAM role that allows write access to S3 buckets.* *Attach the IAM role that allows write access to S3 buckets to the running EC2 instance.* ------------------------- Since the application needs to write to an S3 bucket, if you create a new user who has access to EC2 and S3, it won't help. You can't launch new EC2 servers because the question clearly says that you can't stop or terminate the existing EC2 server.
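As a rough sketch of the attach step with boto3, assuming an instance profile named s3-write-profile already wraps the role (all identifiers below are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Attach the instance profile (the vehicle that carries the S3-write IAM
# role) to the already-running instance -- no stop/reboot required.
ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": "s3-write-profile"},
    InstanceId="i-0123456789abcdef0",
)
```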

*A customer is hosting their company website on a cluster of web servers that are behind a public-facing load balancer. The customer also uses Amazon Route 53 to manage their public DNS. How should Route 53 be configured to ensure the custom domain is made to point to the load balancer?* (Choose 2) * Create an A record pointing to the IP address of the load balancer. * Create a CNAME record pointing to the load balancer DNS name. * Create an alias record for a CNAME record to the load balancer DNS name. * Ensure that a hosted zone is in place.

*Create an alias record for a CNAME record to the load balancer DNS name.* *Ensure that a hosted zone is in place.* While ordinary Amazon Route 53 records are standard DNS records, alias records provide a Route 53-specific extension to DNS functionality. Instead of an IP address or a domain name, an alias record contains a pointer to an AWS resource such as a CloudFront distribution or an Amazon S3 bucket. When Route 53 receives a DNS query that matches the name and type in an alias record, it follows the pointer and responds with the applicable value:
- An alternate domain name for a CloudFront distribution - Route 53 responds as if the query had asked for the CloudFront distribution, using the CloudFront domain name, such as d111111abcdef8.cloudfront.net.
- An Elastic Beanstalk environment - Route 53 responds to each query with one or more IP addresses for the environment.
- An ELB load balancer - Route 53 responds to each query with one or more IP addresses for the load balancer.
- An Amazon S3 bucket that is configured as a static website - Route 53 responds to each query with one IP address for the Amazon S3 bucket.
Option D is correct. A Hosted Zone is a container for records, and records contain information about how you want to route traffic for a specific domain, such as example.com, and its subdomains (vpc.example.com, elb.example.com). A hosted zone and the corresponding domain have the same name, and there are 2 types of hosted zones: a Public Hosted Zone contains records that specify how you want to route traffic on the internet, while a Private Hosted Zone contains records that specify how you want to route traffic in an Amazon VPC. ----------------------------------- Options A and B are incorrect since you need to use alias records for this.
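A minimal sketch of creating such an alias record with boto3; the hosted zone IDs and load balancer DNS name are placeholders:

```python
import boto3

route53 = boto3.client("route53")

# UPSERT an alias A record that points the custom domain at the load
# balancer. Zone IDs and the ELB DNS name are placeholders.
route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",  # your public hosted zone
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "example.com",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "Z35EXAMPLE",  # the load balancer's zone ID
                    "DNSName": "my-lb-1234567890.us-east-1.elb.amazonaws.com",
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)
```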

*You are building an internal training system to train your employees. You are planning to host all the training videos on S3. You are also planning to use CloudFront to serve the videos to employees. Using CloudFront, how do you make sure that videos can be accessed only via CloudFront and not publicly directly from S3?* * Create an origin access identity (OAI) for CloudFront and grant access to the object in your S3 bucket to that OAI. * Create an IAM user for CloudFront and grant access to the objects in your S3 bucket to that IAM user. * Create a security group for CloudFront and provide that security group to S3. * Encrypt the object in S3.

*Create an origin access identity (OAI) for CloudFront and grant access to the object in your S3 bucket to that OAI.* By creating an origin access identity and granting it access in the bucket policy, the objects in your S3 bucket can be reached only via CloudFront. ------------------------- Creating an IAM user won't guarantee that the object is accessed only via CloudFront. You can use a security group for EC2 and ELB along with CloudFront but not for S3. Encryption does not change the way an object can be accessed.

*You are a developer at a fast growing start up. Until now, you have used the root account to log in to the AWS console. However, as you have taken on more staff, you will now need to stop sharing the root account to prevent accidental damage to your AWS infrastructure. What should you do so that everyone can access the AWS resources they need to do their jobs?* (Choose 2) * Create individual user accounts with minimum necessary rights and tell the staff to log in to the console using the credentials provided. * Create a customized sign in link such as "yourcompany.signin.aws.amazon.com/console" for your new users to use to sign in with. * Create an additional AWS root account for each new user. * Give your users the root account credentials so that they can also sign in.

*Create individual user accounts with minimum necessary rights and tell the staff to log in to the console using the credentials provided.* *Create a customized sign in link such as "yourcompany.signin.aws.amazon.com/console" for your new users to use to sign in with.*

*What are the different ways of making an EC2 server available to the public?* * Create it inside a public subnet * Create it inside a private subnet and assign a NAT device * Attach an IPv6 IP address * Associate it with a load balancer and expose the load balancer to the public

*Create it inside a public subnet* If you create an EC2 instance in a public subnet, it is available from the Internet. Creating an instance inside a private subnet and attaching a NAT device won't give access from the Internet, since a NAT device only allows outbound connections. Attaching an IPv6 address can provide Internet accessibility provided it is a public IPv6 address and not a private one. Exposing a load balancer to the public exposes only the load balancer, not the EC2 instance itself.

*You are creating a number of EBS Volumes for the EC2 Instances hosted in your company's AWS account. The company has asked you to ensure that the EBS volumes are available even in the event of a disaster. How would you accomplish this?* * Configure Amazon Storage Gateway with EBS volumes as the data source and store the backups on premise through the storage gateway. * Create snapshots of the EBS Volumes. * Ensure the snapshots are made available in another availability zone. * Ensure the snapshots are made available in another region.

*Create snapshots of the EBS Volumes.* *Ensure the snapshots are made available in another region.* You can back up the data on Amazon EBS volumes to Amazon S3 by taking point-in-time snapshots. Snapshots are *incremental backups*, which means that only the blocks on the device that have changed after your most recent snapshot are saved. This minimizes the time required to create the snapshot and saves on storage costs by not duplicating data. When you delete a snapshot, only the data unique to that snapshot is removed. Each snapshot contains all the information needed to restore your data (from the moment when the snapshot was taken) to a new EBS volume. ----------------------------------- Option A is incorrect since you have to make use of EBS snapshots. Option C is incorrect since the snapshots need to be made available in another region for disaster recovery purposes.
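A minimal sketch of the snapshot-and-copy flow with boto3; volume IDs and regions are placeholders:

```python
import boto3

# Snapshot the volume in the source region...
source = boto3.client("ec2", region_name="us-east-1")
snap = source.create_snapshot(
    VolumeId="vol-0123456789abcdef0", Description="DR backup"
)
source.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# ...then copy it to the disaster recovery region.
dest = boto3.client("ec2", region_name="us-west-2")
dest.copy_snapshot(
    SourceRegion="us-east-1",
    SourceSnapshotId=snap["SnapshotId"],
    Description="DR copy",
)
```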

*You are running your MySQL database in RDS. The database is critical for you, and you can't afford to lose any data in the case of any kind of failure. What kind of architecture will you go with for RDS?* * Create the RDS across multiple regions using a cross-regional read replica * Create the RDS across multiple AZs in master standby mode * Create the RDS and create multiple read replicas in multiple AZs with the same region * Create a multimaster RDS database across multiple AZs

*Create the RDS across multiple AZs in master standby mode* If you use a cross-regional replica and a read replica within the same region, the data replication happens asynchronously, so there is a chance of data loss. Multimaster is not supported in RDS. By creating the master and standby architecture, the data replication happens synchronously, so there is zero data loss.
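A minimal sketch of provisioning such a Multi-AZ instance with boto3; identifiers and credentials are placeholders (in practice, pull the password from a secrets store):

```python
import boto3

rds = boto3.client("rds")

# MultiAZ=True makes RDS keep a synchronous standby in a second AZ and
# fail over to it automatically. All values are placeholders.
rds.create_db_instance(
    DBInstanceIdentifier="orders-db",
    Engine="mysql",
    DBInstanceClass="db.m5.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="example-only-use-a-secret-store",
    MultiAZ=True,
)
```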

*What are the various ways of securing a database running in RDS?* (Choose two.) * Create the database in a private subnet * Encrypt the entire database * Create the database in multiple AZs * Change the IP address of the database every week

*Create the database in a private subnet* *Encrypt the entire database* Creating the database in multiple AZs is going to provide high availability and has nothing to do with security. Changing the IP address every week would be a painful activity and still won't secure the database if you don't encrypt it.

*You are deploying a three-tier application and want to make sure the application is secured at all layers. What should you be doing to make sure it is taken care of?* * Create the web tier in a public subnet, and create the application and database tiers in the private subnet. Use HTTPS for all the communication to the web tier and encrypt the data at rest and in transit. * Create the web tier and application tier in the public subnet, and create the database tier in the private subnet. Use HTTPS for all the communication to the web tier and encrypt the data at rest and in transit. * Create the web tier in the public subnet, and create the application and database tiers in the private subnet. Use HTTP for all the communication to the web tier and encrypt the data at rest and in transit. * Create the web tier in the public subnet, and create the application and database tiers in a private subnet. Use HTTP for all the communication to the web tier. There is no need to encrypt the data since it is already running in AWS.

*Create the web tier in a public subnet, and create the application and database tiers in the private subnet.* In this scenario, you need to put the database and application servers in the private subnet and only the web tier in the public subnet. HTTPS will make sure the data comes to the web server securely. Encryption of the data at rest and in transit will make sure that you have end-to-end security in the overall system. ------------------------- B is incorrect because you are putting the application server in the public subnet, and C is incorrect because you are using HTTP instead of HTTPS. D is incorrect because you are not encrypting the data.

*Your company currently has data hosted in an Amazon Aurora MySQL DB. Since this data is critical, there is a need to ensure that it can be made available in another region in case of a disaster. How can this be achieved?* * Make a copy of the underlying EBS Volumes in the Amazon Cluster in another region. * Enable Multi-AZ for the Aurora database. * Creating a read replica of Amazon Aurora in another region. * Create an EBS Snapshot of the underlying EBS Volumes in the Amazon Cluster and then copy them to another region.

*Creating a read replica of Amazon Aurora in another region.* Read replicas in Amazon RDS for MySQL, MariaDB, PostgreSQL, and Oracle provide a complementary availability mechanism to Amazon RDS Multi-AZ Deployments. You can promote a read replica if the source DB instance fails. You can also replicate DB instances across AWS Regions as part of your disaster recovery strategy. This functionality complements the synchronous replication, automatic failure detection, and failover provided with Multi-AZ deployments. You can create an Amazon Aurora MySQL DB cluster as a Read Replica in a different AWS Region than the source DB cluster. Taking this approach can improve your disaster recovery capabilities, let you scale read operations into the region that is closer to your users, and make it easier to migrate from one region to another. *Note:* In the question, they clearly mentioned that *there is a need to ensure that it can be made available in another region in case of a disaster.* So Multi-AZ is not a region-wide solution here.
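A rough sketch of requesting a cross-region Aurora read replica with boto3, run against the destination region (ARNs and names are placeholders; a DB instance still has to be added to the new cluster afterwards):

```python
import boto3

# In the DR region, create a cluster that replicates from the primary
# cluster in the source region. ARN and identifiers are placeholders.
rds_dr = boto3.client("rds", region_name="us-west-2")
rds_dr.create_db_cluster(
    DBClusterIdentifier="aurora-dr-replica",
    Engine="aurora-mysql",
    ReplicationSourceIdentifier=(
        "arn:aws:rds:us-east-1:123456789012:cluster:aurora-primary"
    ),
)
```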

*Your recent security review revealed a large spike in attempted logins to your AWS account with respect to sensitive data stored in S3. The data has not been encrypted and is susceptible to fraud if it were to be stolen. You've recommended AWS Key Management Service as a solution. Which of the following is true regarding how KMS operates?* * Only KMS generated keys can be used to encrypt or decrypt data. * Data is encrypted at rest. * KMS allows all users and roles use of the keys by default. * Data is decrypted in transit.

*Data is encrypted at rest.* Data is encrypted at rest, meaning data is encrypted once uploaded to S3. Encryption while in transit is handled by SSL or by using client-side encryption. ----------------------------------- Option A is incorrect; data can be encrypted/decrypted using AWS keys or keys provided by your company. Option C is incorrect; users are granted permissions explicitly, not by default by KMS. Option D is incorrect; data is not decrypted in transit (while moving to and from S3). Data is encrypted or decrypted while in S3, and while in transit it can be protected using SSL.

*Which instance type runs on hardware allocated to a single customer?* * EC2 spot instance * Dedicated instance * Reserved instance * On-demand instance

*Dedicated instance* A dedicated instance runs on hardware allocated to a single customer. Dedicated instances are physically isolated at the host hardware level from instances that belong to other AWS accounts. ------------------------- Spot, reserved, and on-demand instances are different pricing models for EC2 instances.

*You have both production and development based instances running on your VPC. It is required to ensure that the people responsible for the development instances do not have access to work on production instances for better security. Which of the following would be the best way to accomplish this using policies?* * Launch the development and production instances in separate VPCs and use VPC Peering. * Create an IAM Policy with a condition that allows access to only instances which are used for production or development. * Launch the development and production instances in different Availability Zones and use the Multi-Factor Authentication. * Define the tags on the Development and production servers and add a condition to the IAM Policy which allows access to specific tags.

*Define the tags on the Development and production servers and add a condition to the IAM Policy which allows access to specific tags.* You can easily add tags to define which instances are production and which ones are development. These tags can then be used while controlling access via an IAM Policy. *Note:* It can be done with the help of Option B as well. However, the question is looking for the 'best way to accomplish this using policies'. By using Option D, you can reduce the number of different IAM policies needed for each instance.
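A minimal sketch of such a tag-conditioned policy created with boto3; the tag key/value and policy name are placeholders:

```python
import boto3, json

iam = boto3.client("iam")

# Allow instance operations only on EC2 resources tagged
# Environment=Development. Tag key/value are placeholders.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["ec2:StartInstances", "ec2:StopInstances",
                   "ec2:RebootInstances"],
        "Resource": "*",
        "Condition": {
            "StringEquals": {"ec2:ResourceTag/Environment": "Development"}
        },
    }],
}

iam.create_policy(PolicyName="dev-instances-only",
                  PolicyDocument=json.dumps(policy))
```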

*You are working as an AWS Architect for a start-up company. It has a production website on AWS which is two-tier, with web servers in the front end & database servers in the back end. A third-party firm has been looking after the operations of these database servers. They need to access these database servers in private subnets on the SSH port. As per the standard operating procedure provided by the Security team, all access to these servers should be over a secure layer & should be logged. What will be the best solution to meet this requirement?* * Deploy Bastion hosts in Private Subnet * Deploy NAT Instance in Private Subnet * Deploy NAT Instance in Public Subnet * Deploy Bastion hosts in Public Subnet.

*Deploy Bastion hosts in Public Subnet* External users are unable to access instances in private subnets directly. To provide such access, we need to deploy Bastion hosts in public subnets. For the above requirement, third-party users will initiate a connection to the Bastion hosts in public subnets & from there they will make SSH connections to the database servers in private subnets. ----------------------------------- Option A is incorrect as Bastion hosts need to be in public subnets, not private subnets, since third-party users will be accessing these servers from the internet. Option B is incorrect as NAT instances are used to provide internet traffic to hosts in private subnets; users from the internet will not be able to make SSH connections to hosts in private subnets using a NAT instance, and NAT instances are always in public subnets. Option C is incorrect for the same reason: a NAT instance only provides outbound internet access for hosts in private subnets, not inbound SSH access.

*A customer is hosting their company website on a cluster of web servers that are behind a public-facing load balancer. The web application interfaces with an AWS RDS database. It has been noticed that a set of similar types of queries is causing a performance bottleneck at the database layer. Which of the following architecture additions can help alleviate this issue?* * Deploy ElastiCache in front of the web servers. * Deploy ElastiCache in front of the database servers. * Deploy Elastic Load Balancer in front of the web servers. * Enable Multi-AZ for the database.

*Deploy ElastiCache in front of the database servers.* Amazon ElastiCache offers fully managed Redis and Memcached. Seamlessly deploy, operate, and scale popular open source compatible in-memory data stores. Build data-intensive apps or improve the performance of your existing apps by retrieving data from high throughput and low latency in-memory data stores. ----------------------------------- Option A is incorrect since the database is the bottleneck, hence ElastiCache should be placed in front of the database servers. Option C is incorrect since there is an issue with the database servers, so we don't need to add anything for the web servers. Option D is incorrect since Multi-AZ is used for high availability of the database, not performance.
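To show the usual pattern, a minimal cache-aside sketch using the third-party redis client against an ElastiCache endpoint; the endpoint, key scheme, and fetch_from_db() helper are hypothetical:

```python
import json
import redis  # third-party client; ElastiCache for Redis speaks the Redis protocol

def fetch_from_db(product_id):
    # Hypothetical stand-in for the slow, repeated database query.
    return {"id": product_id, "name": "example"}

cache = redis.Redis(host="my-cluster.abc123.use1.cache.amazonaws.com", port=6379)

def get_product(product_id):
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)           # cache hit: skip the database
    row = fetch_from_db(product_id)         # cache miss: run the query once
    cache.setex(key, 300, json.dumps(row))  # keep the result for 5 minutes
    return row
```

Repeated identical queries are then served from memory instead of hitting the database.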

*You have a business-critical two-tier web application currently deployed in 2 Availability Zones in a single region, using Elastic Load Balancing and Auto Scaling. The app depends on synchronous replication at the database layer. The application needs to remain fully available even if one application AZ goes offline and if the Auto Scaling cannot launch new instances in the remaining AZ. How can the current architecture be enhanced to ensure this?* * Deploy in 2 regions using Weighted Round Robin with Auto Scaling minimums set at 50% peak load per region. * Deploy in 3 AZ with Auto Scaling minimum set to handle 33 percent peak load per zone. * Deploy in 3 AZ with Auto Scaling minimum set to handle 50 percent peak load per zone. * Deploy in 2 regions using Weighted Round Robin with Auto Scaling minimums set at 100% peak load per region.

*Deploy in 3 AZ with Auto Scaling minimum set to handle 50 percent peak load per zone.* Since the requirement states that the application should remain fully available even if one AZ goes offline and Auto Scaling cannot launch new instances in the remaining AZs, we need to maintain 100% of peak capacity after losing an AZ. ----------------------------------- Options A and D are incorrect because region deployment is not possible for ELB. ELBs can manage traffic within a region, not between regions. Option B is incorrect because with 3 AZs at a 33% minimum each, losing one AZ leaves only 33% + 33% = 66% of peak capacity, so 34% of the load cannot be served. With Option C (3 AZs at a 50% minimum each), losing one AZ still leaves 50% + 50% = 100% of peak capacity.

*One of your users is trying to upload a 7.5GB file to S3. However, they keep getting the following error message: "Your proposed upload exceeds the maximum allowed object size." What solution to this problem does AWS recommend?* * Design your application to use the Multipart Upload API for all objects. * Raise a ticket with AWS to increase your maximum object size. * Design your application to use the large object upload API for this object. * Log in to the S3 console, click on the bucket and then click properties. You can then increase your maximum object size to 1TB.

*Design your application to use the Multipart Upload API for all objects.* A single PUT can upload an object of at most 5 GB; larger objects (up to 5 TB) must be uploaded with the Multipart Upload API, which AWS recommends for any object over 100 MB.
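A minimal sketch with boto3, which handles the Multipart Upload API automatically once the configured threshold is crossed; bucket and file names are placeholders:

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Anything over 100 MB is split into parts and uploaded in parallel via the
# Multipart Upload API. Bucket/file names are placeholders.
config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,
    multipart_chunksize=100 * 1024 * 1024,
)
s3.upload_file("big-file.bin", "my-bucket", "uploads/big-file.bin",
               Config=config)
```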

*You have created a VPC with the CIDR block 10.0.0.0/16 and have created a public subnet and a private subnet, 10.0.0.0/24 and 10.0.1.0/24, respectively, within it. Which entries should be present in the main route table to allow the instances in the VPC to communicate with each other?* * Destination: 10.0.0.0/0 and Target ALL * Destination: 10.0.0.0/16 and Target ALL * Destination: 10.0.0.0/24 and Target VPC * Destination: 10.0.0.0/16 and Target Local

*Destination: 10.0.0.0/16 and Target Local* By specifying Target Local, you allow the instances in the VPC to communicate with each other. ----------------------------------- Target ALL and Target VPC are not valid options.

*A global media firm is using AWS CodePipeline as an automation service for releasing new features to customers. All the code is uploaded to an Amazon S3 bucket. Changes to files stored in the Amazon S3 bucket should trigger AWS CodePipeline, which will further initiate AWS Elastic Beanstalk for deploying additional resources. Which of the following is an additional requirement which needs to be configured to trigger CodePipeline in a faster way?* * Enable periodic checks & create a Webhook which triggers the pipeline once the S3 bucket is updated. * Disable periodic checks & create an Amazon CloudWatch Events rule & AWS CloudTrail trail. * Enable periodic checks & create an Amazon CloudWatch Events rule & AWS CloudTrail trail. * Disable periodic checks & create a Webhook which triggers the pipeline once the S3 bucket is updated.

*Disable periodic checks & create an Amazon CloudWatch Events rule & AWS CloudTrail trail.* To automatically trigger the pipeline on changes in the source S3 bucket, an Amazon CloudWatch Events rule & an AWS CloudTrail trail must be applied. When there is a change in the S3 bucket, events are filtered using AWS CloudTrail & then Amazon CloudWatch Events are used to trigger the start of the pipeline. This default method is faster, & periodic checks should be disabled to have event-based triggering of CodePipeline. ----------------------------------- Option A is incorrect as Webhooks are used to trigger a pipeline when the source is a GitHub repository; also, periodic checks would be a slower way to trigger CodePipeline. Option C is incorrect as periodic checks are not a faster way to trigger CodePipeline. Option D is incorrect as Webhooks are used to trigger a pipeline when the source is a GitHub repository.

*Which statement best describes Availability Zones?* * Two zones containing compute resources that are designed to automatically maintain synchronized copies of each other's data. * Distinct locations from within an AWS region that are engineered to be isolated from failures. * A Content Distribution Network used to distribute content to users. * Restricted areas designed specifically for the creation of Virtual Private Clouds.

*Distinct locations from within an AWS region that are engineered to be isolated from failures.* An Availability Zone (AZ) is a distinct location within an AWS Region. ----------------------------------- Each Region comprises at least two AZs.

*A team is planning to host data on the AWS Cloud. Following are the key requirements: a) Ability to store JSON documents b) High availability and durability. Select the ideal storage mechanism that should be employed to fit this requirement.* * Amazon EFS * Amazon Redshift * DynamoDB * AWS CloudFormation

*DynamoDB* Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. Data in DynamoDB is stored in JSON format, and hence it is the perfect data store for the requirement in question. DynamoDBMapper has a feature that allows you to save an object as a JSON document in a DynamoDB attribute. The mapper does the heavy lifting of converting the object into a JSON document and storing it in DynamoDB. DynamoDBMapper also takes care of loading the Java object from the JSON document when requested by the user.
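A minimal sketch of storing a JSON-style document with boto3 (the table name and key schema are placeholders):

```python
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("TrainingCourses")  # placeholder table, key: course_id

# DynamoDB persists the nested document (maps and lists) as-is.
table.put_item(Item={
    "course_id": "aws-101",
    "title": "Intro to AWS",
    "modules": [
        {"name": "EC2 basics", "minutes": 45},
        {"name": "S3 basics", "minutes": 30},
    ],
})
```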

*A team is building an application that must persist and index JSON data in a highly available data store. Latency of data access must remain consistent despite very high application traffic. What service should the team choose for the above requirement?* * Amazon EFS * Amazon Redshift * DynamoDB * AWS CloudFormation

*DynamoDB* Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. The data in DynamoDB is stored in JSON format, and hence is the perfect data store for the requirement in question.

*A company planning on building and deploying a web application on AWS needs to have a data store to store session data. Which of the below services can be used to meet this requirement?* (Choose 2) * AWS RDS * AWS SQS * DynamoDB * AWS ElastiCache

*DynamoDB* *AWS ElastiCache* Amazon ElastiCache offers fully managed Redis and Memcached. Seamlessly deploy, operate, and scale popular open source compatible in-memory data stores. Build data-intensive apps or improve the performance of your existing apps by retrieving data from high throughput and low latency in-memory data stores. Amazon ElastiCache is a popular choice for Gaming, Ad-Tech, Financial Services, Healthcare, and IoT apps. Consider storing only a unique session identifier in an HTTP cookie and storing more detailed user session information on the server side. Most programming platforms provide a native session management mechanism that works this way. However, user session information is often stored on the local file system by default, which results in a stateful architecture. A common solution to this problem is to store this information in a database. Amazon DynamoDB is a great choice because of its scalability, high availability, and durability characteristics. For many platforms, there are open source drop-in replacement libraries that allow you to store native sessions in Amazon DynamoDB. *Note:* In order to address scalability and to provide shared data storage for sessions that can be accessed from any individual web server, you can abstract the HTTP sessions from the web servers themselves. A common solution for this is to leverage an in-memory key/value store such as Redis or Memcached. *In-memory caching improves application performance by storing frequently accessed data items in memory, so that they can be retrieved without access to the primary data store.* Properly leveraging caching can result in an application that not only performs better, but also costs less at scale. Amazon ElastiCache is a managed service that reduces the administrative burden of deploying an in-memory cache in the cloud. ----------------------------------- Option A is incorrect; RDS is a distributed relational database. It is a web service running "in the cloud" designed to simplify the setup, operation, and scaling of a relational database for use in applications. Option B is incorrect; SQS is a fully managed message queuing service that makes it easy to decouple and scale microservices, distributed systems, and serverless applications.

*A company is developing a web application to be hosted in AWS. This application needs a data store for session data. As an AWS Solution Architect, which of the following would you recommend as an ideal option to store session data?* (Choose 2) * CloudWatch * DynamoDB * Elastic Load Balancing * ElastiCache * Storage Gateway

*DynamoDB* *ElastiCache* Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale. It is a fully managed cloud database and supports both document and key-value store models. Its flexible data model, reliable performance, and automatic scaling of throughput capacity make it a great fit for mobile, web, gaming, ad tech, IoT, and many other applications. ElastiCache is a web service that makes it easy to set up, manage, and scale a distributed in-memory data store or cache environment in the cloud. It provides a high-performance, scalable, and cost-effective caching solution, while removing the complexity associated with deploying and managing a distributed cache environment. ----------------------------------- AWS CloudWatch offers cloud monitoring services for customers of AWS resources. AWS Storage Gateway is a hybrid storage service that enables your on-premises applications to seamlessly use AWS cloud storage. AWS Elastic Load Balancing automatically distributes incoming application traffic across multiple targets.

*Your company is planning on hosting an e-commerce application on the AWS Cloud. There is a requirement for sessions to be always maintained for users. Which of the following can be used for storing session data?* (Choose 2) * CloudWatch * DynamoDB * Elastic Load Balancing * ElastiCache * Storage Gateway

*DynamoDB* *ElastiCache* DynamoDB and ElastiCache are perfect options for storing session data. Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale. It is a fully managed cloud database and supports both document and key-value store models. Its flexible data model, reliable performance, and automatic scaling of throughput capacity makes it a great fit for mobile, web, gaming, ad tech, IoT, and many other applications. ElastiCache is a web service that makes it easy to set up, manage, and scale a distributed in-memory data store or cache environment in the cloud. It provides a high-performance, scalable, and cost-effective caching solution while removing the complexity associated with deploying and managing a distributed cache environment.

*You are deploying an order management application in AWS, and for that you want to leverage Auto Scaling along with EC2 servers. You don't want to store the user session information on the EC2 servers because if even one user is connected to the application, you won't be able to scale down. What services can you use to store the session information?* (Choose 2) * S3 * Lambda * DynamoDB * ElastiCache

*DynamoDB* *ElastiCache* You can store the session information in either DynamoDB or ElastiCache. DynamoDB is a NoSQL database that provides single-digit millisecond latency, and using ElastiCache you can deploy, operate, and scale popular open source in-memory data stores. Both are ideal candidates for storing user session information. ----------------------------------- S3 is an object store, and Lambda lets you run code without provisioning or managing servers.
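A minimal sketch of a DynamoDB-backed session store with automatic expiry via TTL; the table name, key, and lifetime are placeholders:

```python
import time
import boto3

# One-time setup: have DynamoDB expire stale sessions automatically.
boto3.client("dynamodb").update_time_to_live(
    TableName="Sessions",  # placeholder table, key: session_id
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
)

# Write a session item that expires in 30 minutes.
boto3.resource("dynamodb").Table("Sessions").put_item(Item={
    "session_id": "abc123",
    "user": "jsmith",
    "expires_at": int(time.time()) + 1800,
})
```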

*An application is currently hosted on an EC2 Instance which has attached EBS Volumes. The data on these volumes is accessed for a week, and after that the documents need to be moved to infrequent access storage. Which of the following EBS volume types provides cost efficiency for the moved documents?* * EBS Provisioned IOPS SSD * EBS Throughput Optimized HDD * EBS General Purpose SSD * EBS Cold HDD

*EBS Cold HDD* Cold HDD (sc1) volumes provide low-cost magnetic storage that defines performance in terms of throughput rather than IOPS. With a lower throughput limit than st1, sc1 is a good fit for large, sequential cold-data workloads. If you require infrequent access to your data and are looking to save costs, sc1 provides inexpensive block storage.

*There is a requirement to host a database on an EC2 Instance. It is also required that the EBS volume should support 18,000 IOPS. Which Amazon EBS volume type meets the performance requirements of this database?* * EBS Provisioned IOPS SSD * EBS Throughput Optimized HDD * EBS General Purpose SSD * EBS Cold HDD

*EBS Provisioned IOPS SSD* For high performance and high IOPS requirements as in this case, the ideal choice would be to opt for EBS Provisioned IOPS SSD.

*A company has decided to host a MongoDB database on an EC2 Instance. There is an expectancy of a large number of reads and writes on the database. Which of the following EBS storage types would be the ideal one to implement for the database?* * EBS Provisioned IOPS SSD * EBS Throughput Optimized HDD * EBS General Purpose SSD * EBS Cold HDD

*EBS Provisioned IOPS SSD* Since there is a high performance requirement with high IOPS needed, one needs to opt for EBS Provisioned IOPS SSD.

*There is a requirement to host a database application having resource-intensive reads and writes. Which of the below options is most suitable?* * EBS Provisioned IOPS SSD * EBS SSD * EBS Throughput Optimized HDD * EBS Cold Storage

*EBS Provisioned IOPS SSD* Since there is a requirement for high performance with high IOPS, one needs to opt for EBS Provisioned IOPS SSD.

*An application requires an EC2 Instance for continuous batch processing activities requiring a maximum data throughput of 500MiB/s. Which of the following is the best storage option for this?* * EBS IOPS * EBS SSD * EBS Throughput Optimized HDD * EBS Cold Storage

*EBS Throughput Optimized HDD* Throughput Optimized HDD (st1) volumes support a maximum throughput of 500 MiB/s, making them the best fit for continuous, throughput-bound batch workloads like this one.

*Which of the following statements are true?* * EBS Volumes cannot be attached to an EC2 instance in another AZ. * EBS Volumes can be attached to an EC2 instance in another AZ. * EBS Volumes can be attached to multiple instances simultaneously. * EBS Volumes are ephemeral.

*EBS Volumes cannot be attached to an EC2 instance in another AZ.* The only true statement is 'EBS Volumes cannot be attached to an EC2 instance in another AZ.' The rest are false.

*Which of the following statements are true for an EBS volume?* (Choose two.) * EBS replicates within its availability zone to protect your applications from component failure. * EBS replicates across different availability zones to protect your applications from component failure. * EBS replicates across different regions to protect your applications from component failure. * Amazon EBS volumes provide 99.999 percent availability.

*EBS replicates within its availability zone to protect your applications from component failure.* *Amazon EBS volumes provide 99.999 percent availability.* EBS volumes replicate within their availability zone to protect your applications from component failure, and they provide 99.999 percent availability. ------------------------- EBS does not replicate to a different AZ or region.

*You are planning to run a database on an EC2 instance. You know that the database is pretty heavy on I/O. The DBA told you that you would need a minimum of 8,000 IOPS. What is the storage option you should choose?* * EBS volume with magnetic hard drive * Store all the data files in the ephemeral storage of the server * EBS volume with provisioned IOPS * EBS volume with general-purpose SSD

*EBS volume with provisioned IOPS* The magnetic hard drive won't give you the IOPS number you are looking for. You should not put the data files in the ephemeral drives because as soon as the server goes down, you will lose all the data. For a database, data is the most critical component, and you can't afford to lose that. The provisioned IOPS will give you the desired IOPS that your database needs. You can also run the database with general-purpose SSD, but there is no guarantee that you will always get the 8,000 IOPS number that you are looking for. Only PIOPS will provide you with that capacity.
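A minimal sketch of creating such a volume with boto3; the AZ and size are placeholders (io1 allows up to 50 IOPS per GiB, so the size must be large enough to permit 8,000 IOPS):

```python
import boto3

ec2 = boto3.client("ec2")

# Provisioned IOPS volume: 200 GiB * 50 = 10,000 IOPS ceiling, so 8,000
# provisioned IOPS is allowed. AZ and size are placeholders.
ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=200,
    VolumeType="io1",
    Iops=8000,
)
```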

*Which of the below are compute services from AWS?* (Choose 2) * EC2 * S3 * Lambda * VPC

*EC2* *Lambda* Both Lambda and EC2 offer computing in the cloud. ----------------------------------- S3 is a storage offering, while VPC is a network service.

*Which of the following statements are true about containers on AWS?* (Choose 5) * ECR can be used to store Docker images. * You must use ECS to manage running Docker containers in AWS. * To use private images in ECS, you must refer to Docker images from ECR. * You can use the ECS Agent without needing to use ECS. * To be able to use ECS, you must use the ECS Agent. * You can install and manage Kubernetes on AWS, yourself. * ECS allows you to control the scheduling and placement of your containers and tasks. * You can have AWS manage Kubernetes for you.

*ECR can be used to store Docker images.* *To be able to use ECS, you must use the ECS Agent.* *You can install and manage Kubernetes on AWS, yourself.* *ECS allows you to control the scheduling and placement of your containers and tasks.* *You can have AWS manage Kubernetes for you.* You definitely can install Kubernetes yourself. ECR is the EC2 Container Registry. ECS doesn't work without the ECS Agent. You can customize a lot of things in ECS.

*You are planning to deploy a web application via Nginx. You don't want the overhead of managing the infrastructure beneath. What AWS service should you choose for this?* * EC2 Server * RDS * Elastic Beanstalk * AWS OpsWork

*Elastic Beanstalk* Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS. ----------------------------------- If you use an EC2 server, you have to manage the server yourself. RDS is the relational database offering of Amazon. AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet.

*A company is building a Two-Tier web application to serve dynamic transaction-based content. The Data Tier uses an Online Transactional Processing (OLTP) database. What services should you leverage to enable an elastic and scalable Web Tier?* * Elastic Load Balancing, Amazon EC2, and Auto Scaling. * Elastic Load Balancing, Amazon RDS, with Multi-AZ, and Amazon S3. * Amazon RDS with Multi-AZ and Auto Scaling. * Amazon EC2, Amazon DynamoDB, and Amazon S3.

*Elastic Load Balancing, Amazon EC2, and Auto Scaling.* The question mentions a scalable Web Tier, not a Database Tier. So Options B, C, and D can be eliminated since they are database-related options.

*Your company uses KMS to fully manage the master keys and perform encryption and decryption operations on your data and in your applications. As an additional level of security, you now recommend that AWS rotate your keys. What is your company's responsibility after enabling this additional feature?* * Enable AWS KMS to rotate keys and KMS will manage all encrypt/decrypt actions using the appropriate keys. * Your company must instruct KMS to re-encrypt all data in all services each time a new key is created. * You have 30 days to delete old keys after a new one is rotated in. * Your company must create its own keys and import them to KMS to enable key rotation.

*Enable AWS KMS to rotate keys and KMS will manage all encrypt/decrypt actions using the appropriate keys.* KMS will rotate keys annually and use the appropriate keys to perform cryptographic operations. ----------------------------------- Option B is incorrect; this is not necessary, as KMS is a managed service and will keep old keys and perform operations based on the appropriate key. Options C & D are incorrect, and are not requirements of KMS.
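Enabling rotation on a customer-managed key is a single call; a minimal sketch with boto3 (the key ID is a placeholder):

```python
import boto3

kms = boto3.client("kms")

# Turn on automatic annual rotation; KMS keeps old key material around so
# previously encrypted data remains decryptable. Key ID is a placeholder.
kms.enable_key_rotation(KeyId="1234abcd-12ab-34cd-56ef-1234567890ab")
```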

*A retailer exports data daily from its transactional databases into an S3 bucket in the Sydney region. The retailer's Data Warehousing team wants to import this data into an existing Amazon Redshift cluster in their VPC at Sydney. Corporate security policy mandates that data can only be transported within a VPC. What combination of the following steps will satisfy the security policy?* (Choose 2) * Enable Amazon Redshift Enhanced VPC Routing. * Create a Cluster Security Group to allow the Amazon Redshift cluster to access Amazon S3. * Create a NAT gateway in a public subnet to allow the Amazon Redshift cluster to access Amazon S3. * Create and configure an Amazon S3 VPC endpoint.

*Enable Amazon Redshift Enhanced VPC Routing.* *Create and configure an Amazon S3 VPC endpoint.* Amazon Redshift Enhanced VPC Routing forces cluster traffic through VPC resources. Redshift will not be able to use the S3 VPC endpoint without Enhanced VPC Routing enabled, so neither option satisfies the scenario on its own. Likewise, a NAT gateway cannot be used by Redshift without enabling Enhanced VPC Routing. Option D, VPC Endpoints - enables you to privately connect your VPC to supported AWS Services and VPC Endpoint services powered by PrivateLink without requiring an IGW. An S3 VPC Endpoint is a feature that allows you to make even better use of VPC and S3.
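A minimal sketch of creating the S3 gateway endpoint with boto3; VPC and route table IDs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-2")

# Gateway endpoint so S3 traffic stays inside the VPC instead of crossing
# the internet. VPC and route table IDs are placeholders.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.ap-southeast-2.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)
```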

*A company currently hosts a Redshift cluster in AWS. For security reasons, it should be ensured that all traffic from and to the Redshift cluster does not go through the Internet. Which of the following features can be used to fulfill this requirement in an efficient manner?* * Enable Amazon Redshift Enhanced VPC Routing. * Create a NAT Gateway to route the traffic. * Create a NAT Instance to route the traffic. * Create the VPN Connection to ensure traffic does not flow through the Internet.

*Enable Amazon Redshift Enhanced VPC Routing.* When you use Amazon Redshift Enhanced VPC Routing, Amazon Redshift forces all COPY and UNLOAD traffic between your cluster and your data repositories through your Amazon VPC. If Enhanced VPC Routing is not enabled, Amazon Redshift routes traffic through the Internet, including traffic to other services within the AWS network.

*Your company currently has a web distribution hosted using the AWS CloudFront service. The IT Security department has confirmed that the application using this web distribution now falls under the scope of the PCI compliance. What are the possible ways to meet the requirement?* (Choose 2) * Enable CloudFront access logs. * Enable Cache in CloudFront. * Capture requests that are sent to the CloudFront API. * Enable VPC Flow Logs

*Enable CloudFront access logs.* *Capture requests that are sent to the CloudFront API* If you run PCI or HIPAA-compliant workloads based on the AWS Shared Responsibility Model, we recommend that you log your CloudFront usage data for the last 365 days for future auditing purposes. ----------------------------------- Option B helps to reduce latency, not meet compliance requirements. Option D is incorrect; VPC Flow Logs capture information about the IP traffic going to and from network interfaces in the VPC, but not for CloudFront.

*A Redshift cluster currently contains 60 TB of data. There is a requirement that a disaster recovery site is put in place in a region located 600km away. Which of the following solutions would help ensure that this requirement is fulfilled?* * Take a copy of the underlying EBS volumes to S3, and then do Cross-Region Replication. * Enable Cross-Region snapshots for the Redshift Cluster. * Create a CloudFormation template to restore the Cluster in another region. * Enable Cross Availability Zone snapshots for the Redshift Cluster.

*Enable Cross-Region snapshots for the Redshift Cluster.* You can configure cross-regional snapshots when you want Amazon Redshift to automatically copy snapshots (automated or manual) to another region for backup purposes.
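A minimal sketch of enabling cross-region snapshot copy with boto3; cluster name, regions, and retention are placeholders:

```python
import boto3

redshift = boto3.client("redshift", region_name="ap-southeast-2")

# Automatically copy this cluster's snapshots to the DR region and keep
# them there for 7 days. All values are placeholders.
redshift.enable_snapshot_copy(
    ClusterIdentifier="analytics-cluster",
    DestinationRegion="ap-southeast-1",
    RetentionPeriod=7,
)
```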

*Your company has a set of resources defined in AWS. These resources consist of applications hosted on EC2 Instances. Data is stored on EBS volumes and S3. The company mandates that all data should be encrypted at rest. How can you achieve this?* (Choose 2) * Enable SSL with the underlying EBS volumes * Enable EBS Encryption * Make sure that data is transmitted from S3 via HTTPS * Enable S3 server-side encryption

*Enable EBS Encryption* *Enable S3 server-side Encryption* Amazon EBS encryption offers a simple encryption solution for your EBS volumes without the need to build, maintain, and secure your own key management infrastructure. Server-side encryption protects data at rest. Server-side encryption with Amazon S3-managed encryption keys (SSE-S3) uses strong multi-factor encryption. ----------------------------------- Options A and C are incorrect since these have to do with encryption of data in transit, not encryption of data at rest.

*A company hosts data in S3. There is now a mandate that going forward, all data in the S3 bucket needs to be encrypted at rest. How can this be achieved?* * Use AWS Access Keys to encrypt the data. * Use SSL Certificates to encrypt the data. * Enable Server-side encryption on the S3 bucket. * Enable MFA on the S3 bucket.

*Enable Server-side encryption on the S3 bucket.* Server-side encryption is about data encryption at rest - that is, Amazon S3 encrypts your data at the object level as it writes it to disks in its data centers and decrypts it for you when you access it. As long as you authenticate your request and you have access permissions, there is no difference in the way you access encrypted or unencrypted objects.
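A minimal sketch of turning on default server-side encryption with boto3 (the bucket name is a placeholder):

```python
import boto3

s3 = boto3.client("s3")

# Default encryption: every new object is encrypted with SSE-S3 (AES-256)
# unless the request specifies a different method. Bucket is a placeholder.
s3.put_bucket_encryption(
    Bucket="my-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}
        }]
    },
)
```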

*A company currently storing a set of documents in the AWS Simple Storage Service is worried about the potential loss if these documents are ever deleted. Which of the following can be used to ensure protection from loss of the underlying documents in S3?* * Enable Versioning for the underlying S3 bucket. * Copy the bucket data to an EBS Volume as a backup. * Create a Snapshot of the S3 bucket. * Enable an IAM Policy which does not allow deletion of any document from the S3 bucket.

*Enable Versioning for the underlying S3 bucket.* *Enable an IAM Policy which does not allow deletion of any document from the S3 bucket.* Versioning is at the bucket level and can be used to recover prior versions of an object. We can also prevent deletion of objects from the S3 bucket by writing an IAM policy.

*You want to launch a copy of a Redshift cluster to a different region. What is the easiest way to do this?* * Create a cluster manually in a different region and load all the data * Extend the existing cluster to a different region * Use third-party software like Golden Gate to replicate the data * Enable a cross-region snapshot and restore the database from the snapshot to a different region

*Enable a cross-region snapshot and restore the database from the snapshot to a different region.* Loading the data manually will be too much work. You can't extend the cluster to a different region. A Redshift cluster is specific to a particular AZ. It can't go beyond an AZ as of writing this book. Using Golden Gate is going to cost a lot, and there is no need for it when there is an easy solution available.

*What is the best way to protect a file in Amazon S3 against accidental delete?* * Upload the files in multiple buckets so that you can restore from another when a file is deleted * Back up the files regularly to a different bucket or in a different region * Enable versioning on the S3 bucket * Use MFA for deletion * Use cross-region replication

*Enable versioning on the S3 bucket* You can definitely upload the file to multiple buckets, but the cost will multiply with the number of copies you store, and you then need to manage three or four times more files and their mapping to applications, which does not make sense. Backing up files regularly to a different bucket can help you restore files to some extent, but what if you uploaded a new file just after taking the backup? The correct answer is versioning, since enabling versioning maintains all the versions of the file and you can restore any version even if you have deleted the file. You can definitely use MFA for delete, but what if even with MFA you delete the wrong file? With CRR, if a DELETE request is made without specifying an object version ID, Amazon S3 adds a delete marker, which cross-region replication replicates to the destination bucket. If a DELETE request specifies a particular object version ID to delete, Amazon S3 deletes that object version in the source bucket, but it does not replicate the deletion to the destination bucket.

*You run a security company which stores highly sensitive PDFs on S3 with versioning enabled. To ensure MAXIMUM protection of your objects to protect against accidental deletion, what further security measure should you consider using?* * Setup a CloudWatch alarm so that if an object in S3 is deleted, an alarm will send an SNS notification to your phone. * Use server-side encryption with S3 - KMS. * Enable versioning with MFA Delete on your S3 bucket. * Configure the application to use SSL endpoints using the HTTPS protocol.

*Enable versioning with MFA Delete on your S3 bucket.* If you enable Versioning with MFA Delete on your Amazon S3 bucket, two forms of authentication are required to delete an object version: your AWS account credentials and a valid six-digit code and serial number from an authentication device in your physical possession.
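A rough sketch of enabling it with boto3; note that MFA Delete can only be changed by the root account, and the MFA argument is the device serial followed by the current code (all values are placeholders):

```python
import boto3

s3 = boto3.client("s3")  # must be called with root credentials

# Enable versioning together with MFA Delete. The MFA string is
# "<device-serial> <current-code>"; all values are placeholders.
s3.put_bucket_versioning(
    Bucket="sensitive-pdfs",
    MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
)
```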

*A customer is hosting their company website on a cluster of web servers that are behind a public-facing load balancer. The web application interfaces with an AWS RDS database. The management has specified that the database needs to remain available in case of a hardware failure on the primary database. The secondary needs to be made available in the least amount of time. Which of the following would you opt for?* * Make a snapshot of the database * Enable Multi-AZ failover * Increase the database instance size * Create a read replica

*Enable Multi-AZ failover.* Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby (or to a read replica in the case of Amazon Aurora), so that you can resume database operations as soon as the failover is complete. ----------------------------------- Options A and D are incorrect since, even though they can be used to recover a database, it would take more time than just enabling Multi-AZ. Option C is incorrect since this will not help the cause.

*You have created a new AWS account for your company, and you have also configured multi-factor authentication on the root account. You are about to create your new users. What strategy should you consider in order to ensure that there is good security on this account?* * Restrict login to the corporate network only. * Give all users the same password so that if they forget their password they can just ask their co-workers. * Require users to only be able to log in using biometric authentication. * Enact a strong password policy: user passwords must be changed every 45 days, with each password containing a combination of capital letters, lower case letters, numbers, and special symbols.

*Enact a strong password policy: user passwords must be changed every 45 days, with each password containing a combination of capital letters, lower case letters, numbers, and special symbols.*

*You have created an AWS Lambda function that will write data to a DynamoDB table. Which of the following must be in place to ensure that the Lambda function can interact with the DynamoDB table?* * Ensure an IAM Role is attached to the Lambda function which has the required DynamoDB privileges. * Ensure an IAM User is attached to the Lambda function which has the required DynamoDB privileges. * Ensure the Access Keys are embedded in the AWS Lambda function. * Ensure the IAM user password is embedded in the AWS Lambda function.

*Ensure an IAM Role is attached to the Lambda function which has the required DynamoDB privileges.* Each Lambda function has an IAM role (execution role) associated with it. You specify the IAM role when you create your Lambda function. Permissions you grant to this role determine what AWS Lambda can do when it assumes the role. There are two types of permissions that you grant to the IAM role: 1) If your Lambda function code accesses other AWS resources, such as reading an object from an S3 bucket or writing logs to CloudWatch Logs, you need to grant permissions for the relevant Amazon S3 and CloudWatch actions to the role. 2) If the event source is stream-based (Amazon Kinesis Data Streams and DynamoDB streams), AWS Lambda polls these streams on your behalf. AWS Lambda needs permission to poll the stream and read new records on the stream, so you need to grant the relevant permissions to this role.
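A minimal sketch of creating such an execution role with boto3; the role name is a placeholder, and the broad managed policy should be narrowed to the specific table in practice:

```python
import boto3, json

iam = boto3.client("iam")

# Trust policy letting the Lambda service assume the role.
trust = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
iam.create_role(RoleName="lambda-dynamo-writer",
                AssumeRolePolicyDocument=json.dumps(trust))

# Grant DynamoDB permissions; scope this down to the target table in practice.
iam.attach_role_policy(
    RoleName="lambda-dynamo-writer",
    PolicyArn="arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess",
)
```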

*A company has resources hosted in their AWS Account. There is a requirement to monitor API activity for all regions and the audit needs to be applied for future regions as well. Which of the following can be used to fulfill this requirement?* * Ensure CloudTrail for each region, then enable for each future region. * Ensure one CloudTrail trail is enabled for all regions. * Create a CloudTrail for each region. Use CloudFormation to enable the trail for all future regions. * Create a CloudTrail for each region. Use AWS Config to enable the trail for all future regions.

*Ensure one CloudTrail trail is enabled for all regions.* You can now turn on a trail across all regions for your AWS account. CloudTrail will deliver log files from all regions to the Amazon S3 bucket and an optional CloudWatch Logs log group you specified. Additionally, when AWS launches a new region, CloudTrail will create the same trail without taking any action.
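A minimal sketch of creating one multi-region trail with boto3; trail and bucket names are placeholders:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# One multi-region trail covers every current region and is extended to
# new regions automatically. Names are placeholders.
cloudtrail.create_trail(
    Name="org-audit-trail",
    S3BucketName="my-cloudtrail-logs",
    IsMultiRegionTrail=True,
)
cloudtrail.start_logging(Name="org-audit-trail")
```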

*A company has set up some EC2 Instances in a VPC with the default Security Group and NACL settings. They want to ensure that IT admin staff can connect to the EC2 Instances via SSH. As an architect, what would you ask the IT admin team to do to ensure that they can connect to the EC2 Instances from the Internet?* (Choose 2) * Ensure that the Instance has a Public or Elastic IP * Ensure that the Instance has a Private IP * Ensure to modify the Security Groups * Ensure to modify the NACL rules

*Ensure that the Instance has a Public or Elastic IP* *Ensure to modify the Security groups* To enable access to or from the internet for instances in a VPC subnet, you must do the following. - Attach an internet gateway to your VPC - Ensure that your subnet's route table points to the internet gateway - Ensure that instances in your subnet have a globally unique IP address (public IPv4 address, Elastic IP address, or IPv6 address) - Ensure that your network access control and security group rules allow the relevant traffic to flow to and from your instance. ----------------------------------- Option B is incorrect since the Private IP will always be created, and would not be used to connect from the internet. Option D is incorrect since the default NACL rules will allow all traffic.
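A minimal sketch of the security group change with boto3; the group ID and source IP are placeholders (restricting to the admin's /32 is tighter than opening port 22 to the world):

```python
import boto3

ec2 = boto3.client("ec2")

# Allow SSH only from the admin workstation's public IP.
# Group ID and CIDR are placeholders.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": "203.0.113.10/32"}],
    }],
)
```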

*Your company is planning on hosting a set of EC2 Instances in AWS. The Instances would be divided into subnets, one for the web tier and the other for the database tier. Which of the following would be needed to ensure that traffic can flow between the Instances in each subnet?* * Ensure that the route tables have the desired routing between the subnets. * Ensure that the Security Groups have the required rules defined to allow traffic. * Ensure that all instances have a public IP for communication. * Ensure that all subnets are defined as public subnets.

*Ensure that the Security Groups have the required rules defined to allow traffic.* A security group acts as a virtual firewall for your instance in a VPC. You can assign up to five security groups to the instance. Security groups act at the instance level, not the subnet level. Therefore, each instance in a subnet in your VPC could be assigned to a different set of security groups. If you don't specify a particular group at launch time, the instance is automatically assigned to the default security group for the VPC. ----------------------------------- Option A is invalid since the route tables would already have the required rules to route traffic between subnets in a VPC. Option C is invalid since the instances would communicate with each other on the private IP. Option D is invalid since the database should be in the private subnet and not the public subnet.

*A customer planning on hosting an AWS RDS instance needs to ensure that the underlying data is encrypted. How can this be achieved?* (Choose 2) * Ensure that the right instance class is chosen for the underlying instances. * Choose only General Purpose SSD, because only this type supports encryption of data. * Encrypt the database during creation. * Enable encryption of the underlying EBS Volume.

*Ensure that the right instance class is chosen for the underlying instance.* *Encrypt the database during creation.* Encryption for the database can be done during the creation of the database. Also, you need to ensure that the underlying instance type supports DB encryption. Encryption at Rest is not available for DB instances running SQL Server Express Edition.
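A minimal boto3 sketch of enabling encryption at creation time; the identifier, credentials, and instance class are illustrative placeholders, and the chosen class is assumed to support encryption:

```python
import boto3

rds = boto3.client("rds")

# Create a MySQL instance with storage encryption enabled at creation.
# StorageEncrypted cannot be turned on after the instance exists.
rds.create_db_instance(
    DBInstanceIdentifier="encrypted-db",
    Engine="mysql",
    DBInstanceClass="db.m5.large",
    AllocatedStorage=20,
    MasterUsername="admin",
    MasterUserPassword="ChangeMe123!",  # use Secrets Manager in practice
    StorageEncrypted=True,
)
```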

*You have an application hosted on AWS consisting of EC2 Instances launched via an Auto Scaling Group. You notice that the EC2 Instances are not scaling out on demand. What checks can be done to ensure that the scaling occurs as expected?* * Ensure the right metrics are being used to trigger the scale out. * Ensure that ELB health checks are being used. * Ensure that the instances are placed across multiple Availability Zones. * Ensure that the instances are placed across multiple regions.

*Ensure that the right metrics are being used to trigger the scale out.* If your scaling events are not based on the right metrics and do not have the right thresholds defined, then scaling will not occur as expected.
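To make this concrete, here is a hedged boto3 sketch of a target tracking policy that scales on average CPU; the Auto Scaling group name and target value are placeholder assumptions:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Auto Scaling adds or removes instances to keep the group near
# 60% average CPU utilization. "web-asg" is a placeholder group name.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="cpu-target-60",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,
    },
)
```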

*A customer has an instance hosted in the AWS Public Cloud. The VPC and subnet used to host the instance have been created with the default settings for the Network Access Control Lists. An IT Administrator needs to be provided secure access to the underlying instance. How can this be accomplished?* * Ensure the Network Access Control Lists allow Inbound SSH traffic from the IT Administrator's Workstation. * Ensure the Network Access Control Lists allow Outbound SSH traffic from the IT Administrator's Workstation. * Ensure that the security group allows Inbound SSH traffic from the IT Administrator's Workstation. * Ensure that the security group allows Outbound SSH traffic from the IT Administrator's Workstation.

*Ensure that the security group allows Inbound SSH traffic from the IT Administrator's Workstation.* Since Security Groups are stateful, we do not have to configure outbound traffic: the response to allowed inbound traffic is automatically allowed out. *Note:* The default network ACL is configured to allow all traffic to flow in and out of the subnets to which it is associated. Since the question does not mention a custom VPC, we assume it is the default one. The IT administrator needs to be provided SSH access to the instance, so the traffic is inbound to the instance. A security group being stateful means that the return response to an allowed inbound request is automatically allowed, and vice versa. Allowing only outbound traffic would mean the instance could SSH into the IT admin's workstation and receive the response, but it would not let the IT admin SSH into the instance; SSH does not work that way. To allow SSH, you need to allow inbound SSH access over port 22.

*A company currently uses Redshift in AWS. The Redshift cluster is required to be used in a cost-effective manner. As an architect, which of the following would you consider to ensure cost-effectiveness?* * Use Spot Instances for the underlying nodes in the cluster. * Ensure that unnecessary manual snapshots of the cluster are deleted. * Ensure VPC Enhanced Routing is enabled. * Ensure that CloudWatch metrics are disabled.

*Ensure that unnecessary manual snapshots of the cluster are deleted.* Amazon Redshift provides free storage for snapshots that is equal to the storage capacity of your cluster until you delete the cluster. After you reach the free snapshot storage limit, you are charged for any additional storage at the normal rate. Because of this, you should evaluate how many days you need to keep automated snapshots, configure their retention period accordingly, and delete any manual snapshots that you no longer need. *Note:* Redshift pricing is based on the following elements: compute node hours, backup storage, data transfer, and data scanned. There is no data transfer charge for data transferred between Amazon Redshift and Amazon S3 within the same AWS Region; for all other data transfers into and out of Amazon Redshift, you will be billed at standard AWS data transfer rates. ----------------------------------- There is no additional charge for using Enhanced VPC Routing. *You might incur additional data transfer charges for certain operations, such as UNLOAD to Amazon S3 in a different region or COPY from Amazon EMR or SSH with public IP addresses.* Enhanced VPC Routing itself does not incur any cost, but any UNLOAD operation to a different region will, with or without Enhanced VPC Routing enabled. As for storage, increasing your backup retention period or taking additional snapshots increases the backup storage consumed by your data warehouse. There is no additional charge for backup storage up to 100% of your provisioned storage for an active data warehouse cluster; any amount of storage exceeding this limit does incur a cost. *Spot Instances are not an option for Redshift.* Amazon Redshift pricing options include: On-Demand pricing: no upfront costs - you simply pay an hourly rate based on the type and number of nodes in your cluster. Amazon Redshift Spectrum pricing: enables you to run SQL queries directly against all of your data, out to exabytes, in Amazon S3 - you simply pay for the number of bytes scanned. Reserved Instance pricing: enables you to save up to 75% over On-Demand rates by committing to using Redshift for a 1 or 3-year term.

*A company has a set of EC2 instances hosted on the AWS Cloud. These instances form a web server farm which services a web application accessed by users on the Internet. Which of the following would help make this architecture more fault tolerant?* (Choose 2) * Ensure the instances are placed in separate Availability Zones. * Ensure the instances are placed in separate regions. * Use an AWS Load Balancer to distribute the traffic. * Use Auto Scaling to distribute the traffic.

*Ensure the instances are placed in separate Availability Zones.* *Use an AWS Load Balancer to distribute the traffic.* A load balancer distributes incoming application traffic across multiple EC2 Instances in multiple Availability Zones. This increases the fault tolerance of your applications. Elastic Load Balancing detects unhealthy instances and routes traffic only to healthy instances. *Note:* Auto Scaling will not create an ELB automatically; you need to create it manually in the same region as the Auto Scaling group. Once you create an ELB and attach it to the Auto Scaling group, it automatically registers the instances in the group and distributes incoming traffic across them. You can automatically increase the size of your Auto Scaling group when demand goes up and decrease it when demand goes down. As the Auto Scaling group adds and removes EC2 instances, you must ensure that the traffic for your application is distributed across all of your EC2 instances. *The Elastic Load Balancing service automatically routes incoming web traffic across such a dynamically changing number of EC2 instances.* Your load balancer acts as a single point of contact for all incoming traffic to the instances in your Auto Scaling group. *To use a load balancer with your Auto Scaling group, create the load balancer and then attach it to the group.*

*A VPC has been setup with a subnet and an internet gateway. The EC2 instance is setup with a public IP but you are still not able to connect to it via the Internet. The right security groups are also in place. What should you do to connect to the EC2 Instance from the Internet?* * Set an Elastic IP Address to the EC2 Instance. * Set a Secondary Private IP Address to the EC2 Instance. * Ensure the right route entry is there in the route table. * There must be some issue in the EC2 Instance. Check the system logs.

*Ensure the right route entry is there in the Route table.* You have to ensure that the Route table has an entry to the Internet Gateway, because this is required for instances to communicate over the Internet. ----------------------------------- Option A is incorrect; since you already have a public IP assigned to the instance, that should have been enough to connect to the Internet. Option B is incorrect; Private IPs cannot be accessed from the Internet. Option D is incorrect; the route table is causing the issue, not the system.
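A minimal boto3 sketch of adding the missing default route; both resource IDs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Add a default route that sends Internet-bound traffic to the
# Internet Gateway attached to the VPC.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId="igw-0123456789abcdef0",
)
```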

*A company hosts a popular web application that connects to an Amazon RDS MySQL DB instance running in a private VPC subnet created with default ACL settings. The web servers must be accessible only to customers on an SSL connection and the database should only be accessible to the web servers in a public subnet. As an architect, which of the following would you not recommend for such an architecture?* * Create a separate web server and database server security group. * Ensure the web server security group allows HTTPS port 443 inbound traffic from anywhere (0.0.0.0/0) and apply it to the web servers. * Ensure the web server security group allows MySQL port 3306 inbound traffic from anywhere (0.0.0.0/0) and apply it to the web servers. * Ensure the DB server security group allows MySQL port 3306 inbound and specify the source as the web server security group.

*Ensure the web server security group allows MySQL port 3306 inbound traffic from anywhere (0.0.0.0/0) and apply it to the web servers.* The question describes a scenario where the database servers should only be accessible to the web servers in the public subnet, and asks which of the following is *not* a recommended architecture based on that scenario. Option C is correct, as it allows all incoming traffic from the Internet to the database port, which is not acceptable as per the architecture. A similar setup is given in the AWS Documentation. 1) To ensure that traffic can flow into your web server from anywhere on secure traffic, you need to allow inbound security at 443. 2) You then need to ensure that traffic can flow from the web servers to the database server via the database security group. The requirement in the question states that the database servers should only be accessible to the web servers in the public subnet. ----------------------------------- In Option D, the database server's security group allows inbound traffic at port 3306 with the source set to the web server security group. That means request traffic from the web server is allowed to the DB server, and since security groups are stateful, the response is allowed from the DB back to the web server. This allows the communication between them, so Option D is a sound architecture, but the wrong answer for this question, since you have to choose the option that is *not* recommended.

*Your company has a set of VPCs. There is now a requirement to establish communication across the Instances in the VPCs. Your supervisor has asked you to implement the VPC peering connection. Which of the following considerations would you keep in mind for VPC peering?* (Choose 2) * Ensuring that the VPCs don't have overlapping CIDR blocks * Ensuring that no on-premises communication is required via transitive routing * Ensuring that the VPCs only have public subnets for communication * Ensuring that the VPCs are created in the same region

*Ensuring that the VPCs don't have overlapping CIDR blocks.* *Ensuring that no on-premises communication is required via transitive routing.* You cannot create a VPC peering connection between VPCs with matching or overlapping IPv4 CIDR blocks. ----------------------------------- Option C is incorrect since it is not necessary that VPCs only contain public subnets. Option D is incorrect since it is not necessary that VPCs are created in the same region. *Note:* AWS now supports VPC Peering across different regions.

*You are developing an application, and you have associated an EIP with the application tier, which is an EC2 instance. Since you are in the development cycle, you have to frequently stop and start the application server. What is going to happen to the EIP when you start/stop the application server?* * Every time the EC2 instance is stopped, the EIP is de-associated when you start it. * Every time the EC2 instance is stopped, the EIP is de-associated, and you must manually attach it whenever it is started again. * Even after the shutdown, the EIP remains associated with the instance, so no action is needed. * After shutting down the EC2 instance, the EIP is released from your account, and you have to re-request it before you can use it.

*Even after the shutdown, the EIP remains associated with the instance, so no action is needed.* Even if you shut down (stop) the instance, the EIP remains associated with it. ----------------------------------- A, B, and D are incorrect. An EIP is only disassociated when you terminate the instance, and even then it remains allocated to your account until you explicitly release it.

*A company has decided to use Amazon Glacier to store all of their archived documents. The management has now issued an update that documents stored in Glacier need to be accessed within a time span of 20 minutes for an IT audit requirement. Which of the following would allow for documents stored in Amazon Glacier to be accessed within the required time frame after the retrieval request?* * Vault Lock * Expedited retrieval * Bulk retrieval * Standard retrieval

*Expedited retrieval* Expedited retrievals allow you to quickly access your data when occasional urgent requests for a subset of archives are required.

*A Solutions Architect designing a solution to store and archive corporate documents, has determined Amazon Glacier as the right choice of solution. An important requirement is that the data must be delivered within 10 minutes of a retrieval request. Which feature in Amazon Glacier can help meet this requirement?* * Vault Lock * Expedited retrieval * Bulk retrieval * Standard retrieval

*Expedited retrieval* Expedited retrievals allow you to access data in 1 - 5 minutes for a flat rate of $0.03 per GB retrieved. Expedited retrievals allow you to quickly access your data when occasional urgent requests for a subset of archives are required. ----------------------------------- The other two options are Standard retrieval (3-5 hours retrieval time) and Bulk retrieval, which is the cheapest option (5-12 hours retrieval time).
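For illustration, a hedged boto3 sketch of requesting an expedited archive retrieval; the vault name and archive ID are placeholders, and accountId "-" refers to the account owning the credentials:

```python
import boto3

glacier = boto3.client("glacier")

# Request an expedited archive retrieval (typically ready in 1-5 minutes).
glacier.initiate_job(
    accountId="-",
    vaultName="corporate-documents",
    jobParameters={
        "Type": "archive-retrieval",
        "ArchiveId": "EXAMPLE-ARCHIVE-ID",
        "Tier": "Expedited",
    },
)
```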

*You are in the process of designing an archive document solution for your company. The solution must be cost-effective; therefore, you have selected Glacier. The business wants to have the ability to get a document within 15 minutes of a request. Which feature of Amazon Glacier will you choose?* * Expedited retrieval. * Standard retrieval. * Glacier is not the correct solution; you need Amazon S3. * Bulk retrieval.

*Expedited retrieval* Since you are looking for an archival and cost-effective solution, Amazon Glacier is the right choice. By using the expedited retrieval option, you should be able to get a document within five minutes, which meets the 15-minute business objective. ------------------------- Standard and Bulk retrievals both take much longer; therefore, you won't be able to meet the business objective. If you choose Amazon S3, the cost will go up.

*Which of the following Route 53 policies allow you to* *a) route to a second resource if the first is unhealthy* *b) route data to resources that have better performance?* * Geoproximity Routing and Geolocation Routing * Geolocation Routing and Latency-based Routing * Failover Routing and Simple Routing * Failover Routing and Latency-based Routing

*Failover Routing and Latency-based Routing* Failover Routing and Latency-based Routing are the only two correct options, as they consider routing data based on whether the resource is unhealthy or whether one set of resources is more performant than another. Any answer containing location-based routing (Geoproximity and Geolocation) cannot be correct in this case, as these types only consider where the client or resources are located before routing the data. They do not take into account whether a resource is online or slow. Simple Routing can also be discounted as it does not take into account the state of the resources.

*When creating a new security group, all inbound traffic is allowed by default.* * False * True

*False* There are slight differences between a normal 'new' Security Group and the 'default' security group in the default VPC. For a 'new' security group, nothing is allowed in by default.

*Which is based on temporary security tokens?* (Choose two.) * Amazon EC2 roles * Federation * Username password * Using AWS STS

*Federation* *Using AWS STS* The username password is not a temporary security token.
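To show what a temporary security token looks like in practice, here is a minimal boto3 sketch of assuming a role via STS; the role ARN is a placeholder:

```python
import boto3

sts = boto3.client("sts")

# Exchange long-lived credentials for temporary ones by assuming a role.
response = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/ReadOnlyRole",
    RoleSessionName="temp-session",
    DurationSeconds=3600,  # credentials expire after one hour
)

# Temporary credentials: AccessKeyId, SecretAccessKey, SessionToken.
creds = response["Credentials"]
print("Token expires at:", creds["Expiration"])
```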

*For what workload should you consider Elastic Beanstalk?* (Choose two.) * For hosting a relational database * For hosting a NoSQL database * For hosting an application * For creating a website

*For hosting an application* *For creating a website* Elastic Beanstalk is used for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS. You can also host your website using Elastic Beanstalk. ------------------------- Hosting a database, whether relational or nonrelational, is not the right use case for Elastic Beanstalk.

*You are in the process of designing a three-tier architecture for a company. The company wants all the components to be redundant, which means all the EC2 servers need to be redundant in different AZs. You are planning to put the web server in a public subnet and the application and database servers in a private subnet. The database will be hosted on EC2 servers, and you are planning to use two AZs for this. To accomplish this, what is the minimum number of subnets you need?* * Six: one for the web server, one for the application server, and one for the database server in each AZ. * Three: one for the web server, one for the application server, and one for the database server. * Two: One for the web server, one for the application server, and one for the database server in each AZ. * Four: One for the web server and another for the application and database server in each AZ.

*Four: One for the web server and another for the application and database server in each AZ.* The minimum number of subnets you need is four since you will put the web server in a public subnet in each AZ and put the application and database server in a private subnet in each AZ. There is no need to create a separate subnet for the database and application server since that is going to have manageability overhead. ------------------------- If you create three subnets, then there is no redundancy. If you create two subnets, then you have to put all the servers (web, application, and database servers) in the same subnet, which you can't do since the web server needs to go to a public subnet and the application and database server needs to go to a private subnet. Technically, you can create six subnets, but the question is asking for the minimum number of subnets.

*A customer has a single 3-TB volume on-premises that is used to hold a large repository of images and print layout files. This repository is growing at 500 GB a year and must be presented as a single logical volume. The customer is becoming increasingly constrained with their local storage capacity and wants an offsite partial backup of this data while maintaining low-latency access to their frequently accessed data. Which AWS Storage Gateway configuration meets the customer requirement?* * Gateway-Cached Volumes with snapshots scheduled to Amazon S3. * Gateway-Stored Volumes with snapshots scheduled to Amazon S3. * Gateway-Virtual Tape Library with snapshots to Amazon S3. * Gateway-Virtual Tape Library with snapshots to Amazon Glacier.

*Gateway-Cached Volumes with snapshots scheduled to Amazon S3.* Gateway-cached volumes let you use Amazon Simple Storage Service (Amazon S3) as your primary data storage while retaining frequently accessed data locally in your storage gateway. Gateway-cached volumes minimize the need to scale your on-premises storage infrastructure, while still providing your applications with low-latency access to their frequently accessed data. You can create storage volumes up to 32 TiB in size and attach to them as iSCSI devices from your on-premises application servers. Your gateway stores data that you write to these volumes in Amazon S3 and retains recently read data in your on-premises storage gateway's cache and upload buffer storage. Option A is correct, as your primary data is written to S3 while your frequently accessed data is retained locally in a cache for low-latency access. *Note:* The two requirements of the question are low-latency access to frequently accessed data and an offsite backup of the data. ----------------------------------- Option B is incorrect because it stores the primary data locally (but we have a storage constraint, so this is not a viable solution) and makes the entire dataset available for low-latency access while asynchronously backing it up to AWS. Options C & D are incorrect, as they cannot provide low-latency access to frequently accessed data.

*You need to retain all the data for seven years for compliance purposes. You have more than 300TB of data. You are storing everything in the tape drive for compliance, and over the years you have found out it is costing you a lot of money to maintain the tape infrastructure. Moreover, the restore from the tape takes more than 24 hours. You want to reduce it to 12 hours. You have heard that if you move your tape infrastructure to the cloud, it will be much cheaper, and you can meet the SLA you are looking for. What service would you choose?* * Storage Gateway with VTL * S3 * S3 Infrequent Access * Glacier

*Glacier* Glacier is going to provide the best cost benefit. Restoring from Glacier takes about five hours, so you can even meet your SLA. ------------------------- Storing the data in any other place is going to cost you more.

*You work for a health insurance company that amasses a large number of patients' health records. Each record will be used once when assessing a customer, and will then need to be securely stored for a period of 7 years. In some rare cases, you may need to retrieve this data within 24 hours of a claim being lodged. Given these requirements, which type of AWS storage would deliver the least expensive solution?* * Glacier * S3 - IA * S3 - RRS * S3 * S3 - OneZone-IA

*Glacier* The recovery rate is a key decider. The record storage must be safe, durable, and low cost, and the recovery can be slow - all features of Glacier.

*A new employee has just started work, and it is your job to give her administrator access to the AWS console. You have given her a user name, an access key ID, a secret access key, and you have generated a password for her. She is now able to log in to the AWS console, but she is unable to interact with any AWS services. What should you do next?* * Tell her to log out and try logging back in again. * Ensure she is logging in to the AWS console from your corporate network and not the normal internet. * Grant her Administrator access by adding her to the Administrators' group. * Require multi-factor authentication for her user account.

*Grant her Administrator access by adding her to the Administrators' group.*

*What are the different types of virtualization available on AWS?* (Choose two.) * Cloud virtual machine (CVM) * Physical virtual machine (PVM) * Hardware virtual machine (HVM) * Paravirtual machine (PV)

*Hardware virtual machine (HVM)* *Paravirtual machine (PV)* The two different types of virtualization are hardware virtual machine (HVM) and paravirtual (PV). HVM virtualization provides the ability to run an operating system directly on top of a virtual machine without any modification, as if it were running on bare-metal hardware. Paravirtual guests can run on host hardware that does not have explicit support for virtualization, but they cannot take advantage of special hardware extensions such as enhanced networking or GPU processing. ------------------------- CVM and PVM are not supported in AWS.

*An application consists of a couple of EC2 Instances. One EC2 Instance hosts a web application and the other Instance hosts the database server. Which of the following changes would be made to ensure high availability of the database layer?* * Enable Read Replicas for the database. * Enable Multi-AZ for the database. * Have another EC2 Instance in the same Availability Zone with replication configured. * Have another EC2 Instance in another Availability Zone with replication configured.

*Have another EC2 Instance in another Availability Zone with replication configured.* Since this is a self-managed database and not an AWS RDS instance, options A and B are incorrect. The database server is hosted on an EC2 instance (self-managed), and when you host a database server on an EC2 instance, there are no direct options available to enable Read Replicas or Multi-AZ. To ensure high availability, have another EC2 Instance in a different Availability Zone with replication configured, so even if one goes down, the other one will still be available.

*You are running a Cassandra database that requires access to tens of thousands of low-latency IOPS. Which of the following EC2 instance families would best suit your needs?* * High I/O instances * Dense Storage Instances * Memory Optimized Instances * Cluster GPU Instances

*High I/O instances* High I/O instances use SSD-based local instance storage to deliver very high, low latency, I/O capacity to applications, and are optimized for applications that require tens of thousands of IOPS.

*Which statement best describes IAM?* * IAM allows you to manage users, groups, roles, and their corresponding level of access to the AWS Platform. * IAM stands for Improvised Application Management, and it allows you to deploy and manage applications in the AWS Cloud. * IAM allows you to manage permissions for AWS resources only. * IAM allows you to manage users' passwords only. AWS staff must create new users for your organization. This is done by raising a ticket.

*IAM allows you to manage users, groups, roles, and their corresponding level of access to the AWS Platform.*

*Which of the following is not a feature of IAM?* * IAM offers fine-grained access control to AWS resources. * IAM allows you to setup biometric authentication, so that no passwords are required. * IAM integrates with existing active directory account allowing single sign-on. * IAM offers centralized control of your AWS account.

*IAM allows you to setup biometric authentication, so that no passwords are required.*

*Third-party sign-in (Federation) has been implemented in your web application to allow users who need access to AWS resources. Users have been successfully logging in using Google, Facebook, and other third-party credentials. Suddenly, their access to some AWS resources has been restricted. What is the likely cause of the restricted use of AWS resources?* * IAM policies for resources were changed, thereby restricting access to AWS resources. * Federation protocols are used to authorize services and need to be updated. * AWS changed the services allowed to be accessed via federated login. * The identity providers no longer allow access to AWS services.

*IAM policies for resources were changed, thereby restricting access to AWS resources.* When IAM policies are changed, they can impact the user experience and the services users can connect to. ----------------------------------- Option B is incorrect, Federation is used to authenticate, not to authorize services. Option C is incorrect, Federation allows for authenticating users, but does not authorize services. Option D is incorrect, the identity providers don't have the capability to authorize services; they authenticate users.

*A Solutions Architect is designing a shared service for hosting containers from several customers on Amazon ECS. These containers will use several AWS services. A container from one customer must not be able to access data from another customer. Which solution should the architect use to meet the above requirements?* * IAM roles for tasks. * IAM roles for EC2 Instances. * IAM Instance profile for EC2 Instances. * Security Group rules.

*IAM roles for tasks* With IAM roles for Amazon ECS tasks, you can specify an IAM role that can be used by the containers in a task. Applications must sign their AWS API requests with AWS credentials, and this feature provides a strategy for managing credentials for your applications to use, similar to the way that Amazon EC2 instance profiles provide credentials to EC2 instances.
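A minimal boto3 sketch of registering a task definition with a per-customer task role; the family name, role ARN, and image URI are placeholder assumptions:

```python
import boto3

ecs = boto3.client("ecs")

# Containers in this task receive credentials from the task role rather
# than from the instance role, isolating one customer from another.
ecs.register_task_definition(
    family="customer-a-app",
    taskRoleArn="arn:aws:iam::123456789012:role/CustomerATaskRole",
    containerDefinitions=[{
        "name": "app",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/app:latest",
        "memory": 512,
    }],
)
```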

*Which of the following databases is not supported using Amazon RDS?* * IBM DB2 * Microsoft SQL Server * PostgreSQL * Oracle

*IBM DB2* IBM DB2 is not supported in Amazon RDS. ------------------------- Microsoft SQL Server, PostgreSQL, and Oracle databases are supported in Amazon RDS.

*You are speaking with a former colleague who asks you about cloud migrations. The place she is working for runs a large fleet of on-premise Microsoft servers and they are concerned about licensing costs. Which of the following statements is invalid?* * License Mobility allows customers to move eligible Microsoft software to third-party cloud providers such as AWS for use on EC2 instances with default tenancy. * If I bring my own licenses into EC2 Dedicated Hosts or EC2 Dedicated Instances, then - subject to Microsoft's terms - Software Assurance is required. * EC2 Bare Metal instances give the customer full control of the configuration of instances just as they have on-premise: The customer has the ability to install a hypervisor directly on the hardware and therefore define and configure their own instance configurations of RAM, disk and vCPU which can minimize additional licensing costs. * AWS License Manager includes features to help our organization manage licenses across AWS and on-premises. With AWS License Manager, you can define licensing rules, track license usage, and enforce controls on license use to reduce the risk of license overages. You can also set usage limits to control licensing costs. There is no additional charge of AWS License Manager.

*If I bring my own licenses into EC2 Dedicated Hosts or EC2 Dedicated Instances, then - subject to Microsoft's terms - Software Assurance is required.* This statement is invalid: if you are bringing your own licenses into EC2 Dedicated Hosts or EC2 Dedicated Instances then, subject to Microsoft's terms, Software Assurance is not required.

*What is an additional way to secure the AWS accounts of both the root account and new users alike?* * Configure the AWS Console so that you can only log in to it from a specific IP Address range. * Store the access key ID and secret access key of all users in a publicly accessible plain text document on S3 of which only you and members of your organization know the address. * Configure the AWS Console so that you can only log in to it from your internal network IP address range. * Implement Multi-Factor Authentication for all accounts.

*Implement Multi-Factor Authentication for all accounts.*

*Individual instances are provisioned ________.* * In Regions * Globally * In Availability Zones

*In Availability Zones*

*You are working as AWS Solutions Architect for a large banking organization. The requirement is that under normal business hours, there would always be 24 web servers up and running in a region (example: US - West (Oregon)). It will be a three-tiered architecture connecting to the databases. The solution offered should be highly available, secure, cost-effective, and should be able to respond to the heavy requests during peak hours and tolerate up to one AZ failure.* * In a given region, use ELB behind two different AZ's, with minimum or desired 24 web servers hosted in a public subnet. And Multi-AZ database architecture in a private subnet. * In a given region, use ELB behind three different AZ's, each AZ having ASG, with a minimum or desired 12 web servers hosted in a public subnet. And Multi-AZ database architecture in a private subnet. * In a given region, use ELB behind two different AZ's, each AZ having ASG, with a minimum or desired 12 web servers hosted in a public subnet. And Multi-AZ database architecture in a private subnet. * In a given region, use ELB behind three different AZ's, each AZ having ASG, with minimum or desired 8 web servers hosted in a public subnet. And Multi-AZ database architecture in a different public subnet.

*In a given region, use ELB behind three different AZ's, each AZ having ASG, with a minimum or desired 12 web servers hosted in a public subnet. And Multi-AZ database architecture in a private subnet.* As the solution needs to tolerate up to one AZ failure, there are always 36 web servers available to cater to the service requests. If one AZ fails, there will still be 24 servers running, and in case two AZs fail there will always be 12 servers running, and ASG can be utilised to scale out the required number of servers. ----------------------------------- Option A is incorrect; everything looks good, but the designed architecture is not cost-effective, as 48 servers will be running all the time and there is no ASG to cater to additional load on the servers, even though it is fault tolerant to one AZ failure. Besides, it's always a good practice to use multiple AZ's to make the application highly available. Option C is incorrect, as it will not be a suitable solution: if one AZ fails, the other AZ will have only 12 servers running. One might think ASG is always there to take care of the load when the second AZ fails, but consider a scenario where the other AZ fails while traffic is at its peak; then the application will not be further scalable and users might face slow responses. Option D is incorrect; remember the design principle of keeping the databases in a private subnet. As this solution places the databases in another public subnet, the data can be exposed over the internet, making the application insecure.

*You have a data warehouse on AWS utilizing Amazon Redshift of 50 TB. Your data warehouse is located in us-west-1; however, you are opening a new office in London where you will be employing some data scientists. You will need a copy of this Redshift cluster in eu-west-2 for performance and latency considerations. What is the easiest way to manage this migration?* * Create a new redshift cluster in eu-west-2. Once provisioned use AWS data pipeline to export the data from us-east-1 to eu-west-2. * Order an AWS Snowball. Export the Redshift data to Snowball and then ship the snowball from us-east-1 to eu-west-2. Load the data into Redshift in London. * Export the data to S3 using Data Pipeline and configure Cross Region Replication to an S3 bucket based in London. Use AWS Lambda to import the data back to Redshift. * In the AWS console go in to Redshift and choose Backup, and then choose Configure Cross-Region Snapshots. Select Copy Snapshot and then choose the eu-west-2 region. Once successfully copied use the snapshot in the new region to create a new Redshift cluster from the snapshot.

*In the AWS console go in to Redshift and choose Backup, and then choose Configure Cross-Region Snapshots. Select Copy Snapshot and then choose the eu-west-2 region. Once successfully copied use the snapshot in the new region to create a new Redshift cluster from the snapshot.* Where AWS provides a service, it is wise to use it rather than trying to create a bespoke service. The AWS service will have been designed and tested to ensure robust and secure transfer taking into account key management and validation.
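The same configuration can also be applied programmatically. A hedged boto3 sketch, where the cluster identifier and retention period are placeholder assumptions:

```python
import boto3

# Call this against the region that hosts the source cluster.
redshift = boto3.client("redshift", region_name="us-west-1")

# Automatically copy snapshots of the cluster to eu-west-2.
redshift.enable_snapshot_copy(
    ClusterIdentifier="analytics-cluster",
    DestinationRegion="eu-west-2",
    RetentionPeriod=7,  # days to keep copied automated snapshots
)
```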

*As an AWS solution architect, you are building a new image processing application with a queuing service. There is a fleet of m4.large EC2 instances which poll SQS as images are uploaded by users. The image processing takes around 55 seconds for completion, and users are notified via email on completion. During the trial period, you find that duplicate messages are being generated, due to which users are getting multiple emails for the same image. Which of the following is the best option to eliminate duplicate messages before going to production?* * Create a delay queue for 60 seconds. * Increase visibility timeout to 60 seconds. * Create a delay queue greater than 60 seconds. * Decrease visibility timeout below 60 seconds.

*Increase visibility timeout to 60 seconds.* The default visibility timeout is 30 seconds. Since the application needs around 60 seconds to complete the processing, the visibility timeout should be increased to 60 seconds. This will hide the message from other consumers for 60 seconds, so they will not process the same file that is being processed by the original consumer. ----------------------------------- Options A & C are incorrect, as delay queues let you postpone the delivery of new messages to a queue for a number of seconds. Creating a delay queue of 60 seconds or more will delay delivery of new messages by that many seconds, not eliminate duplicate messages. Option D is incorrect, as the visibility timeout should be set to the maximum time it takes to process and delete a message from the queue. If the visibility timeout is set below 60 seconds, the message will become visible to other consumers while the original consumer is still working on it.
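A minimal boto3 sketch of raising the queue's default visibility timeout; the queue URL is a placeholder, and attribute values are passed as strings:

```python
import boto3

sqs = boto3.client("sqs")

# Keep a message hidden from other consumers for 60 seconds while one
# instance processes the image.
sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/image-jobs",
    Attributes={"VisibilityTimeout": "60"},
)
```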

*Your application is I/O bound, and your application needs around 36,000 IOPS. The application you are running is critical for the business. How can you make sure the application always gets all the IOPS it requests and the database is highly available?* * Install the database in EC2 using an EBS-optimized instance, and choose an I/O-optimized instance class with an SSD-based hard drive * Install the database in RDS using SSD * Install the database in RDS in multi-AZ using Provisioned IOPS and select 36,000 IOPS * Install multiple copies of read replicas in RDS so all the workload gets distributed across multiple read replicas and you can cater to the I/O requirement

*Install the database in RDS in multi-AZ using Provisioned IOPS and select 36,000 IOPS* You can choose to install the database in EC2, but if you can get all the same benefits by installing the database in RDS, then why not? If you install the database on plain SSD storage, you don't know whether you can meet the 36,000 IOPS requirement. A read replica only takes care of the read-only workload, and the requirement does not specify how the 36,000 IOPS are divided between reads and writes.

*You have a legacy application that needs a file system in the database server to write application files. Where should you install the database?* * You can achieve this using RDS because RDS has a file system in the database server * Install the database on an EC2 server to get full control * Install the database in RDS, mount an EFS from the RDS server, and give the EFS mount point to the application for writing the application files * Create the database using a multi-AZ architecture in RDS

*Install the database on an EC2 server to get full control* In this example, you need access to the operating system, and RDS does not give you access to the OS. You must install the database in an EC2 server to get complete control.

*Your organization is in the process of migrating to AWS. Your company has more than 10,000 employees, and it uses Microsoft Active Directory to authenticate. Creating an additional 10,000 users in AWS is going to be a painful activity for you, but all the users need to use the AWS services. What is the best way of providing them with access?* * Since all the employees have an account with Facebook, they can use Facebook to authenticate with AWS. * Tell each employee to create a separate account by using their own credit card; this way you don't have to create 10,000 users. * Write a script that can provision 10,000 users quickly. * Integrate AWS with Microsoft Active Directory.

*Integrate AWS with Microsoft Active Directory* Since AWS can be integrated with many identity providers, you should always see whether AWS can be integrated with your existing identity providers. ----------------------------------- You should never encourage employees to create a personal account for official purposes, since it can become a management nightmare. You can automate the provisioning of 10,000 users, but then you have to manage the 10,000 users in both places.

*What is AWS Storage Gateway?* * It allows large scale import/exports in to the AWS cloud without the use of an internet connection. * None of the above. * It is a physical or virtual appliance that can be used to cache S3 locally at a customer's site. * It allows a direct MPLS connection in to AWS.

*It is a physical or virtual appliance that can be used to cache S3 locally at a customer's site.* At its heart it is a way of using AWS S3 managed storage to supplement on-premise storage. It can also be used within a VPC in a similar way.

*An IAM policy takes which form?* * Python script * Written in C language * JSON code * XML code

*JSON code* It is written in JSON.
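To make the JSON form concrete, here is a minimal sketch of a policy document created via boto3; the policy name, bucket ARN, and granted actions are placeholder assumptions:

```python
import json
import boto3

iam = boto3.client("iam")

# A policy document is plain JSON: a Version plus one or more Statements.
# This example grants read-only access to a hypothetical bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": ["arn:aws:s3:::example-bucket",
                     "arn:aws:s3:::example-bucket/*"],
    }],
}

iam.create_policy(PolicyName="ExampleS3ReadOnly",
                  PolicyDocument=json.dumps(policy))
```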

*In what language are policy documents written?* * JSON * Python * Java * Node.js

*JSON*

*What are the languages that AWS Lambda supports?* (Choose two.) * Perl * Ruby * Java * Python

*Java* *Python* Perl and Ruby are not supported by Lambda.

*You've implemented AWS Key Management Service to protect your data in your application and other AWS services. Your global headquarters is in Northern Virginia (US East (N. Virginia)), where you created your keys and have provided the appropriate permissions to designated users and specific roles within your organization. While the N. America users are not having issues, German and Japanese users are unable to get KMS to function. What is the most likely cause?* * KMS is only offered in North America. * AWS CloudTrail has not been enabled to log events. * KMS master keys are region-specific and the applications are hitting the wrong API endpoints. * The master keys have been disabled.

*KMS master keys are region-specific and the applications are hitting the wrong API endpoints.* This is the most likely cause. The application must use the KMS endpoint for the region in which the keys were created. ----------------------------------- Option A is incorrect, KMS is offered in several regions, but keys are not transferrable out of the region they were created in. Option B is incorrect, CloudTrail is recommended for auditing but is not required. Option D is incorrect, the keys are working as expected where they were created; keys are region-specific.

*Which of the following does Amazon DynamoDB support?* (Choose two.) * Graph database * Key-value database * Document database * Relational database

*Key-value database* *Document database* Amazon DynamoDB supports key-value and document structures. It is not a relational database. It does not support graph databases.

*What product should you use if you want to process a lot of streaming data?* * Kinesis Firehose * Kinesis Data Stream * Kinesis Data Analytics * API Gateway

*Kinesis Data Stream* Kinesis Data Firehose is used mainly for loading data into destinations in batches rather than for real-time processing, Kinesis Data Analytics is used for transforming data, and API Gateway is used for managing APIs.

*You are creating a data lake in AWS. In the data lake you are going to ingest the data in real time and would like to perform fraud processing. Since you are going to analyze fraud, you need a response within a minute. What service should you use to ingest the data in real time?* * S3 * Kinesis Data Firehose * Kinesis Data Analytics * Kinesis Data Streams

*Kinesis Data Streams* Kinesis Data Streams allows for real-time data processing. ------------------------- S3 is used for storing data. Kinesis Data Firehose is used for loading batch data. Kinesis Data Analytics is used for transforming data during ingestion.

*You are creating a data lake in AWS, and one of the use cases for a data lake is a batch job. Which AWS service should you be using to ingest the data for batch jobs?* * Kinesis Streams * Kinesis Analytics * Kinesis Firehose * AWS Lambda

*Kinesis Firehose* Since the use case is a batch job, Kinesis Firehose should be used for data ingestion. ----------------------------------- Kinesis Streams is used to ingest real-time data and streams of data, whereas Kinesis Analytics is used to transform the data at the time of ingestion. AWS Lambda is used to run your code and can't be used to ingest data. However, you can write an AWS Lambda function to trigger the next step once the data ingestion is complete.

*Which product is not a good fit if you want to run a job for ten hours?* * AWS Batch * EC2 * Elastic Beanstalk * Lambda

*Lambda* Lambda is not a good fit because the maximum execution time for code in Lambda is five minutes. Using Batch you can run your code for as long as you want. Similarly, you can run your code for as long as you want on EC2 servers or by using Elastic Beanstalk.

*You want to subscribe to a topic. What protocol endpoints are available in SNS for you?* (Choose three.) * Lambda * E-mail * HTTP * Python

*Lambda* *E-mail* *HTTP* The SNS endpoints are Lambda, e-mail, and HTTP. ----------------------------------- Python is not an endpoint, it is a programming language.

*Where do you define the details of the type of servers to be launched when launching the servers using Auto Scaling?* * Auto Scaling group * Launch configuration * Elastic Load Balancer * Application load balancer

*Launch configuration* You define the type of servers to be launched in the launch configuration. The Auto Scaling group is used to define the scaling policies, Elastic Load Balancing is used to distribute the traffic across multiple instances, and the application load balancer is used to distribute HTTP/HTTPS traffic at OSI layer 7.

*You are running a fleet of EC2 instances for a web server, and you have integrated them with Auto Scaling. Whenever a new server is added to the fleet as part of Auto Scaling, your security team wants it to have the latest OS security fixes. What is the best way of achieving this objective?* * Once Auto Scaling launches a new EC2 instance, log into it and apply the security updates. * Run a cron job on a weekly basis to schedule the security updates. * Launch the instance with a bootstrapping script that is going to install the latest update. * No action is needed. Since Auto Scaling is going to launch the new instance, it will already have all the security fixes pre-installed in it.

*Launch the instance with a bootstrapping script that is going to install the latest update.* Whenever Auto Scaling creates a new instance, it picks up all the configuration details from the Auto Scaling group's launch configuration, so you don't have to do anything manually. ------------------------- If you log in to the new EC2 instance and apply the security updates manually, this model will not scale well: what if your business launches thousands of servers at the same time, or the instances are launched during the night by Auto Scaling? Who would log in and apply the updates? Similarly, if you run a cron job, you will be scheduling the security fix for a particular time; what if the instances are launched at a different time? A bootstrapping script with an update action will make sure the instance has all the security fixes before it is released for use. Even if Auto Scaling launches a new instance, it is not guaranteed that the instance will have all the security fixes; it will only have the fixes that were present when the AMI was last updated.
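A minimal boto3 sketch of baking such a bootstrapping script into a launch configuration; the names, AMI ID, and instance type are placeholders, and the AMI is assumed to be Amazon Linux, where "yum update" applies security fixes:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# The user-data script runs on first boot of every instance the group
# launches, so each new server patches itself before serving traffic.
user_data = """#!/bin/bash
yum update -y
"""

autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-lc-patched",
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    UserData=user_data,
)
```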

*On which layer of the Open Systems Interconnection model does the application load balancer perform?* * Layer 4 * Layer 7 * Layer 3 * Layer 5

*Layer 7* An application load balancer functions at the application layer, the seventh layer of the Open Systems Interconnection (OSI) model. ------------------------- A network load balancer functions at the fourth layer of the OSI model, whereas a classic load balancer operates at both layer 4 and layer 7 of the OSI model.

*You are deploying an application in multiple EC2 instances in different AZs and will be using ELB and Auto Scaling to scale up and scale down as per the demand. You are planning to store the session information in DynamoDB. Since DynamoDB has a public endpoint and you don't want to give Internet access to the application server, what is the most secure way the application server can talk to DynamoDB?* * Create a NAT instance in a public subnet and reach DynamoDB via the NAT instance. * Create a NAT gateway and reach DynamoDB via the NAT gateway. * Create a NAT instance in each AZ in the public subnet and reach DynamoDB via the NAT instance. * Leverage the VPC endpoint for DynamoDB.

*Leverage the VPC endpoint for DynamoDB* Amazon DynamoDB offers VPC endpoints that you can use to secure access to DynamoDB. The Amazon VPC endpoint for DynamoDB enables Amazon EC2 instances in your VPC to use their private IP addresses to access DynamoDB with no exposure to the public Internet. ------------------------- NAT instances and NAT gateways can't be leveraged for the communication between the EC2 server and DynamoDB.
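A hedged boto3 sketch of creating the gateway endpoint; the VPC and route table IDs are placeholders, and the service name follows the com.amazonaws.&lt;region&gt;.dynamodb convention:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a gateway endpoint so instances in the VPC reach DynamoDB
# over private IPs, with no Internet access required.
ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    RouteTableIds=["rtb-0123456789abcdef0"],
)
```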

*You need to automatically migrate objects from one S3 storage class to another based on the age of the data. Which S3 service can you use to achieve this?* * Glacier * Infrequent Access * Reduced Redundancy * Lifecycle Management

*Lifecycle Management* S3 Lifecycle Management provides the ability to define the lifecycle of your objects with a predefined policy and reduce your cost of storage. You can set a lifecycle transition policy to automatically migrate Amazon S3 objects to Standard - Infrequent Access (Standard - IA) and/or Amazon Glacier based on the age of the data.
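For example, a minimal boto3 sketch of an age-based lifecycle policy; the bucket name and the 30/90-day thresholds are placeholder assumptions:

```python
import boto3

s3 = boto3.client("s3")

# Transition objects to Standard-IA after 30 days and to Glacier after 90.
# An empty Prefix applies the rule to every object in the bucket.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "tier-by-age",
            "Filter": {"Prefix": ""},
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
        }],
    },
)
```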

*Your company has a set of EBS volumes and a set of adjoining EBS snapshots. They want to minimize the costs of the underlying EBS snapshots. Which of the following approaches provides the lowest cost for Amazon Elastic Block Store snapshots while giving you the ability to fully restore data?* * Maintain two snapshots: the original snapshot and the latest incremental snapshot. * Maintain a volume snapshot; subsequent snapshots will overwrite one another. * Maintain a single snapshot: the latest snapshot is both incremental and complete. * Maintain the most current snapshot, and archive the original snapshot to Amazon Glacier.

*Maintain a single snapshot: the latest snapshot is both incremental and complete* You can back up the data on your Amazon EBS volumes to Amazon S3 by taking point-in-time snapshots. Snapshots are incremental backups, which means that only the blocks on the device that have changed after your most recent snapshot are saved. This minimizes the time required to create the snapshot and saves on storage costs by not duplicating data. When you delete a snapshot, only the data unique to that snapshot is removed. Each snapshot contains all of the information needed to restore your data (from the moment when the snapshot was taken) to a new EBS volume.

*An application currently uses AWS RDS MySQL as its data layer. Due to recent performance issues on the database, it has been decided to separate the querying part of the application by setting up a separate reporting layer. Which of the following additional steps could also potentially assist in improving the performance of the underlying database?* * Make use of Multi-AZ to setup a secondary database in another Availability Zone. * Make use of Multi-AZ to setup a secondary database in another region. * Make use of Read Replicas to setup a secondary read-only database. * Make use of Read Replicas to setup a secondary read and write database.

*Make use of Read Replicas to setup a secondary read-only database.* Amazon RDS Read Replicas provide enhanced performance and durability for database (DB) instances. This feature makes it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads. You can create one or more replicas of a given source DB Instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput.

*What are the two different in-memory key-value engines Amazon ElastiCache currently supports?* (Choose two.) * Oracle in-memory database * SAP HANA * Memcached * Redis

*Memcached* *Redis* Amazon ElastiCache currently supports Memcached and Redis. ----------------------------------- Amazon ElastiCache does not support Oracle in-memory databases or SAP HANA.

*What are the two in-memory key-value engines that Amazon ElastiCache supports?* (Choose two.) * Memcached * Redis * MySQL * SQL Server

*Memcached* *Redis* MySQL and SQL Server are relational databases and not in-memory engines.

*You create a standard SQS queue and test it by creating a simple application that polls the queue for messages. After a message is retrieved, the application should delete it. You create three test messages in your SQS queue and discover that messages 1 and 3 are quickly deleted, but message 2 has remained in the queue. Which of the following could account for your findings?* (Choose 2) * Message 2 is invalid * Standard SQS queues cannot guarantee that messages are retrieved in first-in, first-out (FIFO) order. * The permissions on message 2 were incorrectly written. * Your application uses short-polling.

*Message 2 is invalid* *Your application uses short-polling.* With short-polling, multiple polls of the queue may be necessary to find all messages on the various nodes in the queue. The queue not being FIFO may impact the order, but not the eventual successful processing. SQS has options to control access to create and retrieve messages, but these are not per-message controls. That just leaves the possibility that it is a malformed message.

*You are consulting to a mid-sized company with a predominantly Mac & Linux desktop environment. In passing they comment that they have over 30TB of unstructured Word and spreadsheet documents of which 85% of these documents don't get accessed again after about 35 days. They wish that they could find a quick and easy solution to have tiered storage to store these documents in a more cost effective manner without impacting staff access. What options can you offer them?* (Choose 2) * Migrate documents to EFS storage and make use of life-cycle using Infrequent Access storage. * Migrate the document store to S3 storage and make use of life-cycle using Infrequent Access storage. * Migrate documents to File Gateway presented as iSCSI and make use of life-cycle using Infrequent Access storage. * Migrate documents to File Gateway presented as NFS and make use of life-cycle using Infrequent Access storage.

*Migrate documents to EFS storage and make use of life-cycle using Infrequent Access storage.* *Migrate documents to File Gateway presented as NFS and make use of life-cycle using Infrequent Access storage.* ----------------------------------- Trying to use S3 without File Gateway in front would be a major impact to the user environment. Using File Gateway is the recommended way to use S3 with shared document pools. Life-cycle management and Infrequent Access storage are available for both S3 and EFS. A restriction, however, is that 'Using Amazon EFS with Microsoft Windows is not supported'. File Gateway does not support iSCSI on the client side.

*An application currently using a NAT Instance is required to use a NAT Gateway. Which of the following can be used to accomplish this?* * Use NAT Instances along with the NAT Gateway. * Host the NAT Instance in the private subnet. * Migrate from NAT Instance to a NAT Gateway and host the NAT Gateway in the public subnet. * Convert the NAT Instance to a NAT Gateway.

*Migrate from a NAT Instance to a NAT Gateway and host the NAT Gateway in the public subnet.* You can stop using the deployed NAT Instances and start using the NAT Gateway service at any time, but you need to ensure that the NAT Gateway is deployed in the public subnet.

*While reviewing the Auto Scaling events for your application, you notice that your application is scaling up and down multiple times in the same hour. What design choice could you make to optimize costs while preserving elasticity?* (Choose 2) * Modify the Auto Scaling group termination policy to terminate the older instance first. * Modify the Auto Scaling group termination policy to terminate the newest instance first. * Modify the Auto Scaling group cool down timers. * Modify the Auto Scaling group to use Scheduled Scaling actions. * Modify the CloudWatch alarm period that triggers your Auto Scaling scale down policy.

*Modify the Auto Scaling group cool down timers.* *Modify the CloudWatch alarm period that triggers your Auto Scaling scale down policy.* Here, not enough time is being given for the scaling activity to take effect and for the entire infrastructure to stabilize after the scaling activity. This can be taken care of by increasing the Auto Scaling group cooldown timers. You will also have to define the right threshold and period for the CloudWatch alarm that triggers the scale down policy.
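
A minimal boto3 sketch of both changes (group name, policy ARN, and thresholds are hypothetical):

```python
import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

# Lengthen the cooldown so the group stabilizes before another scaling action fires.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="my-asg",  # hypothetical group name
    DefaultCooldown=600,            # seconds
)

# Hypothetical ARN of the existing scale-down policy.
scale_down_policy_arn = "arn:aws:autoscaling:us-east-1:123456789012:scalingPolicy:example"

# Require low CPU for three consecutive 5-minute periods before scaling down.
cloudwatch.put_metric_alarm(
    AlarmName="asg-scale-down",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "my-asg"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=3,
    Threshold=20.0,
    ComparisonOperator="LessThanThreshold",
    AlarmActions=[scale_down_policy_arn],
)
```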

*Which database engine is not supported in Amazon RDS?* * Oracle * SQL Server * PostgreSQL * MongoDB

*MongoDB* MongoDB is not supported in Amazon RDS. However, you can host it using EC2 servers. ----------------------------------- Oracle, SQL Server, and PostgreSQL are supported database engines in RDS.

*You have an I/O-intensive database in your production environment that requires regular backups. You need to configure it in such a way so that when an automated backup is taken, it does not impact your production environment. Which RDS option should you choose to help you accomplish this?* * Read Replicas * Cross Region Failover * Use Redshift for your backup environment. * Multi-AZ

*Multi-AZ* With Multi-AZ RDS instances and automated backups, I/O activity is no longer suspended on your primary during your preferred backup window, since backups are taken from the standby.

*When coding a routine to upload to S3, you have the option of using either single part upload or multipart upload. Identify all the possible reasons below to use multipart upload.* * Multipart upload delivers quick recovery from network issues. * Multipart upload delivers the ability to pause and resume object uploads. * Multipart upload delivers the ability to append data into an open data file. * Multipart upload delivers improved security in transit. * Multipart upload delivers improved throughput. * Multipart upload delivers the ability to begin an upload before you know the final object size.

*Multipart upload delivers quick recovery from network issues.* *Multipart upload delivers the ability to pause and resume object uploads.* *Multipart upload delivers improved throughput.* *Multipart upload delivers the ability to begin an upload before you know the final object size.* Multipart upload provides options for more robust file upload, in addition to handling larger files than single part upload.
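
A minimal boto3 sketch (file and bucket names are hypothetical); the high-level transfer API switches to multipart automatically above a size threshold:

```python
import boto3
from boto3.s3.transfer import TransferConfig

# Files above the threshold are uploaded in parallel 8 MiB parts;
# failed parts are retried individually, giving quick recovery from network issues.
config = TransferConfig(
    multipart_threshold=8 * 1024 * 1024,
    multipart_chunksize=8 * 1024 * 1024,
    max_concurrency=4,
)

s3 = boto3.client("s3")
s3.upload_file("big-file.bin", "my-bucket", "big-file.bin", Config=config)  # hypothetical names
```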

*A company hosts 5 web servers in AWS. They want to ensure that Route 53 can be used to route user traffic to random healthy web servers when users request the underlying web application. Which routing policy should be used to fulfill this requirement?* * Simple * Weighted * Multivalue Answer * Latency

*Multivalue Answer* If you want to route traffic approximately randomly to multiple resources, such as web servers, you can create one multivalue answer record for each resource and, optionally, associate an Amazon Route 53 health check with each record. For example, suppose you manage an HTTP web service with a dozen web servers that each have their own IP address. No single web server could handle all of the traffic, but if you create a dozen multivalue answer records, Amazon Route 53 responds to DNS queries with up to eight healthy records per query and gives different answers to different DNS resolvers. If a web server becomes unavailable after a resolver caches a response, client software can try another IP address in the response. Multivalue answer routing policy - Use when you want Route 53 to respond to DNS queries with up to eight healthy records selected at random. ----------------------------------- Simple routing policy - Use for a single resource that performs a given function for your domain, for example, a web server that serves content for the example.com website. Latency routing policy - Use when you have resources in multiple locations and you want to route traffic to the resource that provides the best latency. Weighted routing policy - Use to route traffic to multiple resources in proportions that you specify.
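
A minimal boto3 sketch of creating one such record (zone ID, name, and IP are hypothetical); one record would be created per web server:

```python
import boto3

route53 = boto3.client("route53")
route53.change_resource_record_sets(
    HostedZoneId="Z0000000000000",  # hypothetical hosted zone ID
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.example.com.",
            "Type": "A",
            "SetIdentifier": "web-server-1",   # one multivalue record per server
            "MultiValueAnswer": True,
            "TTL": 60,
            "ResourceRecords": [{"Value": "203.0.113.10"}],
            # Optionally attach a health check so unhealthy servers are excluded:
            # "HealthCheckId": "<health-check-id>",
        },
    }]},
)
```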

*Your company hosts 10 web servers all serving the same web content in AWS. They want Route 53 to serve traffic to random web servers. Which routing policy will meet this requirement, and provide the best resiliency?* * Weighted Routing * Simple Routing * Multivalue Routing * Latency Routing

*Multivalue Routing* Multivalue answer routing lets you configure Amazon Route 53 to return multiple values, such as IP addresses for your web servers, in response to DNS queries. Route 53 responds to DNS queries with up to eight healthy records and gives different answers to different DNS resolvers. The choice of which value to use is left to the requesting service, effectively creating a form of randomization.

*Your company is planning on setting up a VPC with private and public subnets and then hosting EC2 Instances in the subnet. It has to be ensured that instances in the private subnet can download updates from the internet. Which of the following needs to be part of the architecture?* * WAF * Direct Connect * NAT Gateway * VPN

*NAT Gateway* You can use a network address translation (NAT) gateway to enable instances in a private subnet to connect to the internet or other AWS services, but prevent the internet from initiating a connection with those instances. ----------------------------------- Option A is invalid since WAF is a web application firewall. Options B and D are invalid since these are used to connect on-premises infrastructure to AWS VPCs.
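
A minimal boto3 sketch of the setup (subnet and route table IDs are hypothetical): create the NAT Gateway in the public subnet, then point the private subnet's default route at it:

```python
import boto3

ec2 = boto3.client("ec2")

# A NAT Gateway lives in a public subnet and needs an Elastic IP.
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0aaa1111bbbb2222c",      # hypothetical public subnet
    AllocationId=eip["AllocationId"],
)
nat_id = nat["NatGateway"]["NatGatewayId"]
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

# Send the private subnet's Internet-bound traffic through the NAT Gateway.
ec2.create_route(
    RouteTableId="rtb-0ddd3333eeee4444f",     # hypothetical private route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_id,
)
```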

*An AWS VPC is a component of which group of AWS services?* * Database Services * Networking Services * Compute Services * Global Infrastructure

*Networking Services* A Virtual Private Cloud (VPC) is a virtual network dedicated to a single AWS account. ----------------------------------- It is logically isolated from other virtual networks in the AWS cloud, providing compute resources with security and robust networking functionality.

*Every user you create in the IAM system starts with ________.* * Full Permissions * No Permissions * Partial Permissions

*No Permissions*

*Every user you create in the IAM system starts with ________.* * Partial Permissions * No Permissions * Full Permissions

*No Permissions*

*What is the default level of access a newly created IAM User is granted?* * Administrator access to all AWS services. * Power user access to all AWS services. * No access to any AWS services. * Read only access to all AWS services.

*No access to any AWS services.*

*Can I delete a snapshot of an EBS Volume that is used as the root device of a registered AMI?* * Only using the AWS API. * Yes. * Only via the Command Line. * No.

*No*

*To save administration headaches, a consultant advises that you leave all security groups in web facing subnets open on port 22 to 0.0.0.0/0 CIDR. That way, you can connect wherever you are in the world. Is this a good security design?* * Yes * No

*No* 0.0.0.0/0 would allow ANYONE from ANYWHERE to connect to your instances. This is generally a bad plan. The phrase 'web facing subnets' does not mean just web servers; it would include any instances in that subnet, some of which you may not want strangers attacking. You would only allow 0.0.0.0/0 on port 80 or 443 to connect to your public-facing web servers, or preferably only to an ELB. Good security starts by limiting public access to only what the customer needs.

*Can I move a reserved instance from one region to another?* * It depends on the region. * Only in the US. * Yes. * No.

*No* Depending on your type of RI, you can modify the AZ, scope, network platform, or instance size (within the same instance type), but not the Region. In some circumstances you can sell RIs, but only if you have a US bank account.

*Can you add an IAM role to an IAM group?* * Yes * No * Yes, if there are ten members in the group * Yes, if the group allows adding a role

*No* No, you can't add an IAM role to an IAM group.

*Can you attach an EBS volume to more than one EC2 instance at the same time?* * Yes * Depends on which region. * No. * If that EC2 volume is part of an AMI

*No.*

*What are the two languages AWS Lambda supports?* (Choose two.) * C++ * Node.js * Ruby * Python

*Node.js* *Python* Lambda supports Node.js and Python. In addition, it supports Java and C#. ------------------------- Lambda does not support C++ and Ruby.

*Which load balancer is not capable of doing the health check?* * Application load balancer * Network load balancer * Classic load balancer * None of the above

*None of the above* All the load balancers are capable of doing a health check.

*You are using an EC2 instance to process a message that is retrieved from the SQS queue. While processing the message, the EC2 instance dies. What is going to happen to the message?* * The message keeps on waiting until the EC2 instance comes back online. Once the EC2 instance is back online, the processing restarts. * The message is deleted from the SQS queue. * Once the message visibility timeout expires, the message becomes available for processing by another EC2 instance. * SQS knows that the EC2 server has terminated. It re-creates another message automatically and resends the request.

*Once the message visibility timeout expires, the message becomes available for processing by another EC2 instance.* Using message visibility, you can define when the message is available for reprocessing. ------------------------- Amazon SQS doesn't automatically delete the message. Because Amazon SQS is a distributed system, there is no guarantee that the consumer actually receives the message. Thus, the consumer must delete the message from the queue after receiving and processing it. Amazon SQS sets a visibility timeout, which is a period of time during which Amazon SQS prevents other consumers from receiving and processing the message. If the consumer fails before deleting the message and your system doesn't call the DeleteMessage action for that message before the visibility timeout expires, the message becomes visible to other consumers, and the message is received again.
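
A minimal boto3 sketch of this consume-then-delete flow (queue URL and processing function are hypothetical):

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/work-queue"  # hypothetical

def handle(body):
    print("processing", body)  # stand-in for the real processing work

resp = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=1,
    VisibilityTimeout=120,  # hidden from other consumers for 2 minutes
)
for msg in resp.get("Messages", []):
    handle(msg["Body"])
    # Only this explicit delete removes the message. If the instance dies first,
    # the message becomes visible again once the visibility timeout expires.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```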

*You work for a major news network in Europe. They have just released a new mobile app that allows users to post their photos of newsworthy events in real time. Your organization expects this app to grow very quickly, essentially doubling its user base each month. The app uses S3 to store the images, and you are expecting sudden and sizable increases in traffic to S3 when a major news event takes place (as users will be uploading large amounts of content.) You need to keep your storage costs to a minimum, and it does not matter if some objects are lost. With these factors in mind, which storage media should you use to keep costs as low as possible?* * S3 - Provisioned IOPS * S3 - One Zone-Infrequent Access * S3 - Infrequently Accessed Storage * Glacier * S3 - Reduced Redundancy Storage (RRS)

*One Zone-Infrequent Access* The key driver here is cost, so an awareness of cost is necessary to answer this. ----------------------------------- Full S3 is quite expensive at around $0.023 per GB for the lowest band. S3 Standard-IA is $0.0125 per GB, S3 One Zone-IA is $0.01 per GB, and legacy S3-RRS is around $0.024 per GB for the lowest band. Of the offered solutions, S3 One Zone-IA is the cheapest suitable option. Glacier cannot be considered as it is not intended for direct access; however, it comes in at around $0.004 per GB. Of course, you spotted that RRS is being deprecated, and there is no such thing as S3 - Provisioned IOPS. In this case One Zone-IA should be fine, as users will 'post' material but only the organization will access it, and only to find relevant material. The question states that there is no concern if some material is lost.

*Will an Amazon EBS root volume persist independently from the life of the terminated EC2 instance to which it was previously attached? In other words, if I terminated an EC2 instance, would that EBS root volume persist?* * It depends on the region in which the EC2 instance is provisioned. * Yes. * Only if I specify (using either the AWS Console or the CLI) that it should do so. * No.

*Only if I specify (using either the AWS Console or the CLI) that it should do so* *You can control whether an EBS root volume is deleted when its associated instance is terminated.* The default delete-on-termination behaviour depends on whether the volume is a root volume, or an additional volume. By default, the DeleteOnTermination attribute for root volumes is set to 'true.' However, this attribute may be changed at launch by using either the AWS Console or the command line. For an instance that is already running, the DeleteOnTermination attribute must be changed using the CLI.
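
For a running instance, the same change can also be made through an SDK rather than the CLI; a minimal boto3 sketch (instance ID is hypothetical, and the root device name varies by AMI):

```python
import boto3

ec2 = boto3.client("ec2")
ec2.modify_instance_attribute(
    InstanceId="i-0123456789abcdef0",        # hypothetical instance
    BlockDeviceMappings=[{
        "DeviceName": "/dev/xvda",           # root device name varies by AMI
        "Ebs": {"DeleteOnTermination": False},
    }],
)
```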

*Which RDS engine does not support read replicas?* * MySQL * Aurora MySQL * PostgreSQL * Oracle

*Oracle* Only RDS Oracle does not support read replicas; the rest of the engines do support it.

*You have been designing a CloudFormation template that creates one elastic load balancer fronting two EC2 instances. Which section of the template should you edit so that the DNS of the load balancer is returned upon creation of the stack?* * Resources * Parameters * Outputs * Mappings

*Outputs* The Outputs section declares values that are returned when the stack is created, such as the DNS name of the load balancer. ----------------------------------- Option A is incorrect because Resources is used to define the main resources in the template. Option B is incorrect because Parameters is used to define values which can be taken in during template deployment. Option D is incorrect because Mappings is used to map key-value pairs in a template.
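
In the template itself, the Outputs section would declare the value, for example `Value: !GetAtt MyLoadBalancer.DNSName` (logical name hypothetical); once the stack is created, the outputs can be read back, as in this minimal boto3 sketch (stack name hypothetical):

```python
import boto3

cfn = boto3.client("cloudformation")
stack = cfn.describe_stacks(StackName="my-web-stack")["Stacks"][0]

# Outputs declared in the template are returned with the stack's properties.
for output in stack.get("Outputs", []):
    print(output["OutputKey"], "=", output["OutputValue"])
```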

*You've been tasked with migrating an on-premise application architecture to AWS. During the Design process, you give consideration to current on-premise security and identify the security attributes you are responsible for on AWS. Which of the following does AWS provide for you as part of the shared responsibility model?* (Choose 2) * User access to the AWS environment * Physical network infrastructure * Instance security * Virtualization Infrastructure

*Physical network infrastructure* *Virtualization Infrastructure* Understanding the AWS Shared Responsibility Model will help you answer quite a few exam questions by recognizing false answers quickly.

*A company is planning on setting up a web-based application. They need to ensure that users across the world have the ability to view the pages from the web site with the least amount of latency. How can you accomplish this?* * Use Route 53 with latency-based routing * Place a CloudFront distribution in front of the web application * Place an Elastic Load Balancer in front of the web application * Place ElastiCache in front of the web application

*Place a CloudFront distribution in front of the web application.* Amazon CloudFront is a global content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to your viewers with low latency and high transfer speeds. CloudFront is integrated with AWS - including physical locations that are directly connected to the AWS global infrastructure, as well as software that works seamlessly with services including AWS Shield for DDoS mitigation, Amazon S3, Elastic Load Balancing or Amazon EC2 as origins for your applications, and Lambda@Edge to run custom code close to your viewers. ----------------------------------- Option A is incorrect since that is used for multiple sites and latency-based routing between the sites. Option C is incorrect since that is used for fault tolerance of the web application. Option D is incorrect since that is used for caching requests in front of the database layer.

*You have a web application hosted on an EC2 Instance in AWS which is being accessed by users across the globe. The Operations team has been receiving support requests about extreme slowness from users in some regions. What can be done to the architecture to improve the response time for these users?* * Add more EC2 Instances to support the load. * Change the Instance type to a higher instance type. * Add Route 53 health checks to improve the performance. * Place the EC2 Instance behind CloudFront.

*Place the EC2 Instance behind CloudFront.* Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations. When a user requests content that you're serving with CloudFront, the user is routed to the edge location that provides the lowest latency (time delay), so that content is delivered with the best performance. ----------------------------------- Options A and B are incorrect: the latency issue is experienced by people from certain parts of the world only, so increasing the number of EC2 Instances or the instance size does not make much of a difference. Option C is incorrect: Route 53 health checks are meant to determine whether the instance is healthy or not; since the case deals with slow responses to users, health checks do not help. For improving latency issues, CloudFront is a good solution.

*You plan on hosting a web application on AWS. You create an EC2 Instance in a public subnet which needs to connect to an EC2 Instance that will host an Oracle database. Which of the following steps should be taken to ensure that a secure setup is in place?* (Choose 2) * Place the EC2 Instance with the Oracle database in the same public subnet as Web server for faster communication. * Place the EC2 Instance with the Oracle database in a separate private subnet. * Create a database Security group which allows incoming traffic only from the Web server's security group. * Ensure that the database security group allows incoming traffic from 0.0.0.0/0

*Place the EC2 Instance with the Oracle database in a separate private subnet.* *Create a database Security group which allows incoming traffic only from the Web server's security group.* The best and most secure option is to place the database in a private subnet and to ensure that access is not allowed from all sources, but only from the web servers. ----------------------------------- Option A is incorrect because, as per the best practice guidelines, DB instances are placed in private subnets and allowed to communicate with web servers in the public subnet. Option D is incorrect because allowing all incoming traffic from the Internet to the DB instance is a security risk.
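
A minimal boto3 sketch of such a security group (VPC and web server group IDs are hypothetical); the inbound rule references the web tier's security group rather than a CIDR range:

```python
import boto3

ec2 = boto3.client("ec2")

db_sg = ec2.create_security_group(
    GroupName="oracle-db-sg",
    Description="Oracle DB access from web tier only",
    VpcId="vpc-0123456789abcdef0",  # hypothetical VPC
)["GroupId"]

# Allow the Oracle listener port only from the web servers' security group.
ec2.authorize_security_group_ingress(
    GroupId=db_sg,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 1521,
        "ToPort": 1521,
        "UserIdGroupPairs": [{"GroupId": "sg-0aaa1111bbbb2222c"}],  # hypothetical web SG
    }],
)
```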

*A company has an application that delivers objects from S3 to users. Of late, some users spread across the globe have been complaining of slow response times. Which of the following additional steps would help in building a cost-effective solution and also help ensure that the users get optimal response times for objects from S3?* * Use S3 Replication to replicate the objects to regions closest to the users. * Ensure S3 Transfer Acceleration is enabled to ensure all users get the desired response times. * Place an ELB in front of S3 to distribute the load across S3. * Place the S3 bucket behind a CloudFront distribution.

*Place the S3 bucket behind a CloudFront distribution* If your workload is mainly sending GET requests, in addition to the preceding guidelines, you should consider using Amazon CloudFront for performance optimization. Integrating Amazon CloudFront with Amazon S3, you can distribute content to your users with low latency and a high data transfer rate. You will also send fewer direct requests to Amazon S3, which will reduce your costs. For example, suppose that you have a few objects that are very popular. Amazon CloudFront fetches those objects from Amazon S3 and caches them. Amazon CloudFront can then serve future requests for the objects from its cache, reducing the number of GET requests it sends to Amazon S3. ----------------------------------- Options A and B are incorrect: S3 Cross-Region Replication and Transfer Acceleration incur additional costs. Option C is incorrect: ELB is used to distribute traffic to EC2 Instances, not S3.

*An application reads and writes objects to an S3 bucket. When the application is fully deployed, the read/write traffic is expected to be 5,000 requests per second for the addition of data and 7,000 requests per second to retrieve data. How should the architect maximize the Amazon S3 performance?* * Use as many S3 prefixes as you need in parallel to achieve the required throughput. * Use the STANDARD_IA storage class. * Prefix each object name with a hex hash key along with the current date. * Enable versioning on the S3 bucket.

*Prefix each object name with a hex hash key along with the current date.* *NOTE:* Based on the new S3 performance announcement, "S3 request rate performance increase removes any previous guidance to randomize object prefixes to achieve faster performance." However, the Amazon exam questions and answers have not yet been updated, so Option C is the correct answer as per the AWS exam. This recommendation for increasing performance in the case of a high request rate in S3 is given in the documentation.
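
A minimal sketch of such a key scheme (the digest choice and prefix length are illustrative, not AWS requirements):

```python
import hashlib
from datetime import date

def hashed_key(object_name: str) -> str:
    """Prepend a short hex hash so keys spread across many prefixes."""
    hex_hash = hashlib.md5(object_name.encode()).hexdigest()[:4]
    return f"{hex_hash}/{date.today():%Y-%m-%d}/{object_name}"

print(hashed_key("report.csv"))  # e.g. '6e4a/2024-01-15/report.csv'
```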

*Your application has rapid upscale in usage, and usage peaks at 90% during the hours of 9 AM and 10 AM every day. All other hours require only 10% of the peak resources. What is the best way to scale your application so you're only paying for max resources during peak hours?* * Proactive cyclic scaling * Reactive cyclic scaling * Reactive event-based scaling * Proactive event-based scaling

*Proactive cyclic scaling* Proactive cyclic scaling is scaling that occurs at a fixed interval (daily, weekly, monthly, quarterly). The proactive approach can be very effective when the upscale is large and rapid and you cannot wait for the delays of a sequence of auto-scaling steps.
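
A minimal boto3 sketch of proactive cyclic scaling using scheduled actions (group name, capacities, and times are hypothetical; recurrence is a cron expression in UTC):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Scale out shortly before the daily 9-10 AM peak...
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="my-asg",             # hypothetical group
    ScheduledActionName="daily-peak-scale-out",
    Recurrence="45 8 * * *",                   # 08:45 UTC every day
    DesiredCapacity=18,
)

# ...and back in after the peak has passed.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="my-asg",
    ScheduledActionName="daily-peak-scale-in",
    Recurrence="15 10 * * *",
    DesiredCapacity=2,
)
```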

*A company is required to use the AWS RDS service to host a MySQL database. This database is going to be used for production purposes and is expected to experience a high number of read/write activities. Which of the below underlying EBS Volume types would be ideal for this database?* * General Purpose SSD * Provisioned IOPS SSD * Throughput Optimized HDD * Cold HDD

*Provisioned IOPS SSD* This is the highest-performance SSD volume type for mission-critical, low-latency or high-throughput workloads: critical business applications that require sustained IOPS performance, or more than 10,000 IOPS or 160 MiB/s of throughput per volume, and large database workloads.

*A company wants to host a selection of MongoDB instances. They are expecting a high load and want latency to be as low as possible. As an architect, you need to ensure that the right storage is used to host the MongoDB database. Which of the following would you incorporate as the underlying storage layer?* * Provisioned IOPS * General Purpose SSD * Throughput Optimized HDD * Cold HDD

*Provisioned IOPS* High performance SSD volume designed for low latency-sensitive transactional workloads.

*A Solution Architect is designing an online shopping application running in a VPC on EC2 Instances behind an ELB Application Load Balancer. The instances run in an Auto Scaling group across multiple Availability Zones. The application tier must read and write data to a customer managed database cluster. There should be no access to the database from the Internet, but the cluster must be able to obtain software patches from the Internet. Which VPC design meets these requirements?* * Public subnets for both the application tier and the database cluster. * Public subnets for the application tier, and private subnets for the database cluster. * Public subnets for the application tier and NAT Gateway, and private subnets for the database cluster. * Public subnets for the application tier, and private subnets for the database cluster and NAT Gateway.

*Public subnets for the application tier and NAT Gateway, and private subnets for the database cluster.* We always need to keep the NAT Gateway in a public subnet, because it needs to communicate with the internet. AWS says: "To create a NAT gateway, you must specify the public subnet in which the NAT gateway should reside. You must also specify an Elastic IP address to associate with the NAT gateway when you create it. After you've created a NAT gateway, you must update the route table associated with one or more of your private subnets to point Internet-bound traffic to the NAT gateway. This enables instances in your private subnets to communicate with the internet." *Note:* The requirement here is that *there should be no access to the database from the Internet, but the cluster must be able to obtain software patches from the Internet.* 1) *There should be no access to the database from the Internet:* to achieve this, we launch the database inside the private subnet. 2) *But the cluster must be able to obtain software patches from the Internet:* for this, we create the NAT Gateway inside the *public subnet* (the subnet with an Internet gateway attached is known as a public subnet). Through the NAT Gateway, the database inside the private subnet can access the internet. Option D places the NAT Gateway in a private subnet, so Option C covers all the points discussed and is the correct answer.

*Your company has been running its core application on a fleet of r4.xlarge EC2 instances for a year. You are confident that you understand the application's steady-state performance, and now you have been asked to purchase Reserved Instances (RIs) for a further 2 years to cover the existing EC2 instances, with the option of moving to other Memory or Compute optimised instance families when they are introduced. You also need to have the option of moving Regions in the future. Which of the following options meets the above criteria whilst offering the greatest flexibility and maintaining the best value for money?* * Purchase a 1 year Standard Zonal RI for 3 years, then sell the unused RI on the Reserved Instance Marketplace. * Purchase a 1 year Convertible RI for each EC2 instance, for 2 consecutive years running. * Purchase a Convertible RI for 3 years, then sell the unused RI on the Reserved Instance Marketplace. * Purchase a Scheduled RI for 3 years, then sell the unused RI on the Reserved Instance Marketplace.

*Purchase a 1 year Convertible RI for each EC2 instance, for 2 consecutive years running.* When answering this question, it's important to first exclude those options which are not relevant. The question states that the RI should allow for moving between instance families, and this immediately rules out Standard and Scheduled RIs, as only Convertible RIs can do this. Of the two Convertible RI options, the 3-year option can be ruled out because it suggests selling unused RI capacity on the Reserved Instance Marketplace, which is not available for Convertible RIs. That leaves only one answer as being correct.

*You require the ability to analyze a customer's clickstream data on a website so they can do a behavioral analysis. Your customer needs to know what sequence of pages and ads their customer clicked on. This data will be used in real time to modify the page layouts as customers click through the site to increase stickiness and advertising click-through. Which option meets the requirements for capturing and analyzing this data?* * Log clicks in weblogs by URL store to Amazon S3, and then analyze with Elastic MapReduce. * Push web clicks by session to Amazon Kinesis and analyze behavior using Kinesis workers. * Write click event directly to Amazon Redshift and then analyze with SQL. * Publish web clicks by session to an Amazon SQS queue. Then send the events to AWS RDS for further processing.

*Push web clicks by session to Amazon Kinesis and analyze behavior using Kinesis workers.* Amazon Kinesis Data Streams enables you to build custom applications that process or analyze streaming data for specialized needs. Kinesis Data Streams can continuously capture and store terabytes of data per hour from hundreds of thousands of sources, such as website clickstreams, financial transactions, social media feeds, IT logs, and location-tracking events.
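
A minimal boto3 sketch of a producer pushing one click event (stream name and event fields are hypothetical); using the session ID as the partition key keeps each session's clicks in order on a single shard:

```python
import json
import boto3

kinesis = boto3.client("kinesis")

click = {"session_id": "s-42", "page": "/checkout", "ad_id": "ad-7"}  # sample event
kinesis.put_record(
    StreamName="clickstream",               # hypothetical stream
    Data=json.dumps(click).encode("utf-8"),
    PartitionKey=click["session_id"],       # routes a session's events to one shard
)
```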

*Your organization already had a VPC (10.10.0.0/16) setup with one public subnet (10.10.1.0/24) and two private subnets - private subnet 1 (10.10.2.0/24) and private subnet 2 (10.10.3.0/24). The public subnet has the main route table, and the two private subnets have two different route tables respectively. The AWS sysops team reports a problem stating the EC2 instance in private subnet 1 cannot communicate with the RDS MySQL database which is in private subnet 2. What are the possible reasons?* (Choose 2) * One of the private subnet route table's local route has been changed to restrict only within the subnet IP range. * RDS security group inbound rule is incorrectly configured with 10.10.1.0/24 instead of 10.10.2.0/24 * 10.10.3.0/24 subnet's NACL is modified to deny inbound on port 3306 from subnet 10.10.2.0/24 * RDS Security group outbound does not contain a rule for ALL traffic or port 3306 for 10.10.2.0/24 IP range.

*RDS security group inbound rule is incorrectly configured with 10.10.1.0/24 instead of 10.10.2.0/24* *10.10.3.0/24 subnet's NACL is modified to deny inbound on port 3306 from subnet 10.10.2.0/24* Option B is possible because the security group is configured with the public subnet IP range instead of the private subnet 1 IP range, and the EC2 instance is in private subnet 1. So EC2 will not be able to communicate with RDS in private subnet 2. ----------------------------------- Option A is not possible because, for any route table, the local route cannot be edited or deleted. Every route table contains a local route for communication within the VPC over the IPv4 CIDR block. If you've associated an IPv6 CIDR block with your VPC, your route tables contain a local route for the IPv6 CIDR block. You cannot modify or delete these routes. Option D is not correct because Security Groups are stateful - if you send a request from your instance, the response traffic for that request is allowed to flow in regardless of inbound security group rules, and responses to allowed inbound traffic are allowed to flow out, regardless of outbound rules.

*Which of the below are database services from AWS?* (Choose 2) * RDS * S3 * DynamoDB * EC2

*RDS* *DynamoDB* RDS is a service for relational databases provided by AWS. DynamoDB is AWS's fast, flexible NoSQL database service. ----------------------------------- S3 provides the ability to store files in the cloud and is not suitable for databases, while EC2 is part of the compute family of services.

*S3 has what consistency model for PUTs of new objects?* * Write After Read Consistency * Eventual Consistency * Read After Write Consistency * Usual Consistency

*Read After Write Consistency*

*Which product is not serverless?* * Redshift * DynamoDB * S3 * AWS Lambda

*Redshift* DynamoDB, S3, and AWS Lambda all are serverless.

*What is each unique location in the world where AWS has a cluster of data centers called?* * Region * Availability zone * Point of presence * Content delivery network

*Region* ----------------------------------- AZs are inside a region, so they are not unique. POP and content delivery both serve the purpose of speeding up distribution.

*You run an ad-supported photo sharing website using S3 to serve photos to visitors of your site. At some point you find out that other sites have been linking to the photos on your site, causing loss to your business. What is an effective method to mitigate this?* * Remove public read access and use signed URLs with expiry dates. * Use CloudFront distributions for static content. * Block the IPs of the offending websites in Security Groups. * Store photos on an EBS volume of the web server.

*Remove public read access and use signed URLs with expiry dates.* ----------------------------------- Option B is incorrect because CloudFront is only used for the distribution of content across edge locations, not for restricting access to content. Option C is not feasible: because of their dynamic nature, blocking IPs is challenging, and you will not know which sites are accessing your main site. Option D is incorrect since storing photos on an EBS volume is neither a good practice nor an ideal architectural approach for an AWS Solutions Architect.

*Your company is running a photo sharing website. Currently all the photos are stored in S3. At some point the company finds out that other sites have been linking to the photos on your site, causing loss to your business. You need to implement a solution for the company to mitigate this issue. Which of the following would you look at implementing?* * Remove public read access and use signed URLs with expiry dates. * Use Cloud Front distributions for static content. * Block the IPs of the offending websites in Security Groups. * Store photos on an EBS volume of the web server.

*Remove public read access and use signed URLs with expiry dates.* A pre-signed URL gives you access to the object identified in the URL, provided that the creator of the pre-signed URL has permissions to access that object. That is, if you receive a pre-signed URL to upload an object, you can upload the object only if the creator of the pre-signed URL has the necessary permissions to upload that object. ----------------------------------- Option B is incorrect since CloudFront is only used for distribution of content across edge locations; it is not used for restricting access to content. Option C is incorrect since blocking IPs is challenging because they are dynamic in nature, and you will not know which sites are accessing your main site. Option D is incorrect since storing photos on an EBS volume is not a good practice or architectural approach for an AWS Solutions Architect.
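
A minimal boto3 sketch of generating such a link (bucket and key are hypothetical); the URL stops working once ExpiresIn has elapsed, which defeats long-lived hot-linking:

```python
import boto3

s3 = boto3.client("s3")

url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-photo-bucket", "Key": "photos/img-001.jpg"},  # hypothetical
    ExpiresIn=3600,  # link is valid for one hour
)
print(url)
```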

*You run a popular photo sharing website that depends on S3 to store content. Paid advertising is your primary source of revenue. However, you have discovered that other websites are linking directly to the images in your buckets, not to the HTML pages that serve the content. This means that people are not seeing the paid advertising, and you are paying AWS unnecessarily to serve content directly from S3. How might you resolve this issue?* * Use security groups to blacklist the IP addresses of the sites that link directly to your S3 bucket. * Remove the ability for images to be served publicly to the site and then use signed URLs with expiry dates. * Use CloudFront to serve the static content. * Use EBS rather than S3 to store the content.

*Remove the ability for images to be served publicly to the site and then use signed URLs with expiry dates.*

*You need to know both the private IP address and public IP address of your EC2 instance. You should ________.* * Use the following command: AWS EC2 DisplayIP. * Retrieve the instance User Data from http://169.254.169.254/latest/meta-data/. * Retrieve the instance Metadata from http://169.254.169.254/latest/meta-data/. * Run IPCONFIG (Windows) or IFCONFIG (Linux).

*Retrieve the instance Metadata from http://169.254.169.254/latest/meta-data/.* Instance Metadata and User Data can be retrieved from within the instance via a special URL. Similar information can be extracted by using the API via the CLI or an SDK.
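
A minimal sketch of querying the metadata service from on the instance (note that instances enforcing IMDSv2 require fetching a session token first, which this sketch omits):

```python
import urllib.request

BASE = "http://169.254.169.254/latest/meta-data/"

# local-ipv4 and public-ipv4 are standard metadata paths.
for path in ("local-ipv4", "public-ipv4"):
    with urllib.request.urlopen(BASE + path, timeout=2) as resp:
        print(path, "=", resp.read().decode())
```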

*You run a meme creation website that stores the original images in S3 and each meme's metadata in DynamoDB. You need to decide upon a low-cost storage option for the memes themselves. If a meme object is unavailable or lost, a Lambda function will automatically recreate it using the original file from S3 and the metadata from DynamoDB. Which storage solution should you use to store the non-critical, easily reproducible memes in the most cost-effective way?* * S3 - 1Zone-IA * S3 - RRS * S3 - IA * Glacier * S3

*S3 - 1Zone-IA* S3 One Zone-IA is the recommended storage class when you want cheaper storage for infrequently accessed objects. It has the same durability but less availability. There can be cost implications if you use it frequently or use it for short-lived storage. ----------------------------------- Glacier is cheaper, but has a long retrieval time. RRS has effectively been deprecated. It still exists, but is not a service that AWS wants to sell anymore.

*You work for a busy digital marketing company who currently store their data on premise. They are looking to migrate to AWS S3 and to store their data in buckets. Each bucket will be named after their individual customers, followed by a random series of letters and numbers. Once written to S3 the data is rarely changed, as it has already been sent to the end customer for them to use as they see fit. However on some occasions, customers may need certain files updated quickly, and this may be for work that has been done months or even years ago. You would need to be able to access this data immediately to make changes in that case, but you must also keep your storage costs extremely low. The data is not easily reproducible if lost. Which S3 storage class should you choose to minimise both costs and retrieval times?* * S3 * S3 - IA * S3 - RRS * Glacier * S3 - 1Zone-IA

*S3 - IA* The need for immediate access is an important requirement along with cost. Glacier has a long recovery time at a low cost or a shorter recovery time at a high cost, and 1Zone-IA has a lower availability level, which means that it may not be available when needed.

*You work for a genetics company that has extremely large datasets stored in S3. You need to minimize storage costs without introducing unnecessary risk or delay. Mandated restore times depend on the age of the data. Data 30-59 days old must be available immediately without delay, and data more than 60 days old must be available within 12 hours. Which of the following options below should you consider?* * S3 - IA * CloudFront * S3 - RRS * S3 - OneZone-IA * Glacier

*S3 - IA* *Glacier* You should use S3 - IA for the data that needs to be accessed immediately, and you should use Glacier for the data that must be recovered within 12 hours. ----------------------------------- RRS and 1Zone-IA would not be suitable solutions for irreplaceable data or data that requires immediate access (each introduces reduced durability or availability), and CloudFront is a CDN service, not a storage solution.

*You have a website that allows users in third world countries to store their important documents safely and securely online. Internet connectivity in these countries is unreliable, so you implement multipart uploads to improve the success rate of uploading files. Although this approach works well, you notice that when an object is not uploaded successfully, incomplete parts of that object are still being stored in S3 and you are still being charged for those parts. What S3 feature can you implement to delete incomplete multipart uploads?* * S3 Lifecycle Policies. * Have S3 trigger DataPipeline auto-delete. * S3 Reduced Redundancy Storage. * Have CloudWatch trigger a Lambda function that deletes the S3 data.

*S3 Lifecycle Policies* You can create a lifecycle policy that expires incomplete multipart uploads, allowing you to save on costs by limiting the time non-completed multipart uploads are stored.
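
A minimal boto3 sketch of such a rule (bucket name hypothetical); parts of uploads abandoned for more than 7 days are cleaned up automatically:

```python
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="my-upload-bucket",  # hypothetical
    LifecycleConfiguration={"Rules": [{
        "ID": "expire-incomplete-multipart-uploads",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},  # apply to the whole bucket
        "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
    }]},
)
```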

*What are the different storage classes that Amazon S3 offers?* (Choose all that apply.) * S3 Standard * S3 Global * S3 CloudFront * S3 US East * S3 IA

*S3 Standard* *S3 IA* S3 Global is a region and not a storage class. Amazon CloudFront is a CDN and not a storage class. US East is a region and not a storage class.

*You are developing a document management application, and the business has two key requirements. The application should be able to maintain multiple versions of the documents, and after one year the documents should be seamlessly archived. Which AWS service can serve this purpose?* * EFS * EBS * RDS * S3

*S3* S3 provides the versioning capability, and using a lifecycle policy, you can archive files to Glacier. ------------------------- If you use EFS or EBS, it is going to cost a lot, and you have to write code to get the versioning capability. You are still going to use Glacier for archiving. RDS is a relational database service and can't be used to store documents.

*Which of the below are storage services in AWS?* (Choose 2) * S3 * EFS * EC2 * VPC

*S3* *EFS* S3 and EFS both provide the ability to store files in the cloud. ------------------------- EC2 provides compute, and is often augmented with other storage services. VPC is a networking service.

*You are designing a web page for event registration. Whenever a user signs up for an event, you need to be notified immediately. Which AWS service are you going to choose for this?* * Lambda * SNS * SQS * Elastic Beanstalk

*SNS* SNS allows you to send a text message. ------------------------- Lambda lets you run code without provisioning or managing servers, SQS is a managed message queuing service, and Elastic Beanstalk helps in deploying and managing web applications in the cloud.

*A company has a workflow that sends video files from their on-premise system to AWS for transcoding. They use EC2 worker instances that pull transcoding jobs from SQS. Why is SQS an appropriate service for this scenario?* * SQS guarantees the order of the messages. * SQS synchronously provides transcoding output. * SQS checks the health of the worker instances. * SQS helps to facilitate horizontal scaling of encoding tasks.

*SQS helps to facilitate horizontal scaling of encoding tasks.* Even though SQS guarantees the order of messages for FIFO queues, the main reason for using it is that it facilitates horizontal scaling of AWS resources and is used for decoupling systems. ----------------------------------- SQS can neither be used for transcoding output nor for checking the health of worker instances. The health of worker instances can be checked via ELB or CloudWatch.

*A company has a workflow that sends video files from their on-premises system to AWS for transcoding. They use EC2 worker instances that pull transcoding jobs from SQS. As an architect you need to design how the SQS service would be used in this architecture. Which of the following is the ideal way in which the SQS service would be used?* * SQS should be used to guarantee the order of the messages. * SQS should be used to synchronously manage the transcoding output. * SQS should be used to check the health of the worker instances. * SQS should be used to facilitate horizontal scaling for encoding tasks.

*SQS should be used to facilitate horizontal scaling of encoding tasks.* Amazon Simple Queue Service (Amazon SQS) offers a secure, durable, and available hosted queue that lets you integrate and decouple distributed software systems and components. ----------------------------------- Option A is incorrect since there is no mention in the question of the order of the messages being guaranteed. Options B and C are incorrect since these are not the responsibility of the SQS queue.

*Your company has just acquired a new company, and the number of users who are going to use the database will double. The database is running on Aurora. What things can you do to handle the additional users?* (Choose two.) * Scale up the database vertically by choosing a bigger box * Use a combination of Aurora and EC2 to host the database * Create a few read replicas to handle the additional read-only traffic * Create the Aurora instance across multiple regions with a multimaster mode

*Scale up the database vertically by choosing a bigger box* *Create a few read replicas to handle the additional read-only traffic* You can't host Aurora on an EC2 server. Multimaster is not supported in Aurora.

*In addition to choosing the correct EBS volume type for your specific task, what else can be done to increase the performance of your volume?* (Choose 3) * Schedule snapshots of HDD based volumes for periods of low use * Stripe volumes together in a RAID 0 configuration. * Never use HDD volumes, always ensure that SSDs are used * Ensure that your EC2 instances are types that can be optimised for use with EBS

*Schedule snapshots of HDD based volumes for periods of low use* *Stripe volumes together in a RAID 0 configuration.* *Ensure that your EC2 instances are types that can be optimised for use with EBS* There are a number of ways you can optimise performance beyond choosing the correct EBS type. One of the easiest options is to drive more I/O throughput than you can provision for a single EBS volume by striping using RAID 0. You can join multiple gp2, io1, st1, or sc1 volumes together in a RAID 0 configuration to use the available bandwidth for these instances. You can also choose an EC2 instance type that supports EBS optimisation. This ensures that network traffic cannot contend with traffic between your instance and your EBS volumes. The final option is to manage your snapshot times, and this only applies to HDD based EBS volumes. When you create a snapshot of a Throughput Optimized HDD (st1) or Cold HDD (sc1) volume, performance may drop as far as the volume's baseline value while the snapshot is in progress. This behaviour is specific to these volume types. Therefore you should ensure that scheduled snapshots are carried out at times of low usage. The one option on the list which is entirely incorrect is the one that states "Never use HDD volumes, always ensure that SSDs are used", as the question first states "In addition to choosing the correct EBS volume type for your specific task". HDDs may well be suitable for certain tasks, and therefore they shouldn't be discounted just because they may not have the highest specification on paper.

*A client is concerned that someone other than approved administrators is trying to gain access to the Linux Web app instances in their VPC. She asks what sort of network access logging can be added. Which of the following might you recommend?* (Choose 2) * Set up a traffic logging rule on the VPC firewall appliance and direct the log to CloudWatch or S3. * Use Event Log filters to trigger alerts that are forwarded to CloudWatch. * Set up a Flow Log for the group of instances and forward them to CloudWatch. * Make use of OS-level logging tools such as iptables and log events to CloudWatch or S3.

*Set up a Flow Log for the group of instances and forward them to CloudWatch.* *Make use of OS-level logging tools such as iptables and log events to CloudWatch or S3.* Security and auditing in AWS need to be considered during the design phase.

*You want to move all the files older than a month to S3 IA. What is the best way of doing this?* * Copy all the files using the S3 copy command * Set up a lifecycle rule to move all the files to S3 IA after a month * Download the files after a month and re-upload them to another S3 bucket with IA * Copy all the files to Amazon Glacier and from Amazon Glacier copy them to S3 IA

*Set up a lifecycle rule to move all the files to S3 IA after a month.* Copying all the files using the S3 copy command is going to be a painful activity if you have millions of objects. Manually downloading and re-uploading the files, when a lifecycle rule can do the same thing automatically, does not make any sense and wastes a lot of bandwidth and manpower. Amazon Glacier is used mainly for archival storage. You should not copy anything into Amazon Glacier unless you want to archive the files.
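
A minimal boto3 sketch of such a lifecycle rule (bucket name hypothetical); objects transition to Standard-IA 30 days after creation:

```python
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="my-bucket",  # hypothetical
    LifecycleConfiguration={"Rules": [{
        "ID": "standard-to-ia-after-30-days",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},  # apply to the whole bucket
        "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
    }]},
)
```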

*You have a client who is considering a move to AWS. In establishing a new account, what is the first thing the company should do?* * Set up an account using Cloud Search. * Set up an account using their company email address. * Set up an account via SQS (Simple Queue Service). * Set up an account via SNS (Simple Notification Service)

*Set up an account using their company email address.*

*A database hosted using the AWS RDS service is getting a lot of database queries and has now become a bottleneck for the associated application. What will ensure that the database is not a performance bottleneck?* * Setup a CloudFront distribution in front of the database. * Setup an ELB in front of the database. * Setup ElastiCache in front of the database. * Setup SNS in front of the database.

*Setup ElastiCache in front of the database.* ElastiCache is an in-memory solution which can be used in front of a database to cache the common queries issued against the database. This can reduce the overall load on the database. ----------------------------------- Option A is incorrect because CloudFront is normally used for content distribution. Option B is partially correct at best, but you would need at least one more database instance for an internal load balancing solution. Option D is incorrect because SNS is a simple notification service.
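
As an illustration of the cache-aside pattern an application would use against an ElastiCache Redis endpoint, here is a minimal sketch using the redis-py client (the endpoint, key scheme, and db_lookup function are hypothetical):

```python
import json
import redis  # pip install redis

# Hypothetical ElastiCache Redis endpoint.
cache = redis.Redis(host="my-cluster.abc123.use1.cache.amazonaws.com", port=6379)

def get_product(product_id, db_lookup, ttl=300):
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)            # cache hit: the database is never touched
    row = db_lookup(product_id)              # cache miss: one query against the database
    cache.setex(key, ttl, json.dumps(row))   # store with a TTL so stale data expires
    return row
```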

*There is a requirement to host a database server. This server should not be able to connect to the Internet except while downloading required database patches. Which of the following solutions would best satisfy all the above requirements?* (Choose 2) * Setup the database in a private subnet with a security group which only allows outbound traffic. * Setup the database in a public subnet with a security group which only allows inbound traffic. * Setup the database in a local data center and use a private gateway to connect the application to the database. * Setup the database in a private subnet which connects to the Internet via a NAT Instance.

*Setup the database in a private subnet which connects to the Internet via a NAT Instance.* The configuration for this scenario includes a virtual private cloud (VPC) with a public subnet and a private subnet. We recommend this scenario if you want to run a public-facing web application, while maintaining back-end servers that aren't publicly accessible. A common example is a multi-tier website, with the web servers in a public subnet and the database servers in a private subnet. You can set up security and routing so that the web servers can communicate with the database servers.

*You have designed an application that uses AWS resources, such as S3, to operate and store users' documents. You currently use Cognito identity pools and user pools. To increase usage and ease of signing up, you decide adding social identity federation is the best path forward. When asked what the difference is between the Cognito identity pool and the federated identity providers (e.g. Google), how do you respond?* * They are the same and just called different things. * First you sign-in via Cognito then through a federated site, like Google. * Federated identity providers and identity pools are used to authorize services. * Sign-in via Cognito user pools and sign-in via federated identity providers are independent of one another.

*Sign-in via Cognito user pools and sign-in via federated identity providers are independent of one another.* Sign-in through a third party (federation) is available in Amazon Cognito user pools. This feature is independent of federation through Amazon Cognito identity pools (federated identities). ----------------------------------- Option A is incorrect, as these are separate, independent authentication methods. Option B is incorrect; only one log-in event is needed, not two. Option C is incorrect; identity providers authenticate users, they do not authorize services.

*You have a small company that is only leveraging cloud resources like AWS Workspaces and AWS Workmail. You want a fully managed solution to provide user management and to set policies. Which AWS Directory Service would you recommend?* * AWS Managed Microsoft AD for its full-blown AD features and capabilities. * AD Connector for use with on-premises applications. * AWS Cognito for its scalability and customization. * Simple AD for limited functionality and compatibility with desired applications.

*Simple AD for limited functionality and compatibility with desired applications.* Simple AD is a Microsoft Active Directory-compatible directory from AWS Directory Service. You can use Simple AD as a standalone directory in the cloud to support Windows workloads that need basic AD features, compatible AWS applications, or to support Linux workloads that need LDAP services. ----------------------------------- Option A is incorrect; this is more functionality and feature-rich than you need given the desired applications. Option B is incorrect; you don't have on-premises applications, so AD Connector is not needed. Option C is incorrect; this is more functionality and feature-rich than you need given the desired applications.

*What does S3 stand for?* * Simplified Serial Sequence * Simple SQL Service * Simple Storage Service * Straight Storage Service

*Simple Storage Service*

*You are running your MySQL database using the Amazon RDS service. You have encrypted the database and are managing the keys using AWS Key Management System (KMS). You need to create an additional copy of the database in a different region. You have taken a snapshot of the existing database, but when you try to copy the snapshot to the other region, you are not able to copy it. What could be the reason?* * You need to copy the keys to the different region first. * You don't have access to the S3 bucket. * Since the database is encrypted and the keys are tied to a region, you can't copy. * You need to reset the keys before copying.

*Since the database is encrypted and the keys are tied to a region, you can't copy.* You cannot copy the snapshots of an encrypted database to another AWS region. KMS is a regional service, so you currently cannot copy things encrypted with KMS to another region. ------------------------- You can't copy the keys to a different region. This is not an S3 permission issue. Resetting the keys won't help since you are trying to copy the snapshot across different regions.

*Your company wants to use an S3 bucket for web hosting but have several different domains perform operations on the S3 content. In the CORS configuration, you want the following origin sites: http://mysite.com, https://secure.mysite.com, and https://yoursite.com. The site, https://yoursite.com, is not being allowed access to the S3 bucket. What is the most likely cause?* * Site https://yoursite.com was not correctly added as an origin site; instead included as http://yoursite.com * HTTPS must contain a specific port on the request, e.g. https://yoursite.com:443 * There's a limit of two origin sites per S3 bucket allowed. * Adding CORS automatically removes the S3 ACL and bucket policies.

*Site https://yoursite.com was not correctly added as an origin site; instead included as http://yoursite.com* The exact syntax must be matched. In some cases, wildcards can be used to help in origin URLs. ----------------------------------- Option B is incorrect; a port is not required for an origin domain to be allowed, although it can be included. Option C is incorrect; the limit is 100 origins. Option D is incorrect; ACLs and bucket policies continue to apply when you enable CORS on the bucket. Verify that the origin header in your request matches at least one of the AllowedOrigin elements in the specified CORSRule. For example, if you set the CORSRule to allow http://www.example.com, then both the https://www.example.com and http://www.example.com:80 origins in your request don't match the allowed origin in your configuration.
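
A minimal boto3 sketch of a CORS configuration that allows all three origins exactly as written (bucket name hypothetical); note that the scheme is part of the match:

```python
import boto3

s3 = boto3.client("s3")
s3.put_bucket_cors(
    Bucket="my-web-bucket",  # hypothetical
    CORSConfiguration={"CORSRules": [{
        # Origins must match exactly, including the scheme (http vs https).
        "AllowedOrigins": [
            "http://mysite.com",
            "https://secure.mysite.com",
            "https://yoursite.com",
        ],
        "AllowedMethods": ["GET", "PUT", "POST"],
        "AllowedHeaders": ["*"],
        "MaxAgeSeconds": 3000,
    }]},
)
```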

*How many copies of data does Amazon Aurora maintain in how many availability zones?* * Four copies of data across six AZs * Three copies of data across six AZs * Six copies of data across three AZs * Six copies of data across four AZs

*Six copies of data across three AZs* Amazon Aurora maintains six copies of data across three AZs ------------------------- It is neither four copies nor three copies; it is six copies of data in three AZs.

*The local route table in the VPC allows which of the following?* * So that all the instances running in different subnets within a VPC can communicate with each other * So that only the traffic to the Internet can be routed * So that multiple VPCs can talk with each other * So that an instance can use the local route and talk to the Internet

*So that all the instances running in different subnets within a VPC can communicate with each other* The traffic to the Internet is routed via the Internet gateway. Multiple VPCs can talk to each other via VPC peering.

*You have developed a new web application in the US-West-2 Region that requires six Amazon Elastic Compute Cloud (EC2) instances to be running at all times. US-West-2 comprises three Availability Zones (us-west-2a, us-west-2b, and us-west-2c). You need 100 percent fault tolerance: should any single Availability Zone in us-west-2 become unavailable, the application must continue to run. How would you make sure 6 servers are ALWAYS available? NOTE: each answer has 2 possible deployment configurations.* (Choose 2) * Solution 1: us-west-2a with two EC2 instances, us-west-2b with two EC2 instances, and us-west-2c with two EC2 instances. Solution 2: us-west-2a with six EC2 instances, us-west-2b with six EC2 instances, and us-west-2c with no EC2 instances. * Solution 1: us-west-2a with three EC2 instances, us-west-2b with three EC2 instances, and us-west-2c with three EC2 instances. Solution 2: us-west-2a with four EC2 instances, us-west-2b with two EC2 instances, and us-west-2c with two EC2 instances. * Solution 1: us-west-2a with six EC2 instances, us-west-2b with six EC2 instances, and us-west-2c with no EC2 instances. Solution 2: us-west-2a with three EC2 instances, us-west-2b with three EC2 instances, and us-west-2c with three EC2 instances. * Solution 1: us-west-2a with three EC2 instances, us-west-2b with three EC2 instances, and us-west-2c with no EC2 instances. Solution 2: us-west-2a with three EC2 instances, us-west-2b with three EC2 instances, and us-west-2c with three EC2 instances.

*Solution 1: us-west-2a with six EC2 instances, us-west-2b with six EC2 instances, and us-west-2c with no EC2 instances.* *Solution 2: us-west-2a with three EC2 instances, us-west-2b with three EC2 instances, and us-west-2c with three EC2 instances.* You need to work through each case to find which will provide the required number of running instances even if one AZ is lost. Hint: always assume that the AZ you lose is the one with the most instances. For example, with 6/6/0, losing either populated AZ still leaves six instances running, and with 3/3/3, losing any one AZ leaves 3 + 3 = 6 instances running. Remember that the client has stipulated that they MUST have 100% fault tolerance.

*You have a video transcoding application running on Amazon EC2. Each instance polls a queue to find out which video should be transcoded, and then runs a transcoding process. If this process is interrupted, the video will be transcoded by another instance based on the queuing system. You have a large backlog of videos which need to be transcoded and would like to reduce this backlog by adding more instances. You will need these instances only until the backlog is reduced. Which type of Amazon EC2 instances should you use to reduce the backlog in the most cost efficient way?* * Reserved instances * Spot instances * Dedicated instances * On-demand instances

*Spot Instances* Since the above scenario is similar to a batch processing job, the best instance type to use is a Spot Instance. Spot Instances are normally used in batch processing jobs. Since these jobs don't last for an entire year, capacity can be bid upon and allocated and de-allocated as required. Reserved Instances/Dedicated Instances cannot be used since this is not a 100% utilized application. There is no mention of continuous demand in the above scenario, hence there is no need to use On-Demand Instances. *Note:* 1) If you read the question again, it states 'These instances will only be needed until the backlog is reduced' and 'If this process is *interrupted*, the video gets transcoded by another instance based on the queuing system'. So if your application or system is fault tolerant, Spot Instances can be used. 2) The question also says 'reduce this backlog by adding more instances', which means the application does not fully depend on the Spot Instances; they are used only for reducing the backlog. 3) We have to select the most cost-effective solution. Based on the first point, we conclude that the system is fault tolerant (interruption is acceptable).

*If you want your request to go to the same instance to get the benefits of caching the content, what technology can help provide that objective?* * Sticky session * Using multiple AZs * Cross-zone load balancing * Using one ELB per instance

*Sticky session* Using multiple AZs distributes your load across multiple AZs, but it can't direct a request to the same instance. Cross-zone load balancing spreads requests across instances in every AZ, which works against per-instance caching rather than helping it. Using one ELB per instance is going to complicate things.

*You are hosting your application in your own data center. The application is hosted on a server that is connected to a SAN, which provides the block storage. You want to back up this data to AWS, and at the same time you would also like to retain your frequently accessed data locally. Which option should you choose that can help you to back up the data in the most resilient way?* * Storage Gateway file gateway * Storage Gateway volume gateway in cached mode * Storage Gateway volume gateway in stored mode * Back up the file directly to S3

*Storage Gateway volume gateway in cached mode* Using the volume gateway in cached mode, you can store your primary data in Amazon S3 and retain your frequently accessed data locally. ------------------------- Since you are using block storage via a SAN on-premises, a file gateway is not the right use case. Using the AWS Storage Gateway volume gateway in stored mode will store everything locally, which is going to increase the cost. You can back up the files directly to S3, but then how are you going to access the frequently used data locally?

*You've been tasked with the implementation of an offsite backup/DR solution. You'll be responsible only for flat files and server backup. Which of the following would you include in your proposed solution?* * EC2 * Storage Gateway * Snowball * S3

*Storage Gateway* *Snowball* *S3* EC2 is a compute service and is not directly applicable to providing backups. All of the others could be part of a comprehensive backup/DR solution.

*A company has a sales team and each member of this team uploads their sales figures daily. A Solutions Architect needs a durable storage solution for these documents and also a way to preserve documents from accidental deletions. What among the following choices would deliver protection against unintended user actions?* * Store data in an EBS Volume and create snapshots once a week. * Store data in an S3 bucket and enable versioning. * Store data in two S3 buckets in different AWS regions. * Store data on EC2 Instance storage.

*Store data in an S3 bucket and enable versioning.* Amazon S3 has an option for versioning. Versioning is set at the bucket level and can be used to recover prior versions of an object.
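
A minimal boto3 sketch of enabling versioning, assuming a hypothetical bucket name:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket name; versioning is enabled per bucket, after which
# overwrites and deletes create new versions instead of destroying data.
s3.put_bucket_versioning(
    Bucket="sales-figures",
    VersioningConfiguration={"Status": "Enabled"},
)
```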

*A company needs to store images that are uploaded by users via a mobile application. There is also a need to ensure that a security measure is in place to avoid the data loss. What step should be taken for protection against unintended user actions?* * Store data in an EBS volume and create snapshots once a week. * Store data in an S3 bucket and enable versioning. * Store data on Amazon EFS storage. * Store data on EC2 instance storage.

*Store data in an S3 bucket and enable versioning.* Versioning is set at the bucket level and can be used to recover prior versions of an object. ----------------------------------- Option A is invalid as it does not offer protection against accidental deletion of files. Option C is invalid because EFS offers no protection against unintended deletions either, and the file system can be accessed by multiple EC2 instances. Option D is invalid because EC2 instance storage is ephemeral.

*A company is planning on storing their files from their on-premises location onto the Simple Storage service. After a period of 3 months, they want to archive the files, since they would be rarely used. Which of the following would be the right way to service this requirement?* * Use an EC2 instance with EBS volumes. After a period of 3 months, keep on taking snapshots of the data. * Store the data on S3 and then use Lifecycle policies to transfer the data to Amazon Glacier. * Store the data on Amazon Glacier and then use Lifecycle policies to transfer the data to Amazon S3. * Use an EC2 instance with EBS volumes. After a period of 3 months, keep on taking copies of the volume using Cold HDD volume type.

*Store the data on S3 and then use Lifecycle policies to transfer the data to Amazon Glacier* To manage your objects so that they are stored cost effectively throughout their lifecycle, configure their lifecycle. A lifecycle configuration is a set of rules that define actions that Amazon S3 applies to a group of objects. There are two types of actions: *Transition actions* - Define when objects transition to another storage class. *Expiration actions* - Define when objects expire. Amazon S3 deletes expired objects on your behalf. ----------------------------------- Options A and D are incorrect since EBS volumes are not the right storage option for this sort of requirement. Option C is incorrect since the files should be initially stored in S3.
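
A minimal boto3 sketch of such a lifecycle rule, assuming a hypothetical bucket and a 90-day (roughly 3-month) transition:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket; transition all objects to Glacier 90 days after
# creation. An Expiration action could be added to delete them later.
s3.put_bucket_lifecycle_configuration(
    Bucket="company-files",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-after-3-months",
                "Filter": {"Prefix": ""},  # apply to the whole bucket
                "Status": "Enabled",
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```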

*Your application is uploading several thousands of files every day. The file sizes range from 500MB to 2GB. Each file is then processed to extract metadata, and the metadata processing takes a few seconds. There is no fixed uploading frequency. Sometimes a lot of files are uploaded in particular hour, sometimes only a few files are uploaded in particular hour, and sometimes there is no upload happening over a period of a few hours. What is the most cost-effective way to handle this issue?* * Use an SQS queue to store the file and then use an EC2 instance to extract the metadata. * Use EFS to store the file and then use multiple EC2 instances to extract the metadata. * Store the file in Amazon S3 and then use S3 event notification to invoke an AWS Lambda function for extracting the metadata. * Use Amazon Kinesis to store the file and then use AWS Lambda to extract the metadata.

*Store the file in Amazon S3 and then use S3 event notification to invoke an AWS Lambda function for extracting the metadata* Since you are looking for cost savings, the most cost-effective way would be to upload the files to S3 and then invoke an AWS Lambda function for metadata extraction. ------------------------- The other options will all result in a higher cost compared to option C. Technically you can use any of them to solve the problem, but they won't serve the key criterion of cost optimization that you are looking for.
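
A hedged boto3 sketch of wiring the S3 event notification to a Lambda function; the bucket name and function ARN are hypothetical, and the function must already grant S3 permission to invoke it:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and Lambda ARN; fire the function on every new object.
s3.put_bucket_notification_configuration(
    Bucket="upload-bucket",
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:extract-metadata",
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    },
)
```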

*An application allows a manufacturing site to upload files. Each uploaded 3 GB file is processed to extract metadata, and this process takes a few seconds per file. The frequency at which the uploads happen is unpredictable. For instance, there may be no updates for hours, followed by several files being uploaded concurrently. What architecture addresses this workload in the most cost-efficient manner?* * Use a Kinesis Data Delivery Stream to store the file. Use Lambda for processing. * Use an SQS queue to store the file, to be accessed by a fleet of EC2 Instances. * Store the file in an EBS volume, which can be then accessed by another EC2 Instance for processing. * Store the file in an S3 bucket. Use Amazon S3 event notification to invoke a Lambda function for file processing.

*Store the file in an S3 bucket. Use Amazon S3 event notification to invoke a Lambda function for file processing.* You can create a Lambda function with the code to process the file. You can then use an Event Notification from the S3 bucket to invoke the Lambda function whenever a file is uploaded. ----------------------------------- Option A is incorrect. Kinesis is used to collect, process, and analyze real-time data. The frequency of updates is quite unpredictable. By default, SQS uses short polling, which in this case drives the cost up, since messages arrive unpredictably and many polls would return empty responses. Hence option B is not a solution.
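
A minimal sketch of what such a Lambda handler could look like; the metadata "extraction" here is simplified to reading the object's headers, as a stand-in for real processing:

```python
import urllib.parse

import boto3

s3 = boto3.client("s3")


def handler(event, context):
    """Invoked by the S3 event notification for each uploaded file."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        # Object keys arrive URL-encoded in S3 event records.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        # Stand-in for metadata extraction: inspect the object's headers.
        head = s3.head_object(Bucket=bucket, Key=key)
        print(f"{key}: {head['ContentLength']} bytes, type {head['ContentType']}")
```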

*A company wants to store their documents in AWS. Initially, these documents will be used frequently, and after a duration of 6 months, they will need to be archived. How would you architect this requirement?* * Store the files in Amazon EBS and create a Lifecycle Policy to remove the files after 6 months. * Store the files in Amazon S3 and create a Lifecycle Policy to archive the files after 6 months. * Store the files in Amazon Glacier and create a Lifecycle Policy to remove the files after 6 months. * Store the files in Amazon EFS and create a Lifecycle Policy to remove the files after 6 months.

*Store the files in Amazon S3 and create a Lifecycle Policy to archive the files after 6 months.* Lifecycle configuration enables you to specify the lifecycle management of objects in a bucket. The configuration is a set of one or more rules, where each rule defines an action for Amazon S3 to apply to a group of objects. *Transition actions* - In which you define when objects transition to another storage class. For example, you may choose to transition objects to the STANDARD_IA (Infrequent Access) storage class 30 days after creation, or archive objects to the GLACIER storage class one year after creation. *Expiration actions* - In which you specify when the objects expire. Amazon S3 deletes the expired objects on your behalf.

*You are designing a media-streaming application, and you need to store hundreds of thousands of videos. Each video will have multiple files associated with it for storing the different resolutions (480p, 720p, 1080p, 4K, and so on). The videos need to be stored in durable storage.* * Store the main video in S3 and the different resolution files in Glacier. * Store the main video in EBS and the different resolution files in S3. * Store the main video in EFS and the different resolution files in S3. * Store the main video in S3 and the different resolution files in S3-IA.

*Store the main video in S3 and the different resolution files in S3-IA* S3 provides 99.999999999 percent durability, so that is the best choice. ------------------------- If you store the files in EBS or EFS, the cost is going to be very high. You can't store these files in Glacier since it is an archival solution.

*A Solutions Architect is designing a highly scalable system to track records. These records must remain available for immediate download for up to three months and then be deleted. What is the most appropriate decision for this use case?* * Store the file in Amazon EBS and create a Lifecycle Policy to remove files after 3 months. * Store the file in Amazon S3 and create a Lifecycle Policy to remove files after 3 months. * Store the files in Amazon Glacier and create a Lifecycle Policy to remove files after 3 months. * Store the files in Amazon EFS and create a Lifecycle Policy to remove files after 3 months.

*Store the files in Amazon S3 and create a Lifecycle Policy to remove files after 3 months.* Lifecycle configuration enables you to specify the lifecycle management of objects in a bucket. The configuration is a set of one or more rules, where each rule defines an action for Amazon S3 to apply to a group of objects. These actions can be classified as follows: *Transition actions* - In which you define when objects transition to another storage class. For example, you may choose to transition objects to the STANDARD_IA (IA, for infrequent access) storage class 30 days after creation, or archive objects to the GLACIER storage class one year after creation. *Expiration actions* - In which you specify when the objects expire. Then Amazon S3 deletes the expired objects on your behalf. ----------------------------------- Option A is invalid since the records need to be stored in a highly scalable system. Option C is invalid since the records must be available for immediate download. Option D is invalid because it does not have the concept of a Lifecycle Policy.

*What is the main purpose of Amazon Glacier?* (Choose all that apply.) * Storing hot, frequently used data * Storing archival data * Storing historical or infrequently accessed data * Storing the static content of a web site * Creating a cross-region replication bucket for Amazon S3

*Storing archival data* *Storing historical or infrequently accessed data* Hot and frequently used data needs to be stored in Amazon S3; you can also use Amazon CloudFront to cache the frequently used data. ----------------------------------- Amazon Glacier is used to store the archive copies of the data or historical data or infrequent data. You can make lifecycle rules to move all the infrequently accessed data to Amazon Glacier. The static content of the web site can be stored in Amazon CloudFront in conjunction with Amazon S3. You can't use Amazon Glacier for a cross-region replication bucket of Amazon S3; however, you can use S3 IA or S3 RRS in addition to S3 Standard as a replication bucket for CRR.

*To help you manage your Amazon EC2 instances, you can assign your own metadata in the form of ________.* * Certificates * Notes * Wildcards * Tags

*Tags* Tagging is a key part of managing an environment. Even in a lab, it is easy to lose track of the purpose of a resource, and tricky to determine why it was created and whether it is still needed. This can rapidly translate into lost time and lost money.
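
A minimal boto3 sketch of tagging an instance; the instance ID and tag values are hypothetical:

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical instance ID; tags are simple key/value metadata that make
# resources searchable and cost-allocatable.
ec2.create_tags(
    Resources=["i-0123456789abcdef0"],
    Tags=[
        {"Key": "Name", "Value": "lab-web-server"},
        {"Key": "Owner", "Value": "platform-team"},
        {"Key": "Environment", "Value": "lab"},
    ],
)
```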

*You want to create a copy of the existing Redshift cluster in a different region. What is the fastest way of doing this?* * Export all the data to S3, enable cross-region replication in S3, and load the data into a new Redshift cluster from S3 in a different region. * Take a snapshot of Redshift in a different region by enabling cross-region snapshots and create a cluster using that. * Use the Database Migration Service. * Encrypt the Redshift cluster.

*Take a snapshot of Redshift in a different region by enabling cross-region snapshots and create a cluster using that.* Redshift allows you to configure a cross-regional snapshot via which you can clone the existing Redshift cluster to a different region. ------------------------- If you export all the data to S3, move the data to a different region, and then again load the data to Redshift in a different region, the whole process will take a lot of time, and you are looking for the fastest way. You can't use the Database Migration Service for doing this. If you encrypt a Redshift cluster, it is going to provide encryption but has nothing to do with cloning to a different region.
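
A hedged boto3 sketch of enabling cross-region snapshot copy; the cluster identifier and regions are hypothetical, and automated snapshots must already be enabled:

```python
import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

# Hypothetical cluster; snapshots taken in us-east-1 will be copied to
# us-west-2 and retained there for 7 days.
redshift.enable_snapshot_copy(
    ClusterIdentifier="analytics-cluster",
    DestinationRegion="us-west-2",
    RetentionPeriod=7,
)
```

A new cluster can then be restored in the destination region from one of the copied snapshots.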

*A company has a set of EC2 Instances that store critical data on EBS Volumes. There is a fear from IT Supervisors that if data on the EBS Volumes is lost, then it could result in a lot of effort to recover the data from other sources. Which of the following would help alleviate this concern in an economical way?* * Take regular EBS Snapshots. * Enable EBS Volume Encryption. * Create a script to copy data to an EC2 Instance Store. * Mirror data across 2 EBS Volumes.

*Take regular EBS Snapshots.* You can back up the data on your Amazon EBS Volumes to Amazon S3 by taking point-in-time snapshots. Snapshots are incremental backups, which means that only the blocks on the device that have changed after your most recent snapshot are saved. This minimizes the time required to create the snapshot and saves on storage costs by not duplicating data. When you delete a snapshot, only the data unique to that snapshot is removed. Each snapshot contains all the information needed to restore your data (from the moment when the snapshot was taken) to a new EBS volume. ----------------------------------- Option B is incorrect because it does not help the durability of EBS Volumes. Option C is incorrect since EC2 Instance stores are not durable. Option D is incorrect since mirroring data across EBS Volumes is inefficient when you already have the option of EBS Snapshots.
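
A minimal boto3 sketch of taking a snapshot; the volume ID is hypothetical:

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical volume ID; snapshots are incremental and stored in S3.
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="Nightly backup of critical data volume",
)
print(snapshot["SnapshotId"])  # restore by creating a volume from this ID
```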

*Your application is hosted on EC2 instances, and all the data is stored in an EBS volume. The EBS volumes must be durably backed up across multiple AZs. What is the most resilient way to back up the EBS volumes?* * Encrypt the EBS volume. * Take regular EBS snapshots. * Mirror data across two EBS volumes by using RAID. * Write an AWS Lambda function to copy all the data from EBS to S3 regularly.

*Take regular EBS snapshots* By using snapshots, you can back up the EBS volume, and you can create the snapshot in a different AZ. ------------------------- A is incorrect because encrypting the volume is different from backing up the volume. Even if you encrypt the volume, you still need to take a backup of it. C is incorrect because even if you mirror the data across two EBS volumes by using RAID, you will have high availability of the data but not a backup. Remember, the backup has to be across AZs. If you use RAID and provide high availability to your EBS volumes, that will still be within the same AZ since EBS volumes can't be mounted across AZs. D is incorrect because although you can back up all the data from an EBS volume to S3, that is not backing up the EBS volume. Backing up the volume means that if your primary volume goes bad or goes down, you should be able to quickly mount the volume from a backup. If you have a snapshot of the EBS volume, you can quickly mount it and have all the data in it. If you back up the data to S3, you need to create a new volume and then copy all the data from S3 to the EBS volume.

*An application currently stores all its data on Amazon EBS Volumes. All EBS Volumes must be backed up durably across multiple Availability Zones. What is the MOST resilient and cost-effective way to back up volumes?* * Take regular EBS snapshots. * Enable EBS volume encryption. * Create a script to copy data to an EC2 Instance store. * Mirror data across 2 EBS volumes.

*Take regular EBS snapshots.* You can back up the data on the Amazon EBS volumes to Amazon S3 by taking point-in-time snapshots. Snapshots are incremental backups, which means that only the blocks on the device that have changed after your most recent snapshot are saved. This minimizes the time required to create the snapshot and saves on storage costs by not duplicating data. When you delete a snapshot, only the data unique to that snapshot is removed. Each snapshot contains all the information needed to restore your data (from the moment when the snapshot was taken) to a new EBS volume. ----------------------------------- Option B is incorrect because it does not help the durability of EBS Volumes. Option C is incorrect since EC2 Instance stores are not durable. Option D is incorrect since mirroring data across EBS volumes is inefficient compared with the existing option of EBS snapshots.

*You are running your critical application on EC2 servers and using EBS volumes to store the data. The data is important to you, and you need to make sure you can recover the data if something happens to the EBS volume. What should you do to make sure you are able to recover the data all the time?* * Write a script that can copy the data to S3. * Take regular snapshots for the EBS volume. * Install a Kinesis agent on an EC2 server that can back up all the data to a different volume. * Use an EBS volume with PIOPS.

*Take regular snapshots for the EBS volume.* By taking regular snapshots, you can back up the EBS volumes. The snapshots are stored in S3, and they are always incremental, which means only the changed blocks will be saved in the next snapshot. ------------------------- You can write a script to copy all the data to S3, but when you already have the solution of snapshots available, then why reinvent the wheel? Using a Kinesis agent for backing up the EBS volume is not the right use case for Kinesis. It is used for ingesting, streaming, or batching data. An EBS volume with PIOPS is going to provide great performance, but the question asks for a solution for backing up.

*What happens to the EIP address when you stop and start an instance?* * The EIP is released to the pool and you need to re-attach. * The EIP is released temporarily during the stop and start. * The EIP remains associated with the instance. * The EIP is available for any other customer.

*The EIP remains associated with the instance.* Even during the stop and start of the instance, the EIP is associated with the instance. It gets detached when you explicitly terminate an instance.

*What happens when the Elastic Load Balancing fails the health check?* (Choose the best answer.) * The Elastic Load Balancing fails over to a different load balancer. * The Elastic Load Balancing keeps on trying until the instance comes back online. * The Elastic Load Balancing cuts off the traffic to that instance and starts a new instance. * The load balancer starts a bigger instance.

*The Elastic Load Balancing cuts off the traffic to that instance and starts a new instance.* When Elastic Load Balancing fails over, it is an internal mechanism that is transparent to end users. Elastic Load Balancing keeps on trying, but if the instance does not come back online, it starts a new instance. It does not wait indefinitely for that instance to come back online. The load balancer starts the new instance, which is defined in the launch configuration. It is going to start the same type of instance unless you have manually changed the launch configuration to start a bigger type of instance.

*An instance is launched into a VPC subnet with the network ACL configured to allow all outbound traffic and deny all inbound traffic. The instance's security group is configured to allow SSH from any IP address. What changes need to be made to allow SSH access to the instance?* * The Outbound Security Group needs to be modified to allow outbound traffic. * The Inbound Network ACL needs to be modified to allow inbound traffic. * Nothing, it can be accessed from any IP address using SSH. * Both the Outbound Security Group and Outbound Network ACL need to be modified to allow outbound traffic.

*The Inbound Network ACL needs to be modified to allow inbound traffic.* The reason why the Network ACL has to have both an Allow for Inbound and Outbound is that network ACLs are stateless. Responses to allowed inbound traffic are subject to the rules for outbound traffic (and vice versa). Security groups, by contrast, are stateful: if an incoming request is allowed, the response to it is allowed out by default. ----------------------------------- Options A and D are invalid because Security Groups are stateful; any traffic allowed in by an inbound rule is automatically allowed back out. Option C is incorrect.
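
A hedged boto3 sketch of the fix, adding an inbound NACL rule for SSH; the ACL ID is hypothetical. Because NACLs are stateless, the SSH responses rely on the already-permissive outbound rules in the scenario:

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical network ACL ID; allow inbound SSH (TCP 22) from anywhere.
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",
    RuleNumber=100,
    Protocol="6",            # 6 = TCP
    RuleAction="allow",
    Egress=False,            # inbound rule
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 22, "To": 22},
)
```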

*An application consists of the following architecture: a. EC2 Instances in multiple AZ's behind an ELB b. The EC2 Instances are launched via an Auto Scaling Group. c. There is a NAT instance which is used so that instances can download updates from the Internet. Which of the following is a bottleneck in the architecture?* * The EC2 Instances * The ELB * The NAT Instance * The Auto Scaling Group

*The NAT Instance* Since there is only one NAT instance, this is a bottleneck for the architecture. For high availability, launch NAT instances in multiple Availability Zones and make it part of the Auto Scaling group.

*An application consists of EC2 Instances placed in different Availability Zones. The EC2 Instances sit behind an application load balancer. The EC2 Instances are managed via an Auto Scaling Group. There is a NAT Instance which is used for the EC2 Instances to download updates from the Internet. Which of the following is a bottleneck in the architecture?* * The EC2 Instances * The ELB * The NAT Instance * The Auto Scaling Group

*The NAT Instance* Since there is only one NAT instance, this is a bottleneck for the architecture. For high availability, launch NAT instances in multiple Availability Zones and make it a part of an Auto Scaling Group.

*Your company has been hosting a static website in an S3 bucket for several months and gets a fair amount of traffic. Now you want your registered .com domain to serve content from the bucket. Your domain is reached via https://www.myfavoritedomain.com. However, any traffic requested through https://www.myfavoritedomain.com is not getting through. What is the most likely cause of this disruption?* * The new domain is not registered in CloudWatch monitoring. * The S3 bucket has not been configured to allow Cross Origin Resource Sharing (CORS) * The S3 bucket was not created in the correct region. * https://www.myfavoritedomain.com wasn't registered with AWS Route 53 and therefore won't work.

*The S3 bucket has not been configured to allow Cross Origin Resource Sharing (CORS)* In order to keep your content safe, your web browser implements something called the same-origin policy. The default policy ensures that scripts and other active content loaded from one site or domain cannot interfere or interact with content from another location without an explicit indication that this is desired behavior. ----------------------------------- Option A is incorrect; enabling CloudWatch doesn't affect Cross Origin Resource Sharing (CORS). Option C is incorrect; the bucket's region does not cause this behavior. Option D is incorrect; the domain can be registered with any online registrar, not just AWS Route 53.

*Which of the below are factors that have helped make public cloud so powerful?* (Choose 2) * Traditional methods that are used for on-premise infrastructure work just as well in cloud * No special skills required * The ability to try out new ideas and experiment without upfront commitment * Not having to deal with the collateral damage of failed experiments

*The ability to try out new ideas and experiment without upfront commitment.* *Not having to deal with the collateral damage of failed experiments.* Public cloud allows organisations to try out new ideas and new approaches, and to experiment with little upfront commitment. ----------------------------------- If it doesn't work out, organisations have the ability to terminate the resources and stop paying for them.

*Your data warehousing company has a number of different RDS instances. You have a medium-size instance with automated backups switched on and a retention period of 1 week. One of your staff carelessly deletes this database. Which of the following apply?* (Choose 2) * The automated backups will be retained for 2 weeks and then deleted after the 2 weeks have expired. * The automatic backups are deleted when the instance is deleted. * A final snapshot will be created automatically upon deletion. * A final snapshot MAY have been created when the instance was deleted, depending on whether the 'SkipFinalSnapshot' parameter was set to 'False'.

*The automatic backups are deleted when the instance is deleted.* *A final snapshot MAY have been created when the instance was deleted, depending on whether the 'SkipFinalSnapshot' parameter was set to 'False'.* Under normal circumstances, all automatic backups of an RDS instance are deleted upon termination. However, it is possible to create a final DB Snapshot upon deletion. If you do, you can use this DB Snapshot to restore the deleted DB Instance at a later date. Amazon RDS retains this final user-created DB Snapshot along with all other manually created DB Snapshots after the DB Instance is deleted.

*A company has a lot of data hosted on their On-premises infrastructure. Running out of storage space, the company wants a quick win solution using AWS. Which of the following would allow easy extension of their data infrastructure to AWS?* * The company could start using Gateway Cached Volumes. * The company could start using Gateway Stored Volumes. * The company could start using the Simple Storage Service. * The company could start using Amazon Glacier.

*The company could start using Gateway Cached Volumes.* The Volume Gateway with cached volumes can be used to start storing data in S3. You store your data in Amazon Simple Storage Service (Amazon S3) and retain a copy of frequently accessed data subsets locally. Cached volumes offer substantial cost savings on primary storage and minimize the need to scale your storage on-premises. You also retain low-latency access to your frequently accessed data. *Note:* The question states that they are running out of storage space and need a solution to store data with AWS rather than a backup. For this purpose, gateway cached volumes are appropriate: they help the company avoid scaling its on-premises data center, store data on an AWS storage service, and keep the most recently used files available locally at low latency. This is the difference between cached and stored volumes: *Cached volumes* - You store your data in S3 and retain a copy of frequently accessed data subsets locally. Cached volumes offer substantial cost savings on primary storage and minimize the need to scale your storage on-premises. You also retain low-latency access to your frequently accessed data. *Stored volumes* - If you need low-latency access to your entire data set, first configure your on-premises gateway to store all your data locally. Then asynchronously back up point-in-time snapshots of this data to Amazon S3. This configuration provides durable and inexpensive offsite backups that you can recover to your local data center or Amazon EC2. For example, if you need replacement capacity for disaster recovery, you can recover the backups to Amazon EC2. Option C talks about the data store itself, but the requirement here is how to extend the on-premises data infrastructure to the cloud, and Option C does not describe a process for doing that; storage gateways do.

*Amazon S3 provides 99.999999999 percent durability. Which of the following are true statements?* (Choose all that apply.) * The data is mirrored across multiple AZs within a region. * The data is mirrored across multiple regions to provide the durability SLA. * The data in Amazon S3 Standard is designed to handle the concurrent loss of two facilities. * The data is regularly backed up to AWS Snowball to provide the durability SLA. * The data is automatically mirrored to Amazon Glacier to achieve high availability.

*The data is mirrored across multiple AZs within a region.* *The data in Amazon S3 Standard is designed to handle the concurrent loss of two facilities.* Data stored in an S3 bucket remains within the region where the bucket was created unless you manually move it to a different region. Amazon does not back up data residing in S3 anywhere else, since the data is automatically mirrored across multiple facilities; however, customers can replicate the data to a different region for additional safety. AWS Snowball is used to migrate on-premises data to S3. Amazon Glacier is the archival storage of S3, and an automatic mirror of regular Amazon S3 data does not make sense. However, you can write lifecycle rules to move historical data from Amazon S3 to Amazon Glacier.

*What are the characteristics of AMIs that are backed by the instance store?* (Choose two.) * The data persists even after an instance reboot. * The data is lost when the instance is shut down. * The data persists when the instance is shut down. * The data persists when the instance is terminated.

*The data persists even after an instance reboot.* *The data is lost when the instance is shut down.* If an AMI is backed by an instance store, you lose all the data if the instance is shut down or terminated. However, the data persists if the instance is rebooted.

*You currently manage a set of web servers with public IP addresses. These IP addresses are mapped to domain names. There was an urgent maintenance activity that had to be carried out on the servers, and the servers had to be stopped and restarted. Now the web application hosted on these EC2 Instances is not accessible via the domain names configured earlier. Which of the following could be a reason for this?* * The Route 53 hosted zone needs to be restarted. * The network interfaces need to be initialized again. * The public IP addresses need to be associated with the ENI again. * The public IP addresses have changed after the instance was stopped and started.

*The public IP addresses have changed after the instance was stopped and started.* By default, the public IP address of an EC2 Instance is released after the instance is stopped and started. Hence, the earlier IP address which was mapped to the domain names would have become invalid now.

*You are using CloudWatch to generate the metrics for your application. You have enabled a one-minute metric and are using the CloudWatch logs to monitor the metrics. You see some slowness in performance in the last two weeks and realize that you made some application changes three weeks back. You want to look at the data to see how the CPU utilization was three weeks back, but when you try looking at the logs, you are not able to see any data. What could be the reason for that?* * You have accidentally deleted all the logs. * The retention for CloudWatch logs is two weeks. * The retention for CloudWatch logs is 15 days. * You don't have access to the CloudWatch logs anymore.

*The retention for CloudWatch logs is 15 days.* For the one-minute data point, the retention is 15 days; thus, you are not able to see any data older than that. ----------------------------------- Since there is no data, you are not able to see the metrics. This is not an access issue, and you made application changes three weeks back, not AWS infrastructure changes.

*The data across the EBS volume is mirrored across which of the following?* * Multiple AZs * Multiple regions * The same AZ * EFS volumes mounted to EC2 instances

*The same AZ* Data stored in Amazon EBS volumes is redundantly stored in multiple physical locations in the same AZ. Amazon EBS replication is stored within the same availability zone, not across multiple zones.

*Using the shared security model, the customer is responsible for which of the following?* (Choose two.) * The security of the data running inside the database hosted in EC2 * Maintaining the physical security of the data center * Making sure the hypervisor is patched correctly * Making sure the operating system is patched correctly

*The security of the data running inside the database hosted in EC2* *Making sure the operating system is patched correctly* The customer is responsible for the security of anything running on the hypervisor, and therefore the operating system and the security of data are the customer's responsibility.

*To set up cross-region replication, which statements are true?* (Choose all that apply.) * The source and target bucket should be in the same region. * The source and target bucket should be in different regions. * You must choose different storage classes across different regions. * You need to enable versioning and must have an IAM policy in place to replicate. * You must have at least ten files in a bucket.

*The source and target bucket should be in different regions.* *You need to enable versioning and must have an IAM policy in place to replicate.* Cross-region replication can't be used to replicate objects within the same region. However, you can use the S3 copy command or copy the files from the console to move objects from one bucket to another in the same region. You can choose a different storage class for CRR; however, this is not mandatory, and you can use the same storage class as the source bucket. There is no minimum number of files required to enable cross-region replication; you can even use CRR when there is only one file in an Amazon S3 bucket.
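
A minimal boto3 sketch of a replication configuration; the bucket names and IAM role ARN are hypothetical, and versioning must already be enabled on both buckets:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical buckets and role; the role must allow S3 to read the source
# bucket and replicate into the destination bucket.
s3.put_bucket_replication(
    Bucket="source-bucket-us-east-1",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-crr-role",
        "Rules": [
            {
                "ID": "replicate-everything",
                "Prefix": "",          # whole bucket
                "Status": "Enabled",
                "Destination": {
                    "Bucket": "arn:aws:s3:::target-bucket-eu-west-1",
                    # A different StorageClass could be set here, but it
                    # is optional, matching the explanation above.
                },
            }
        ],
    },
)
```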

*Which of the following statements are true for Amazon Aurora?* (Choose three.) * The storage is replicated at three different AZs. * The data is copied at six different places. * It uses a quorum-based system for reads and writes. * Aurora supports all the commercial databases.

*The storage is replicated at three different AZs.* *The data is copied at six different places.* *It uses a quorum-based system for reads and writes.* Amazon Aurora supports only MySQL and PostgreSQL. It does not support commercial databases.

*You need to take a snapshot of an EBS volume. How long will the volume remain unavailable?* * The volume will be available immediately. * An EBS magnetic drive will take more time than SSD volumes. * It depends on the size of the EBS volume. * It depends on the actual data stored in the EBS volume.

*The volume will be available immediately.* The volumes are available irrespective of the time it takes to take the snapshot.

*How many EC2 instances can you have in an Auto Scaling group?* * 10. * 20. * 100. * There is no limit to the number of EC2 instances you can have in the Auto Scaling group

*There is no limit to the number of EC2 instances you can have in the Auto Scaling group* There is no limit to the number of EC2 instances you can have in an Auto Scaling group. However, there might be an EC2 instance limit on your account, which can be increased by logging a support ticket.

*Your app uses AWS Cognito Identity for authentication and stores user profiles in a User Pool. To expand the availability and ease of signing in to the app, your team is requesting advice on allowing the use of OpenID Connect (OIDC) identity providers as additional means of authenticating users and saving the user profile information. What is your recommendation on OIDC identity providers?* * This is supported, along with social and SAML based identity providers. * This is not supported, only social identity providers can be integrated into User Pools. * If you want OIDC identity providers, then you must include SAML and social based supports as well. * It's too much effort to add non-Cognito authenticated user information to a User Pool.

*This is supported, along with social and SAML based identity providers.* OpenID Connect (OIDC) identity providers (IdPs) (like Salesforce or Ping Identity) are supported in Cognito, along with social and SAML based identity providers. You can add an OIDC IdP to your pool in the AWS Management Console, with the AWS CLI, or by using the user pool API method CreateIdentityProvider. ----------------------------------- Option B is incorrect; Cognito supports more than just social identity providers, including OIDC, SAML, and its own identity pools. Option C is incorrect; you can add any combination of federated types, you don't have to add them all. Option D is incorrect; while there is additional coding involved, the effort is most likely not too great to add the feature.
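
A hedged boto3 sketch of registering an OIDC IdP with a user pool via CreateIdentityProvider; the pool ID, issuer, and client credentials are all hypothetical:

```python
import boto3

cognito = boto3.client("cognito-idp")

# Hypothetical pool ID and OIDC issuer; maps the provider's email claim
# onto the user pool's email attribute.
cognito.create_identity_provider(
    UserPoolId="us-east-1_EXAMPLE",
    ProviderName="MyOidcProvider",
    ProviderType="OIDC",
    ProviderDetails={
        "client_id": "example-client-id",
        "client_secret": "example-client-secret",
        "attributes_request_method": "GET",
        "oidc_issuer": "https://idp.example.com",
        "authorize_scopes": "openid email",
    },
    AttributeMapping={"email": "email"},
)
```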

*Which of the following provide the lowest cost EBS options?* (Choose 2) * Throughput Optimized (st1) * Provisioned IOPS (io1) * General Purpose (gp2) * Cold (sc1)

*Throughput Optimized (st1), Cold (sc1)* Of all the EBS types, both current and previous generation, HDD-based volumes will always be less expensive than SSD types. Therefore, of the options available in the question, the Cold (sc1) and Throughput Optimized (st1) types are HDD based and will be the lowest cost options.

*True or False: There is a limit to the number of domain names that you can manage using Route 53.* * True and False. With Route 53, there is a default limit of 50 domain names. However, this limit can be increased by contacting AWS support. * False. By default, you can support as many domain names on Route 53 as you want. * True. There is a hard limit of 10 domain names. You cannot go above this number.

*True and False. With Route 53, there is a default limit of 50 domain names. However, this limit can be increased by contacting AWS support.*

*I can change the permissions to a role, even if that role is already assigned to an existing EC2 instance, and these changes will take effect immediately.* * False * True

*True*

*I can use the AWS Console to add a role to an EC2 instance after that instance has been created and powered-up.* * False * True

*True*

*RDS Reserved instances are available for multi-AZ deployments.* * False * True

*True*

*Using SAML (Security Assertion Markup Language 2.0), you can give your federated users single sign-on (SSO) access to the AWS Management Console.* * True * False

*True*

*You can add multiple volumes to an EC2 instance and then create your own RAID 5/RAID 10/RAID 0 configurations using those volumes.* * True * False

*True*

*Placement Groups can be created across 2 or more Availability Zones.* * True * False

*True* Technically they are called Spread placement groups. Now you can have placement groups across different hardware and multiple AZs.

*You are running an application in the us-east-1 region. The application needs six EC2 instances running at any given point in time. With five availability zones available in that region (us-east-1a, us-east-1b, us-east-1c, us-east-1d, us-east-1e), which of the following deployment models is going to provide fault tolerance and a cost-optimized architecture if one of the AZs goes down?* * Two EC2 instances in us-east-1a, two EC2 instances in us-east-1b, and two EC2 instances in us-east-1c * Two EC2 instances in us-east-1a, two EC2 instances in us-east-1b, two EC2 instances in us-east-1c, two EC2 instances in us-east-1d, and two EC2 instances in us-east-1e * Three EC2 instances in us-east-1a, three EC2 instances in us-east-1b, and three EC2 instances in us-east-1c * Two EC2 instances in us-east-1a, two EC2 instances in us-east-1b, two EC2 instances in us-east-1c, and two EC2 instances in us-east-1d * Six EC2 instances in us-east-1a and six EC2 instances in us-east-1b

*Two EC2 instances in us-east-1a, two EC2 instances in us-east-1b, two EC2 instances in us-east-1c, and two EC2 instances in us-east-1d* This is more of a mathematical question. Per the requirement, you should always have six EC2 instances running even if you lose one AZ. With option D you will be running only eight servers at any point in time, and even if you lose an AZ, you will still be running six EC2 instances. ------------------------- A is incorrect because at any point in time you will be running six EC2 instances, but if one of the AZs goes down, you will be running only four. B is incorrect because at any time you will be running a total of ten EC2 servers across five AZs with two servers each; if you lose an AZ, you will still run eight. Though this meets the requirement of six running servers after an AZ failure, it is not a cost-optimized solution. C is incorrect because at any point in time you will be running nine EC2 instances, and if an AZ goes down, you will still run six. This meets the business objective, but option D achieves the same with eight servers instead of nine, making D the more cost-optimized solution. With option E, you will be running 12 instances at any point in time, which increases the cost.

*You have created a VPC in the Paris region with one public subnet in each Availability Zone (eu-west-3a, eu-west-3b, and eu-west-3c), and each subnet has one EC2 instance inside it. Now you want to launch ELB nodes in two of the three available AZs. How many private IP addresses will be used by the ELB nodes at the initial launch of the ELB?* * Three nodes in each AZ will consume three private IP addresses * Two nodes in two AZs will consume two private IP addresses * Two nodes in two AZs and one for the ELB service, hence a total of three IP addresses will be consumed. * The ELB service picks the private IP addresses only when traffic flows through the Elastic Load Balancer.

*Two nodes in two AZs will consume two private IP addresses.* Whenever we launch an ELB, the ELB service creates a node in each selected subnet. ----------------------------------- Option A is incorrect because the requirement is to launch the ELB nodes in just two subnets out of three, so the third subnet has no ELB node inside it and no IP address is consumed there. Option C is incorrect; when you launch an ELB, the ELB service itself doesn't consume an IP address, it's the ELB nodes that consume IP addresses. Option D is incorrect, as the IP addresses are assigned to the nodes at the initial launch of the ELB service.

*You have created a Redshift cluster without encryption, and now your security team wants you to encrypt all the data. How do you achieve this?* * It is simple. Just change the settings of the cluster and make it encrypted. All the data will be encrypted. * Encrypt the application that is loading data in Redshift; that way, all the data will already be encrypted. * Run the command encryption cluster from a SQL prompt on Redshift client. * Unload the data from the existing cluster and reload it to a new cluster with encryption on.

*Unload the data from the existing cluster and reload it to a new cluster with encryption on* If you launch a cluster without encryption, the data remains unencrypted during the life of the cluster. If at a later time you decide to encrypt the data, the only way is to unload your data from the existing cluster and reload it in a new cluster with encryption enabled. ------------------------- You cannot change a Redshift cluster from unencrypted to encrypted on the fly. Applications can encrypt application data, but what about the data from other sources or old data in the cluster? There is no such encryption command.

*A company has a set of web servers. It is required to ensure that all the logs from these web servers can be analyzed in real time for any sort of threat detection. Which of the following would assist in this regard?* * Upload all the logs to the SQS Service and then use EC2 Instances to scan the logs. * Upload the logs to Amazon Kinesis and then analyze the logs accordingly. * Upload the logs to CloudTrail and then analyze the logs accordingly. * Upload the logs to Glacier and then analyze the logs accordingly.

*Upload the logs to Amazon Kinesis and then analyze the logs accordingly.* Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. Amazon Kinesis offers key capabilities to cost-effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application. With Amazon Kinesis, you can ingest real-time data such as video, audio, application logs, website clickstreams, and IoT telemetry data for machine learning, analytics, and other applications.
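
A minimal boto3 sketch of a web server shipping a log line into a stream; the stream name and record fields are hypothetical:

```python
import json
import time

import boto3

kinesis = boto3.client("kinesis")

# Hypothetical stream; each web server puts log lines onto the stream,
# where downstream consumers analyze them in real time.
kinesis.put_record(
    StreamName="web-server-logs",
    Data=json.dumps(
        {"host": "web-01", "ts": time.time(), "line": "GET /index.html 200"}
    ).encode("utf-8"),
    PartitionKey="web-01",  # records from one host stay ordered per shard
)
```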

*Your company has a requirement to host a static web site in AWS. Which of the following steps would help implement a quick and cost-effective solution for this requirement?* (Choose 2) * Upload the static content to an S3 bucket. * Create an EC2 Instance and install a web server. * Enable web site hosting for the S3 bucket. * Upload the code to the web server on the EC2 Instance.

*Upload the static content to an S3 bucket.* *Enable web site hosting for the S3 bucket.* S3 would be an ideal, cost-effective solution for the above requirement. You can host a static website on Amazon Simple Storage Service (Amazon S3). On a static website, individual webpages include static content. They might also contain client-side scripts.
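
A minimal boto3 sketch covering both steps; the bucket name and file are hypothetical, and the objects must also be made publicly readable (for example, via a bucket policy):

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket; upload the content, then enable website hosting.
s3.upload_file(
    "index.html", "my-static-site", "index.html",
    ExtraArgs={"ContentType": "text/html"},
)
s3.put_bucket_website(
    Bucket="my-static-site",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)
```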

*Your company has a set of EC2 Instances hosted in AWS. There is a mandate to prepare for disasters and come up with the necessary disaster recovery procedures. Which of the following would help in mitigating the effects of a disaster for the EC2 Instances?* * Place an ELB in front of the EC2 Instances. * Use Auto Scaling to ensure the minimum number of instances are always running. * Use CloudFront in front of the EC2 Instances. * Use AMIs to recreate the EC2 Instances in another region.

*Use AMIs to recreate the EC2 Instances in another region.* You can create an AMI from the EC2 Instances and then copy the AMI to another region. In case of a disaster, an EC2 Instance can be created from the AMI. ----------------------------------- Options A and B are good for fault tolerance, but cannot help completely in disaster recovery for the EC2 Instances. Option C is incorrect because we cannot determine whether CloudFront would be helpful in this scenario without knowing what is hosted on the EC2 Instance. For disaster recovery, we have to make sure that we can launch instances in another region when required. Hence, options A, B, and C are not feasible solutions.
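
A hedged boto3 sketch of copying an AMI to another region; the AMI ID and regions are hypothetical. Note that copy_image is called in the destination region:

```python
import boto3

# copy_image is invoked in the *destination* region (here, us-west-2).
ec2_west = boto3.client("ec2", region_name="us-west-2")

# Hypothetical AMI ID from the source region (us-east-1).
copy = ec2_west.copy_image(
    SourceImageId="ami-0123456789abcdef0",
    SourceRegion="us-east-1",
    Name="dr-copy-of-web-server",
)
print(copy["ImageId"])  # launch replacement instances from this AMI
```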

*Which product should you choose if you want to have a solution for versioning your APIs without having the pain of managing the infrastructure?* * Install a version control system on EC2 servers * Use Elastic Beanstalk * Use API Gateway * Use Kinesis Data Firehose

*Use API Gateway* EC2 servers and Elastic Beanstalk both need you to manage some infrastructure; Kinesis Data Firehose is used for ingesting data.

*A company stores its log data in an S3 bucket. There is a current need to have search capabilities available for the data in S3. How can this be achieved in an efficient and ongoing manner?* (Choose 2) * Use AWS Athena to query the S3 bucket. * Create a Lifecycle Policy for the S3 bucket. * Load the data into Amazon ElasticSearch. * Load the data into Glacier.

*Use AWS Athena to query the S3 bucket.* *Load the data into Amazon ElasticSearch.* Amazon Athena is a service that enables a data analyst to perform interactive queries in the AWS public cloud on data stored in AWS S3. Since it's a serverless query service, an analyst doesn't need to manage any underlying compute infrastructure to use it.
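
A minimal boto3 sketch of an Athena query over the logs; the database, table, and output location are hypothetical, and a table over the S3 data must already be defined:

```python
import boto3

athena = boto3.client("athena")

# Hypothetical database/table; Athena queries the log data in place in S3,
# with no servers to manage. Results land in the output S3 location.
resp = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) FROM access_logs "
                "GROUP BY status ORDER BY 2 DESC",
    QueryExecutionContext={"Database": "weblogs"},
    ResultConfiguration={"OutputLocation": "s3://query-results-bucket/"},
)
print(resp["QueryExecutionId"])  # poll get_query_execution for completion
```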

*Your company has a set of resources hosted on the AWS Cloud. As part of a new governing model, there is a requirement that all activity on AWS resources should be monitored. What is the most efficient way to have this implemented?* * Use VPC Flow Logs to monitor all activity in your VPC. * Use AWS Trusted Advisor to monitor all of your AWS resources. * Use AWS Inspector to inspect all of the resources in your account. * Use AWS CloudTrail to monitor all API activity.

*Use AWS CloudTrail to monitor all API activity.* AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. The event history simplifies security analysis, resource change tracking, and troubleshooting. Visibility into your AWS account activity is a key aspect of security and operational best practices. You can use CloudTrail to view, search, download, archive, analyze, and respond to account activity across your AWS infrastructure. You can identify who or what took which action, what resources were acted upon, when the event occurred, and other details to help you analyze and respond to activity in your AWS account. You can integrate CloudTrail into applications using the API, automate trail creation for your organization, check the status of trails you create, and control how users view CloudTrail events.

*A company's requirement is to have a Stack-based model for its resources in AWS. There is a need to have different stacks for the Development and Production environments. Which of the following can be used to fulfill this required methodology?* * Use EC2 tags to define different stack layers for your resources. * Define the metadata for the different layers in DynamoDB. * Use AWS OpsWorks to define the different layers for your application. * Use AWS Config to define the different layers for your application.

*Use AWS OpsWorks to define the different layers for your application.* AWS OpsWorks Stacks lets you manage applications and servers on AWS and on-premises. With OpsWorks Stacks, you can model your application as a stack containing different layers, such as load balancing, database, and application server. You can deploy and configure Amazon EC2 instances in each layer or connect other resources such as Amazon RDS databases. A stack is basically a collection of instances that are managed together for serving a common task. Consider a sample stack whose purpose is to serve web applications; it would comprise the following instances: - A set of application server instances, each of which handles a portion of the incoming traffic. - A load balancer instance, which takes incoming traffic and distributes it across the application servers. - A database instance, which serves as a back-end data store for the application servers. A common practice is to have multiple stacks that represent different environments; a typical set of stacks consists of: - A development stack to be used by the developers to add features, fix bugs, and perform other development and maintenance tasks. - A staging stack to verify updates or fixes before exposing them publicly. - A production stack, which is the public-facing version that handles incoming requests from users.

*You want to connect the applications running on your on-premises data center to the AWS cloud. What is the secure way of connecting them to AWS?* (Choose two.) * Use AWS Direct Connect * VPN * Use an elastic IP * Connect to AWS via the Internet

*Use AWS Direct Connect* *VPN* By using Direct Connect or VPN, you can securely connect to AWS from your data center. ------------------------- An elastic IP address provides a static IP address that has nothing to do with connecting the on-premises data center with AWS. Connecting via the Internet won't be secure.

*A company is using a Redshift cluster to store their data warehouse. There is a requirement from the Internal IT Security team to encrypt the data for the Redshift database. How can this be achieved?* * Encrypt the EBS volumes of the underlying EC2 Instances. * Use AWS KMS Customer Default master key. * Use SSL/TLS for encrypting the data. * Use S3 encryption.

*Use AWS KMS Customer Default master key.* Amazon Redshift uses a hierarchy of encryption keys to encrypt the database. You can use either AWS Key Management Service (AWS KMS) or a hardware security module (HSM) to manage the top-level encryption key in this hierarchy. The process that Amazon Redshift uses for encryption differs depending on how you manage keys.

*A company is planning to test a large set of IoT-enabled devices. These devices will be streaming data every second. A proper service needs to be chosen in AWS which could be used to collect and analyze these streams in real time. Which of the following could be used for this purpose?* * Use AWS EMR to store and process the streams. * Use AWS Kinesis to process and analyze the data. * Use AWS SQS to store the data. * Use SNS to store the data.

*Use AWS Kinesis to process and analyze the data.* Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. Amazon Kinesis offers key capabilities to cost-effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application. With Amazon Kinesis, you can ingest real-time data such as video, audio, application logs, website clickstreams, and IoT telemetry data for machine learning, analytics, and other applications. Option B: Amazon Kinesis can be used to store, process, and analyze real-time streaming data. ----------------------------------- Option A: Amazon EMR can be used to process applications with data-intensive workloads. Option C: SQS is a fully managed message queuing service that makes it easy to decouple and scale microservices, distributed systems, and serverless applications. Option D: SNS is a flexible, fully managed pub/sub messaging and mobile notifications service for coordinating the delivery of messages to subscribing endpoints and clients.
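A minimal producer-side sketch with boto3, assuming a hypothetical stream name; each sensor reading is written as one record, with the device id as the partition key:
```python
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

# Hypothetical IoT telemetry payload.
reading = {"device_id": "sensor-0042", "temperature": 21.7, "ts": 1690000000}

kinesis.put_record(
    StreamName="iot-telemetry",          # hypothetical stream name
    Data=json.dumps(reading).encode(),   # records are raw bytes
    PartitionKey=reading["device_id"],   # controls shard assignment
)
```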

*A company with a set of Admin jobs (.NET Core) currently set up in the C# programming language is moving their infrastructure to AWS. Which of the following would be an efficient means of hosting the Admin-related jobs in AWS?* * Use AWS DynamoDB to store the jobs and then run them on demand. * Use AWS Lambda functions with C# for the Admin jobs. * Use AWS S3 to store the jobs and then run them on demand. * Use AWS Config functions with C# for the Admin jobs.

*Use AWS Lambda functions with C# for the Admin jobs.* The best and most efficient option is to host the jobs using AWS Lambda. This service has the facility to run code in the C# programming language. AWS Lambda is a compute service that lets you run code without provisioning or managing servers. AWS Lambda executes your code only when needed and scales automatically, from a few requests per day to thousands per second. You pay only for the compute time you consume - there is no charge when your code is not running. With AWS Lambda, you can run code for virtually any type of application or backend service - all with zero administration.

*You are a developer and would like to run your code without provisioning or managing servers. Which service should you choose to do so while making sure it performs well operationally?* * Launch an EC2 server and run the code from there * Use API Gateway * Use AWS S3 * Use AWS Lambda

*Use AWS Lambda* Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume. There is no charge when your code is not running. ------------------------- If you run the code on an EC2 server, you still need to provision an EC2 server, which has the overhead of managing the server. You can't run code from API Gateway. AWS S3 is an object store and can't be used for running code.

*You are running an application on Amazon EC2 instances, and that application needs access to other AWS resources. You don't want to store any long-term credentials on the instance. What service should you use to provide the short-term security credentials to interact with AWS resources?* * Use an IAM policy. * Use AWS Config * Use AWS Security Token Service (STS). * AWS CloudTrail.

*Use AWS Security Token Service (STS)* AWS Security Token Service (AWS STS) is used to create and provide trusted users with temporary security credentials that can control access to your AWS resources. ----------------------------------- An IAM policy can't provide temporary security credentials, AWS Config is used to monitor changes in AWS resources, and AWS CloudTrail is used to record the trail of API calls.
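As an illustration of the STS API itself, here is a minimal boto3 sketch of AssumeRole (the role ARN is hypothetical). Note that on EC2 the recommended pattern is an instance profile, where the SDK fetches these temporary credentials for you automatically:
```python
import boto3

sts = boto3.client("sts")

# Request temporary credentials for a hypothetical role.
resp = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/app-resource-access",
    RoleSessionName="my-app-session",
)
creds = resp["Credentials"]  # AccessKeyId, SecretAccessKey, SessionToken, Expiration

# Use the short-term credentials with another service client.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```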

*You have been tasked with moving petabytes of data to the AWS cloud. What is the most efficient way of doing this?* * Upload them to Amazon S3 * Use AWS Snowball * Use AWS Server Migration Service * Use AWS Database Migration Service

*Use AWS Snowball* ----------------------------------- You can also upload data to Amazon S3, but if you have petabytes of data and want to upload it to Amazon S3, it is going to take a lot of time. The quickest way would be to leverage AWS Snowball. AWS Server Migration Service is an agentless service that helps coordinate, automate, schedule, and track large-scale server migrations, whereas AWS Database Migration Service is used to migrate the data of the relational database or data warehouse.

*A customer needs corporate IT governance and cost oversight of all AWS resources consumed by its divisions. Each division has its own AWS account and there is a need to ensure that the security policies are kept in place at the Account level. How can you achieve this?* (Choose 2) * Use AWS Organizations * Club all divisions under a single account instead * Use IAM Policies to segregate access * Use Service control policies

*Use AWS Organizations* *Use Service control policies* With AWS Organizations, you can centrally manage policies across multiple AWS accounts without having to use custom scripts and manual processes. For example, you can apply service control policies (SCPs) across multiple AWS accounts that are members of an organization. SCPs allow you to define which AWS service APIs can and cannot be executed by AWS Identity and Access Management (IAM) entities (such as IAM users and roles) in your organization's member AWS accounts. SCPs are created and applied from the master account, which is the AWS account that you used when you created your organization. ----------------------------------- Option B is incorrect since the question mentions that you need to use separate AWS accounts. Option C is incorrect since you need to use service control policies; AWS IAM doesn't provide the facility to define access permissions at that granular level, i.e., which AWS service APIs can and cannot be executed by IAM entities.
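A hedged sketch of creating and attaching an SCP with boto3, run from the master (management) account; the policy content and target OU id are hypothetical, and the example simply denies all Redshift API calls:
```python
import json
import boto3

org = boto3.client("organizations")

# Hypothetical SCP: deny all Amazon Redshift API calls in member accounts.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Deny", "Action": "redshift:*", "Resource": "*"}
    ],
}

policy = org.create_policy(
    Name="deny-redshift",
    Description="Block Redshift usage in member accounts",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-abcd-12345678",  # hypothetical OU id
)
```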

*You are creating an application, and the application is going to store all the transactional data in a relational database system. You are looking for a highly available solution for a relational database for your application. Which option should you consider for this?* * Use Redshift * Use DynamoDB * Host the database on EC2 servers. * Use Amazon Aurora.

*Use Amazon Aurora* Amazon Aurora is the solution for relational databases that comes with built-in high availability. ------------------------- Redshift is the data-warehouse offering of AWS. DynamoDB is a NoSQL database, which is not a relational database. You can host the database in EC2, but you will have the overhead of managing the same whereas Aurora takes away all the management overhead from you.

*You are planning to run a mission-critical online order-processing system on AWS, and to run that application, you need a database. The database must be highly available and high performing, and you can't lose any data. Which database meets these criteria?* * Use an Oracle database hosted in EC2. * Use Amazon Aurora. * Use Redshift. * Use RDS MySQL.

*Use Amazon Aurora* Amazon Aurora stores six copies of the data across three AZs. It provides five times more performance than RDS MySQL. ----------------------------------- If you host an Oracle database on EC2 servers, it will be much more expensive compared to Amazon Aurora, and you will need to manage it manually. Amazon Redshift is a solution for a data warehouse.

*You are working as an AWS Architect for a retail company using an AWS EC2 instance for a web application. The company is using Provisioned IOPS SSD EBS volumes to store the product database. This is a critical database & you need to ensure that appropriate backups are accomplished every 12 hours. Also, you need to ensure that storage space is optimally used for storing all these snapshots, removing all older files. Which of the following can help to meet this requirement with the least management overhead?* * Manually create snapshots & delete old snapshots for EBS volumes as this is critical data. * Use Amazon CloudWatch events to initiate AWS Lambda which will create snapshots of EBS volumes along with deletion of old snapshots. * Use Amazon Data Lifecycle Manager to schedule EBS snapshots and delete old snapshots as per retention policy. * Use a third-party tool to create snapshots of EBS volumes along with deletion of old snapshots.

*Use Amazon Data Lifecycle Manager to schedule EBS snapshots and delete old snapshots as per retention policy.* Amazon Data Lifecycle Manager can be used for the creation, retention & deletion of EBS snapshots. It protects critical data by initiating backups of Amazon EBS volumes at selected intervals, along with storing & deleting old snapshots to save storage space & cost. ----------------------------------- Option A is incorrect as this will result in additional admin work & there can be a risk of losing critical data due to manual errors. Option B is incorrect as for this we will need additional config changes in CloudWatch & AWS Lambda. Option D is incorrect as this will result in additional cost to maintain third-party software.
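A minimal boto3 sketch of such a policy, assuming a hypothetical IAM role ARN and that the target volumes carry a Backup=true tag; it snapshots every 12 hours and retains the last 14 snapshots:
```python
import boto3

dlm = boto3.client("dlm", region_name="us-east-1")

dlm.create_lifecycle_policy(
    ExecutionRoleArn="arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole",
    Description="12-hourly EBS snapshots with automatic cleanup",
    State="ENABLED",
    PolicyDetails={
        "ResourceTypes": ["VOLUME"],
        "TargetTags": [{"Key": "Backup", "Value": "true"}],
        "Schedules": [
            {
                "Name": "every-12-hours",
                "CreateRule": {"Interval": 12, "IntervalUnit": "HOURS",
                               "Times": ["09:00"]},
                "RetainRule": {"Count": 14},  # old snapshots deleted automatically
            }
        ],
    },
)
```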

*As a Solutions Architect for a multinational organization having more than 150,000 employees, management has decided to implement real-time analysis of their employees' time spent in offices across the globe. You are tasked to design an architecture which will receive the inputs from 10,000+ sensors with swipe machines sending in-and-out data from across the globe, each sending 20KB of data every 5 seconds in JSON format. The application will process and analyze the data and upload the results to dashboards in real time. Other application requirements are: the ability to apply real-time analytics on the captured data, processing of captured data that is parallel and durable, and the application must be scalable as the load varies and new sensors are added or removed at various facilities. The analytic processing results are stored in a persistent data storage for data mining. What combination of AWS services would be used for the above scenario?* * Use EMR to copy the data coming from Swipe machines into DynamoDB and make it available for analytics. * Use Amazon Kinesis Streams to ingest the Swipe data coming from sensors, Custom Kinesis Streams Applications will analyse the data, move analytics outcomes to Redshift using AWS EMR * Utilize SQS to receive the data coming from sensors, use Kinesis Firehose to analyse the data from SQS, then save the results to a Multi-AZ RDS instance. * Use Amazon Kinesis Streams to ingest the sensor's data, custom Kinesis Streams applications will analyse the data, move analytics outcomes to RDS using AWS EMR.

*Use Amazon Kinesis Streams to ingest the Swipe data coming from sensors, Custom Kinesis Streams Applications will analyse the data, move analytics outcomes to Redshift using AWS EMR.* Option B is correct, as Amazon Kinesis Streams can read data from thousands of sources like social media, survey-based data, etc.; custom Kinesis Streams applications can analyse the data and feed it, using AWS EMR, to an analytics-based database like Redshift, which works on OLAP. ----------------------------------- Option A is incorrect; EMR is not for receiving real-time data from thousands of sources. EMR is mainly used for Hadoop-ecosystem-based data used for Big Data analysis. Option C is incorrect; SQS cannot be used to read real-time data from thousands of sources. Besides, Kinesis Firehose is used to ship data to other AWS services, not for analysis. And finally, RDS is again an OLTP-based database. Option D is incorrect; AWS EMR can read large amounts of data, however RDS is a transactional database that works based on OLTP and thus cannot store the analytical data.

*A company plans to have their application hosted in AWS. This application has users uploading files and then getting a public URL for downloading them at a later stage. Which of the following designs would help fulfill this requirement?* * Have EBS Volumes hosted on EC2 Instances to store the files. * Use Amazon S3 to host the files. * Use Amazon Glacier to host the files since this would be the cheapest storage option. * Use EBS Snapshots attached to EC2 Instances to store the files.

*Use Amazon S3 to host the files.* If you need storage for the Internet, AWS Simple Storage Service is the best option. Each uploaded file gets a URL, which can be made public and used to download the file at a later point in time. ----------------------------------- Options A and D are incorrect because EBS Volumes and Snapshots do not have public URLs. Option C is incorrect because Glacier is mainly used for data archiving purposes.

*A company has an application hosted in AWS. This application consists of EC2 Instances which sit behind an ELB. The following are requirements from an administrative perspective: a) Ensure notifications are sent when the read requests go beyond 1000 requests per minute. b) Ensure notifications are sent when the latency goes beyond 10 seconds. c) Any API activity which calls for sensitive data should be monitored. Which of the following can be used to satisfy these requirements?* (Choose 2) * Use CloudTrail to monitor the API Activity. * Use CloudWatch logs to monitor the API Activity. * Use CloudWatch metrics for the metrics that need to be monitored as per the requirement and set up an alarm activity to send out notifications when the metric reaches the set threshold limit. * Use custom log software to monitor the latency and read requests to the ELB.

*Use CloudTrail to monitor the API Activity.* *Use CloudWatch metrics for the metrics that need to be monitored as per the requirement and set up an alarm activity to send out notifications when the metric reaches the set threshold limit.* When you use CloudWatch metrics for an ELB, you get the number of read requests and the latency out of the box. CloudTrail is a web service that records AWS API calls for your AWS account and delivers log files to an Amazon S3 bucket. The recorded information includes the identity of the user, the start time of the AWS API call, the source IP address, the request parameters, and the response elements returned by the service.
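A hedged boto3 sketch of one such alarm for a Classic Load Balancer (the ELB name and SNS topic ARN are hypothetical); the same pattern with the Latency metric covers requirement (b):
```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Alarm when the ELB receives more than 1000 requests in a minute.
cloudwatch.put_metric_alarm(
    AlarmName="elb-high-request-count",
    Namespace="AWS/ELB",
    MetricName="RequestCount",
    Dimensions=[{"Name": "LoadBalancerName", "Value": "my-prod-elb"}],
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=1000,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)
```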

*Your company currently has a set of EC2 Instances hosted in AWS. The states of these instances need to be monitored and each state change needs to be recorded. Which of the following can help fulfill this requirement?* (Choose 2) * Use CloudWatch logs to store the state change of the instances. * Use CloudWatch Events to monitor the state change of the events. * Use SQS trigger a record to be added to a DynamoDB table. * Use AWS Lambda to store a change record in a DynamoDB table.

*Use CloudWatch logs to store the state change of the instances.* *Use CloudWatch Events to monitor the state change of the events.* Using CloudWatch Events, we can monitor the changes in state for EC2 instances, and using CloudWatch Logs, those state changes can be recorded. ----------------------------------- Option C is incorrect as SQS cannot be used for monitoring. Option D is incorrect as AWS Lambda cannot be used for monitoring.
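A minimal boto3 sketch of such a rule (the target Lambda ARN is hypothetical, and the Lambda must separately grant CloudWatch Events permission to invoke it); the rule matches every EC2 instance state-change event and forwards it to a target that records it:
```python
import json
import boto3

events = boto3.client("events", region_name="us-east-1")

# Rule that fires on every EC2 instance state change.
events.put_rule(
    Name="ec2-state-change",
    EventPattern=json.dumps({
        "source": ["aws.ec2"],
        "detail-type": ["EC2 Instance State-change Notification"],
    }),
    State="ENABLED",
)

# Hypothetical Lambda that records each state change.
events.put_targets(
    Rule="ec2-state-change",
    Targets=[{"Id": "1",
              "Arn": "arn:aws:lambda:us-east-1:123456789012:function:record-state"}],
)
```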

*A company is planning to run a number of Admin scripts using the AWS Lambda service. There is a need to detect errors that occur while the scripts run. How can this be accomplished in the most effective manner?* * Use CloudWatch metrics and logs to watch for errors. * Use CloudTrail to monitor for errors. * Use the AWS Config service to monitor for errors. * Use the AWS Inspector service to monitor for errors.

*Use CloudWatch metrics and logs to watch for errors.* AWS Lambda automatically monitors Lambda functions on your behalf, reporting metrics through Amazon CloudWatch. To help you troubleshoot failures in a function, Lambda logs all requests handled by your function and also automatically stores logs generated by your code through Amazon CloudWatch Logs.

*You are working as an AWS Administrator for a global IT company. The Developer Team has developed a new intranet application for project delivery using AWS EC2 instances in us-west-2. Coding for this application is done using Python with a code size of less than 5 MB & changes to the code are done on a quarterly basis. This new application will be set up on new redundant infrastructure & the company would like to automate this process. For deploying new features, AWS CodePipeline will be used for an automated release cycle. Which of the following will you recommend as the source stage & deploy stage integration along with AWS CodePipeline?* * Use CodePipeline with source stage as CodeCommit & deploy stage using AWS CodeDeploy * Use CodePipeline with source stage as CodeCommit & deploy stage using AWS Elastic Beanstalk. * Use CodePipeline with source stage as an S3 bucket having versioning enabled & deploy stage using AWS Elastic Beanstalk * Use CodePipeline with source stage as an S3 bucket having versioning enabled & deploy stage using AWS CodeDeploy

*Use CodePipeline with source stage as CodeCommit & deploy stage using AWS Elastic Beanstalk.* As the code size is less than 5 MB with a smaller number of changes to the code, AWS CodeCommit can be used as the source stage integration with AWS CodePipeline. Also, new infrastructure needs to be built for this new application deployment; AWS Elastic Beanstalk can be used to build & manage redundant resources. ----------------------------------- Option A is incorrect, as there is no existing infrastructure & new resources need to be deployed; AWS CodeDeploy is not a correct option. Option C is incorrect as the code size is less than 5 MB with a smaller number of changes; S3 would not be a correct option. Option D is incorrect, as there is no existing infrastructure & new resources need to be deployed, so AWS CodeDeploy is not a correct option; also, the code size is less than 5 MB with a smaller number of changes, so S3 would not be a correct option.

*You are a startup company that is releasing the first iteration of its app. Your company doesn't have a directory service for its intended users but wants the users to be able to sign in and use the app. What is your advice to your leadership to implement a solution quickly?* * Use AWS Cognito although it only supports social identity providers like Facebook. * Let each user create an AWS user account to be managed via IAM. * Invest heavily in Microsoft Active Directory as it's the industry standard. * Use Cognito Identity along with a User Pool to securely save user's profile attributes.

*Use Cognito Identity along with a User Pool to securely save user's profile attributes.* ----------------------------------- Option A is incorrect; Cognito supports more than just social identity providers, including OIDC, SAML, and its own identity pools. Option B isn't an efficient means of managing user authentication. Option C isn't the most efficient means to authenticate and save user information.

*A company has set up an application in AWS that interacts with DynamoDB. It is required that when an item is modified in a DynamoDB table, an immediate entry is made to the associated application. How can this be accomplished?* (Choose 2) * Setup CloudWatch to monitor the DynamoDB table for changes. Then trigger a Lambda function to send the changes to the application. * Setup CloudWatch logs to monitor the DynamoDB table for changes. Then trigger AWS SQS to send the changes to the application. * Use DynamoDB streams to monitor the changes to the DynamoDB table. * Trigger a lambda function to make an associated entry in the application as soon as the DynamoDB streams are modified.

*Use DynamoDB streams to monitor the changes to the DynamoDB table.* *Trigger a lambda function to make an associated entry in the application as soon as the DynamoDB streams are modified.* DynamoDB Streams can be used to monitor the changes to a DynamoDB table. A DynamoDB stream is an ordered flow of information about changes to items in an Amazon DynamoDB table. When you enable a stream on a table, DynamoDB captures information about every modification to data items in the table. DynamoDB is also integrated with Lambda: if you enable DynamoDB Streams on a table, you can associate the stream ARN with a Lambda function that you write. Immediately after an item in the table is modified, a new record appears in the table's stream. AWS Lambda polls the stream and invokes your Lambda function synchronously when it detects new stream records. Since our requirement is to have an immediate entry made to an application when an item in the DynamoDB table is modified, a Lambda function is also required. Example: Consider a mobile gaming app that writes to a GameScores table. Whenever the top score of the GameScores table is updated, a corresponding stream record is written to the table's stream. This event could then trigger a Lambda function that posts a congratulatory message on a social media network handle.
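A minimal sketch of such a Lambda handler in Python (the downstream notification call is left hypothetical); the event shape shown is what DynamoDB Streams delivers to Lambda:
```python
def handler(event, context):
    """Invoked by AWS Lambda with a batch of DynamoDB stream records."""
    for record in event["Records"]:
        # eventName is INSERT, MODIFY, or REMOVE.
        if record["eventName"] == "MODIFY":
            keys = record["dynamodb"]["Keys"]
            new_image = record["dynamodb"].get("NewImage", {})
            # Hypothetical hook: push the change to the associated application.
            notify_application(keys, new_image)


def notify_application(keys, new_image):
    # Placeholder - e.g. POST to the application's API endpoint.
    print("Item changed:", keys, "->", new_image)
```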

*You want to host a static website on AWS. As a solutions architect, you have been given a task to establish a serverless architecture for that. Which of the following could be included in the proposed architecture?* (Choose 2) * Use DynamoDB to store data in tables. * Use EC2 to host the data on an EBS Volume. * Use the Simple Storage Service to store data. * Use AWS RDS to store the data.

*Use DynamoDB to store data in tables.* *Use the Simple Storage Service to store data.* Both the Simple Storage Service and DynamoDB are completely serverless offerings from AWS, for which you don't need to maintain servers, and your application gets automated high availability.

*A company has a set of EBS Volumes that need to be catered for in case of a disaster. How will you achieve this using existing AWS services effectively?* * Create a script to copy the EBS Volume to another Availability Zone. * Create a script to copy the EBS Volume to another region. * Use EBS Snapshots to create the volumes in another region. * Use EBS Snapshots to create the volumes in another Availability Zone.

*Use EBS Snapshots to create the volumes in another region.* A snapshot is constrained to the region where it was created. After you create a snapshot of an EBS volume, you can use it to create new volumes in the same region. You can also copy snapshots across regions, making it possible to use multiple regions for geographical expansion, data center migration, and disaster recovery. *Note:* It's not possible to provide each and every step in an option, and in the AWS exam you will also see these kinds of options. Option C is not describing the whole procedure; it's simply giving the idea that we can use snapshots to create the volumes in the other region. That's the reason we also provided the explanation part to understand the concept. Catered means provisioning. ----------------------------------- Options A and B are incorrect because you can't directly copy EBS Volumes. Option D is incorrect because disaster recovery always looks at ensuring resources are created in another region.
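A minimal boto3 sketch of the cross-region copy (snapshot id and regions are hypothetical); as with AMI copies, the request is made in the destination region:
```python
import boto3

SOURCE_REGION = "us-east-1"
DR_REGION = "eu-west-1"

# The copy request runs in the destination region.
ec2 = boto3.client("ec2", region_name=DR_REGION)
resp = ec2.copy_snapshot(
    SourceRegion=SOURCE_REGION,
    SourceSnapshotId="snap-0123456789abcdef0",  # hypothetical snapshot id
    Description="DR copy of production data volume",
)
print("New snapshot in", DR_REGION, ":", resp["SnapshotId"])
# A volume can then be created from this snapshot with ec2.create_volume(...).
```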

*A company is planning to use Docker containers and necessary container orchestration tools for their batch processing requirements. There is a requirement for batch processing for both critical and non-critical data. Which of the following is the best implementation step for this requirement, to ensure that cost is effectively managed?* * Use Kubernetes for container orchestration and Reserved Instances for all underlying instances. * Use ECS orchestration and Reserved Instances for all underlying instances. * Use Docker for container orchestration and a combination of Spot and Reserved Instances for the underlying instances. * Use ECS for container orchestration and a combination of Spot and Reserved Instances for the underlying instances.

*Use ECS for container orchestration and a combination of Spot and Reserved Instances for the underlying instances.* The Elastic Container service from AWS can be used for container orchestration. Since there are both critical and non-critical loads, one can use Spot instances for the non-critical workloads for ensuring cost is kept at a minimum.

*You are working for a financial institution using AWS cloud infrastructure. All project-related data is uploaded to Amazon EFS. This data is retrieved from an on-premises data centre connecting to the VPC via AWS Direct Connect. You need to ensure that all client access to EFS is encrypted using TLS 1.2 to adhere to the latest security guidelines issued by the security team. Which of the following is the cost-effective recommended practice for securing data in transit while accessing data from Amazon EFS?* * Use EFS mount helper to encrypt data in transit. * Use stunnel to connect to Amazon EFS & encrypt traffic in transit. * Use third-party tool to encrypt data in transit. * Use NFS client to encrypt data in transit.

*Use EFS mount helper to encrypt data in transit* While mounting Amazon EFS, if encryption of data in transit is enabled, the EFS mount helper initializes a client stunnel process to encrypt data in transit. The EFS mount helper uses TLS 1.2 to encrypt data in transit. ----------------------------------- Option B is incorrect as using stunnel directly for encryption of data in transit will work fine, but there would be additional admin work to download & install stunnel for each mount. Option C is incorrect as using a third-party tool would be a costly option. Option D is incorrect as the NFS client can't be used to encrypt data in transit. The amazon-efs-utils package, which consists of the EFS mount helper, can be used.

*An application team needs to quickly provision a development environment consisting of a web and database layer. Which of the following would be the quickest and most ideal way to get this setup in place?* * Create Spot Instances and install the web and database components. * Create Reserved Instances and install the web and database components. * Use AWS Lambda to create the web components and AWS RDS for the database layer. * Use Elastic Beanstalk to quickly provision the environment.

*Use Elastic Beanstalk to quickly provision the environment.* With Elastic Beanstalk, you can quickly deploy and manage applications in the AWS Cloud without worrying about the infrastructure that runs those applications. AWS Elastic Beanstalk reduces management complexity without restricting choice or control. You simply upload your application, and Elastic Beanstalk automatically handles the details of capacity provisioning, load balancing, scaling, and application health monitoring. It does support RDS. AWS Elastic Beanstalk provides connection information to your instances by setting environment properties for the database hostname, username, password, table name, and port. When you add a database to your environment, its lifecycle is tied to your environment's. ----------------------------------- Option A is incorrect; Amazon EC2 Spot instances are spare compute capacity in the AWS cloud available to you at steep discounts compared to On-Demand prices. Option B is incorrect; a Reserved Instance is a reservation of resources and capacity, for either one or three years, for a particular Availability Zone within a region. Option C is incorrect; AWS Lambda is a compute service that makes it easy for you to build applications that respond quickly to new information, not to provision a new environment.

*You have a homegrown application that you want to deploy in the cloud. The application runs on multiple EC2 instances. During the batch job process, the EC2 servers create an output file that is often used as an input file by a different EC2 server. This output file needs to be stored in a shared file system so that all the EC2 servers can access it at the same time. How do you achieve this so that your performance is not impacted?* * Create an EBS volume and mount it across all the EC2 instances. * Use Amazon S3 for storing the output files. * Use S3-infrequent access for storing the output files. * Use Elastic File System for storing the output files.

*Use Elastic File System for storing the output files.* EFS provides shared file system access. ------------------------- An EBS volume can't be mounted on more than one EC2 instance (the io1/io2 Multi-Attach feature is a narrow exception and still doesn't provide a shared file system). S3 is an object store and can't be used for this purpose.

*When managing permissions for the API Gateway, what can be used to ensure that the right level of permissions are given to Developers, IT Admins and users? These permissions should be easily managed.* * Use the secure token service to manage the permissions for different users. * Use IAM Policies to create different policies for different types of users. * Use AWS Config tool to manage the permissions for different users. * Use IAM Access Keys to create sets of keys for different types of users.

*Use IAM Policies to create different policies for different types of users.* You control access to Amazon API Gateway with IAM permissions by controlling access to the following two API Gateway component processes. - To create, deploy, and manage APIs in API Gateway, you must grant the API developer permissions to perform the required actions supported by the API management component of API Gateway. - To call a deployed API or to refresh the API caching, you must grant the API caller permissions to perform the required IAM actions supported by the API execution component of API Gateway.

*An EC2 Instance hosts a Java-based application that accesses a DynamoDB table. This EC2 Instance is currently serving production users. Which of the following is a secure way for the EC2 Instance to access the DynamoDB table?* * Use IAM Roles with permissions to interact with DynamoDB and assign it to the EC2 Instance. * Use KMS Keys with the right permissions to interact with DynamoDB and assign it to the EC2 Instance. * Use IAM Access Keys with the right permissions to interact with DynamoDB and assign it to the EC2 Instance. * Use IAM Access Groups with the right permission to interact with DynamoDB and assign it to the EC2 Instance.

*Use IAM Roles with permissions to interact with DynamoDB and assign it to the EC2 Instance.* To ensure secure access to AWS resources from EC2 Instances, always assign a role to the EC2 Instance. An IAM role is similar to a user, in that it is an AWS identity with permission policies that determine what the identity can and cannot do in AWS. However, instead of being uniquely associated with one person, a role is intended to be assumable by anyone who needs it. Also, a role does not have standard long-term credentials (password or access keys) associated with it. Instead, if a user assumes a role, temporary security credentials are created dynamically and provided to the user. You can use roles to delegate access to users, applications, or services that don't normally have access to your AWS resources. *Note:* You can attach an IAM role to an existing EC2 instance.
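The practical effect is that application code on the instance needs no credentials at all. A sketch assuming a hypothetical table name and that a role with DynamoDB permissions is attached to the instance:
```python
import boto3

# No access keys anywhere: boto3 automatically picks up the temporary
# credentials that the attached IAM role exposes via instance metadata.
dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
table = dynamodb.Table("orders")  # hypothetical table name

table.put_item(Item={"order_id": "12345", "status": "PENDING"})
```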

*The application you are going to deploy on the EC2 servers is going to call APIs. How do you securely pass the credentials to your application?* * Use IAM roles for the EC2 instance. * Keep the API credentials in S3. * Keep the API credentials in DynamoDB. * Embed the API credentials in your application JAR files.

*Use IAM roles for the EC2 instance.* IAM roles allow you to securely pass the credentials from one service to the other. ------------------------- Even if you use S3 or DynamoDB for storing the API credentials, you still need to connect EC2 with S3 or DynamoDB. How do you pass the credentials between EC2 and S3 or DynamoDB? Embedding the API credentials in a JAR file is not secure.

*A customer wants to create EBS Volumes in AWS. The data on the volume is required to be encrypted at rest. How can this be achieved?* * Create an SSL Certificate and attach it to the EBS Volume. * Use KMS to generate encryption keys which can be used to encrypt the volume. * Use CloudFront in front of the EBS Volume to encrypt all requests. * Use EBS Snapshots to encrypt the requests.

*Use KMS to generate encryption keys which can be used to encrypt the volume.* When you create a volume, you have the option to encrypt the volume using keys generated by the Key Management Service. ----------------------------------- Option A is incorrect since SSL helps to encrypt data in transit. Option C is incorrect because it also does not help in encrypting the data at rest. Option D is incorrect because the snapshot of an unencrypted volume is also unencrypted.
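A minimal boto3 sketch (the KMS key ARN is hypothetical; omitting KmsKeyId would fall back to the account's default EBS encryption key):
```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,                 # GiB
    VolumeType="gp2",
    Encrypted=True,           # encrypt data at rest
    KmsKeyId="arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab",
)
print("Created encrypted volume:", volume["VolumeId"])
```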

*You are creating a data lake, and one of the criteria you are looking for is faster performance. You are looking for an ability to transform the data directly during the ingestion process to save time. Which AWS service should you choose for this?* * Use Kinesis Analytics to transform the data. * Ingest the data in S3 and then load it in Redshift to transform it. * Ingest the data in S3 and then use EMR to transform it. * Use Kinesis Firehose.

*Use Kinesis Analytics to transform the data* Kinesis Analytics has the ability to transform the data during ingestion. ------------------------- Loading the data in S3 and then using EMR to transform it is going to take a lot of time. You are looking for faster performance. Redshift is the data warehouse solution. Kinesis Firehose can ingest the data, but it has no transformation capability of its own (it can only hand records to a Lambda function for transformation), so Kinesis Analytics is the better fit here.

*You want to transform the data while it is coming in. What is the easiest way of doing this?* * Use Kinesis Data Analytics * Spin off an EMR cluster while the data is coming in * Install Hadoop on EC2 servers to do the processing * Transform the data in S3

*Use Kinesis Data Analytics* Using EC2 servers or Amazon EMR, you can transform the data, but that is not the easiest way to do it. S3 is just the data store; it does not have any transformation capabilities.

*A company has an infrastructure that consists of machines which keep sending log information every 5 minutes. The number of these machines can run into thousands, and it is required to ensure that the data can be analyzed at a later stage. Which of the following would help in fulfilling this requirement?* * Use Kinesis Data Streams with S3 to take the logs and store them in S3 for further processing. * Launch an Elastic Beanstalk application to take over the processing job of the logs. * Launch an EC2 instance with enough EBS volumes to consume the logs, which can be used for further processing. * Use CloudTrail to store all the logs which can be analyzed at a later stage.

*Use Kinesis Data Streams with S3 to take the logs and store them in S3 for further processing* The Kinesis family handles this requirement: Kinesis Data Streams captures the streaming log data, and Amazon Kinesis Data Firehose is the easiest way to load that streaming data into data stores and analytics tools. It can capture, transform, and load streaming data into Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, and Splunk, enabling near real-time analytics with existing business intelligence tools and dashboards you're already using today.
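A minimal producer sketch with boto3, assuming a hypothetical Firehose delivery stream already configured to deliver into S3:
```python
import json
import boto3

firehose = boto3.client("firehose", region_name="us-east-1")

log_line = {"host": "machine-0817", "level": "INFO", "msg": "heartbeat"}

# Firehose buffers records and delivers them to the configured S3 bucket.
firehose.put_record(
    DeliveryStreamName="machine-logs-to-s3",       # hypothetical stream
    Record={"Data": (json.dumps(log_line) + "\n").encode()},
)
```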

*You have the requirement to ingest the data in real time. What product should you choose?* * Upload the data directly to S3 * Use S3 IA * Use S3 reduced redundancy * Use Kinesis Data Streams

*Use Kinesis Data Streams* You can use S3 for storing the data, but if the requirement is to ingest the data in real time, S3 is not the right solution.

*Your company has started hosting their data on AWS by using the Simple Storage Service. They are storing files which are downloaded by users on a frequent basis. After a duration of 3 months, the files need to be transferred to archive storage since they are not used beyond this point. Which of the following could be used to effectively manage this requirement?* * Transfer the files via scripts from S3 to Glacier after a period of 3 months * Use Lifecycle policies to transfer the files onto Glacier after a period of 3 months * Use Lifecycle policies to transfer the files onto Cold HDD after a period of 3 months * Create a snapshot of the files in S3 after a period of 3 months

*Use Lifecycle policies to transfer the files onto Glacier after a period of 3 months.* To manage your objects so they are stored cost-effectively throughout their lifecycle, configure their lifecycle. A *lifecycle configuration* is a set of rules that define actions that Amazon S3 applies to a group of objects. There are two types of actions: *Transition actions* - Define when objects transition to another storage class. *Expiration actions* - Define when objects expire. ----------------------------------- Option A is invalid since there is already the option of lifecycle policies. Option C is invalid since lifecycle policies are used to transfer to Glacier or S3 Infrequent Access (Cold HDD is an EBS volume type, not an S3 storage class). Option D is invalid since snapshots are used for EBS volumes.
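A hedged boto3 sketch of such a rule (bucket name and prefix are hypothetical); objects under the prefix transition to Glacier 90 days after creation:
```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-download-files",  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-after-3-months",
                "Filter": {"Prefix": "files/"},
                "Status": "Enabled",
                # Transition action: move to Glacier after ~3 months.
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```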

*A database is being hosted using the AWS RDS service. The database is to be made into a production database and is required to have high availability. Which of the following can be used to achieve this requirement?* * Use Multi-AZ for the RDS instance to ensure that a secondary database is created in another region. * Use the Read Replica feature to create another instance of the DB in another region. * Use Multi-AZ for the RDS instance to ensure that a secondary database is created in another Availability Zone. * Use the Read Replica feature to create another instance of the DB in another Availability Zone.

*Use Multi-AZ for the RDS instance to ensure that a secondary database is created in another Availability Zone.* Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). ----------------------------------- Option A is incorrect because the Multi-AZ feature allows for high availability across Availability Zones and not regions. Options B and D are incorrect because Read Replicas can be used to offload database reads. But if you want high availability then opt for the Multi-AZ feature.
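For illustration, enabling it at creation time is a single flag. A minimal boto3 sketch with hypothetical identifiers (the password is inline only for brevity; a secrets store is preferable in practice):
```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_instance(
    DBInstanceIdentifier="prod-db",
    Engine="mysql",
    DBInstanceClass="db.m5.large",
    AllocatedStorage=100,             # GiB
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",  # hypothetical placeholder
    MultiAZ=True,                     # synchronous standby in another AZ
)
```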

*You are working as an AWS consultant for a banking institute. They have deployed a digital wallet platform for clients using multiple EC2 instances in the us-east-1 region. The application establishes a secure encrypted connection between the client & the EC2 instance for each transaction using custom TCP port 5810.* *Due to the increasing popularity of this digital wallet, they are observing load on the backend servers resulting in a delay in transactions. For security purposes, all client IP addresses accessing this application should be preserved & logged. The technical team of the banking institute is looking for a solution which will address this delay & the proposed solution should also be compatible with millions of transactions done simultaneously. Which of the following is a recommended option to meet this requirement?* * Use Network Load Balancers with SSL certificate. Configure TLS Listeners on this NLB with custom security policy consisting of protocols & ciphers. * Use Network Load Balancers with SSL certificate. Configure TLS Listeners on this NLB with default security policy consisting of protocols & ciphers. * Use Network Load Balancers with SSL certificate. Configure TLS Listeners on this NLB with default security policy consisting of protocols & TCP port 5810. * Use Network Load Balancers with SSL certificate. Configure TLS Listeners on this NLB with custom security policy consisting of protocols & TCP port 5810.

*Use Network Load Balancers with SSL certificate. Configure TLS Listeners on this NLB with default security policy consisting of protocols & ciphers.* A Network Load Balancer can be used to terminate TLS connections instead of the backend instances, reducing load on those instances. With Network Load Balancers, millions of simultaneous sessions can be established with no impact on latency, along with preserving the client IP address. To negotiate TLS connections with clients, the NLB uses a security policy which consists of protocols & ciphers. ----------------------------------- Option A is incorrect as Network Load Balancers do not support custom security policies. Option C is incorrect as Network Load Balancer security policies should consist of protocols & ciphers, not a TCP port. Option D is incorrect as Network Load Balancers do not support custom security policies, and security policies should comprise protocols & ciphers.

*It is expected that only certain specified customers can upload images to the S3 bucket for a certain period of time. As an Architect what is your suggestion?* * Create a secondary S3 bucket. Then, use an AWS Lambda to sync the contents to primary bucket. * Use Pre-Signed URLs instead to upload the images. * Use ECS Containers to upload the images. * Upload the images to SQS and then push them to S3 bucket.

*Use Pre-Signed URLs instead to upload the images.* This question is basically based on a scenario where we can use a pre-signed URL. You need to understand pre-signed URLs: a pre-signed URL is generated with the security credentials of a user who has permission on a particular resource, such as S3 in this scenario, so that another application can use those credentials to upload data (images) to the S3 bucket. *AWS definition:* A pre-signed URL gives you access to the object identified in the URL, provided that the creator of the pre-signed URL has permissions to access that object. That is, if you receive a pre-signed URL to upload an object, you can upload the object only if the creator of the pre-signed URL has the necessary permission to upload the object. All objects and buckets by default are private. The pre-signed URLs are useful if you want a user/customer to be able to upload a specific object to your bucket, but you don't require them to have AWS security credentials or permissions. When you create a pre-signed URL, you must provide your security credentials and then specify a bucket name, an object key, an HTTP method (PUT for uploading objects), and an expiration date and time. The pre-signed URLs are valid only for the specified duration. ----------------------------------- Option A is incorrect; since Amazon has provided us with an inbuilt feature for this requirement, using this option is expensive and time-consuming. As a Solutions Architect, you are supposed to pick the best and most cost-effective solution. Option C is incorrect; ECS is a highly scalable, fast container management service that makes it easy to run, stop, and manage Docker containers on a cluster. Option D is incorrect; SQS is a message queue service used by distributed applications to exchange messages through a polling model, not through a push mechanism.
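A minimal boto3 sketch of generating a time-limited upload URL (bucket and key are hypothetical); the customer can then PUT the image to this URL without holding any AWS credentials:
```python
import boto3

s3 = boto3.client("s3")

# URL valid for 1 hour; allows a single PUT of the named object.
url = s3.generate_presigned_url(
    ClientMethod="put_object",
    Params={"Bucket": "customer-images", "Key": "uploads/house-photo.jpg"},
    ExpiresIn=3600,
)
print(url)
# The customer uploads with e.g.: curl -X PUT --upload-file photo.jpg "<url>"
```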

*I am running an Oracle database that is very I/O intensive. My database administrator needs a minimum of 3,600 IOPS. If my system is not able to meet that number, my application won't perform optimally. How can I make sure my application always performs optimally?* * Use Elastic File System since it automatically handles the performance * Use Provisioned IOPS SSD to meet the IOPS number * Use your database files in an SSD-based EBS volume and your other files in an HDD-based EBS volume * Use a general-purpose SSD under a terabyte that has a burst capability

*Use Provisioned IOPS SSD to meet the IOPS number* If your workload needs a certain number of IOPS, then the best way is to use Provisioned IOPS SSD. That way, you can ensure the application or the workload always meets the performance metric you are looking for.

*You are designing an architecture on AWS with disaster recovery in mind. Currently the architecture consists of an ELB and underlying EC2 Instances in the primary and secondary region. How can you establish a switchover in case of failure in the primary region?* * Use Route 53 Health Checks and then do a failover. * Use CloudWatch metrics to detect the failure and then do a failover. * Use scripts to scan CloudWatch logs to detect the failure and then do a failover.

*Use Route 53 Health Checks and then do a failover.* If you have multiple resources that perform the same function, you can configure DNS failover so that Route 53 will route your traffic from an unhealthy resource to a healthy resource. For example, if you have two web servers and one web server becomes unhealthy, Route 53 can route traffic to the other web server.

*You have a set of on-premises virtual machines used to serve a web-based application. You need to ensure that a virtual machine, if unhealthy, is taken out of the rotation. Which of the following options can be used for health checking and DNS failover features for a web application running behind ELB, to increase redundancy and availability?* * Use Route 53 health checks to monitor the endpoints. * Move the solution to AWS and use a Classic Load Balancer. * Move the solution to AWS and use an Application Load Balancer. * Move the solution to AWS and use a Network Load Balancer.

*Use Route 53 health checks to monitor the endpoints.* Route 53 health checks can be used for any endpoint that can be accessed via the Internet, hence this would be ideal for monitoring endpoints. You can configure a health check that monitors an endpoint that you specify either by IP address or by domain name. At regular intervals that you specify, Route 53 submits automated requests over the Internet to your application, server, or other resource to verify that it's reachable, available, and functional. *Note:* Once enabled, Route 53 automatically configures and manages health checks for individual ELB nodes. Route 53 also takes advantage of the EC2 instance health checking that ELB performs. By combining the results of health checks of your EC2 instances and your ELBs, Route 53 DNS Failover is able to evaluate the health of the load balancer and the health of the application running on the EC2 instances behind it. In other words, if any part of the stack goes down, Route 53 detects the failure and routes traffic away from the failed endpoint. AWS documentation states that you can create a Route 53 resource record that points to an address outside AWS, and you can fail over to any endpoint that you choose, regardless of location. For example, you may have a legacy application running in a data center outside AWS and a backup instance of that application running within AWS. You can set up health checks of your legacy application running outside AWS, and if the application fails the health checks, you can fail over automatically to the backup instance in AWS. *Note:* Route 53 has health checkers in locations around the world. When you create a health check that monitors an endpoint, health checkers start to send requests to the endpoint that you specify to determine whether the endpoint is healthy. You can choose which locations you want Route 53 to use, and you can specify the interval between checks: every 10 seconds or every 30 seconds. Note that Route 53 health checkers in different data centers don't coordinate with one another, so you'll sometimes see several requests per second regardless of the interval you chose, followed by a few seconds with no health checks at all. Each health checker evaluates the health of the endpoint based on two values: the response time, and whether the endpoint responds to a number of consecutive health checks that you specify (the failure threshold). Route 53 aggregates the data from the health checkers and determines whether the endpoint is healthy: if more than 18% of health checkers report that an endpoint is healthy, Route 53 considers it healthy; if 18% of health checkers or fewer report that an endpoint is healthy, Route 53 considers it unhealthy. The response time that an individual health checker uses to determine whether an endpoint is healthy depends on the type of health check: HTTP and HTTPS health checks, TCP health checks, or HTTP and HTTPS health checks with string matching. Regarding the specific query where we have more than 2 servers for the website, the AWS docs state: when you have more than one resource performing the same function - for example, more than one HTTP server or mail server - you can configure Amazon Route 53 to check the health of your resources and respond to DNS queries using only the healthy resources. For example, suppose your website, example.com, is hosted on six servers, two each in three data centers around the world. You can configure Route 53 to check the health of those servers and to respond to DNS queries for example.com using only the servers that are currently healthy.

*A company website is set to launch in the upcoming weeks. There is a probability that the traffic will be quite high during the initial weeks. In the event of a load failure, how can you set up DNS failover to a static website?* * Duplicate the exact application architecture in another region and configure DNS Weight-based routing. * Enable failover to an application hosted in an on-premises data center. * Use Route 53 with the failover option to failover to a static S3 website bucket or CloudFront distribution. * Add more servers in case the application fails.

*Use Route 53 with the failover option to failover to a static S3 website bucket or CloudFront distribution.* Amazon Route 53 health checks monitor the health and performance of your web applications, web servers, and other resources. If you have multiple resources that perform the same function, you can configure DNS failover so that Amazon Route 53 will route your traffic from an unhealthy resource to a healthy resource. For example, if you have two web servers and one web server becomes unhealthy, Amazon Route 53 can route traffic to the other web server. So you can route traffic to a website hosted on S3 or to a CloudFront distribution.
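A hedged boto3 sketch of the failover pair (zone id, health check id, IP, and the S3 website alias values are hypothetical placeholders):
```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",  # hypothetical hosted zone
    ChangeBatch={"Changes": [
        {   # Primary: the live application, guarded by a health check.
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com", "Type": "A",
                "SetIdentifier": "primary", "Failover": "PRIMARY",
                "HealthCheckId": "hypothetical-health-check-id",
                "TTL": 60,
                "ResourceRecords": [{"Value": "203.0.113.10"}],
            },
        },
        {   # Secondary: static S3 website used when the primary is unhealthy.
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com", "Type": "A",
                "SetIdentifier": "secondary", "Failover": "SECONDARY",
                "AliasTarget": {
                    "HostedZoneId": "Z3AQBSTGFYJSTF",  # S3 website zone, us-east-1
                    "DNSName": "s3-website-us-east-1.amazonaws.com",
                    "EvaluateTargetHealth": False,
                },
            },
        },
    ]},
)
```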

*There is a website hosted in AWS that might get a lot of traffic over the next couple of weeks. If the application experiences a natural disaster at this time, which of the following can be used to reduce potential disruption to users?* * Use an ELB to divert traffic to an infrastructure hosted in another region. * Use an ELB to divert traffic to an Infrastructure hosted in another AZ. * Use CloudFormation to create backup resources in another AZ. * Use Route53 to route to a static web site.

*Use Route53 to route to a static web site.* In a disaster recovery scenario, the best choice out of all the given options is to divert the traffic to a static website. The wording "to reduce potential disruption to users" is pointing to a disaster recovery situation. There is more than one way to manage this situation; however, we need to choose the best option from the list given here. Out of these, the most suitable one is Option D. Most organizations try to implement High Availability (HA) instead of DR to guard against any downtime of services. In the case of HA, we ensure there exists a fallback mechanism for our services. The services that run in HA are handled by hosts running in different availability zones but in the same geographical region. This approach, however, does not guarantee that our business will be up and running in case the entire region goes down. DR takes things to a completely new level, wherein you need to be able to recover from a different region that's separated by over 250 miles. Our DR implementation is an Active/Passive model, meaning that we always have minimum critical services running in different regions, but a major part of the infrastructure is launched and restored only when required. ----------------------------------- Option A is wrong because an ELB can only balance traffic in one region, not across multiple regions. Options B and C are incorrect because using backups across AZs is not enough for disaster recovery purposes. *Note:* Usually, when we discuss a disaster recovery scenario, we assume that the entire region is affected due to some disaster, so the services need to be provided from another region. In that case, setting up a solution in another AZ will not work, as it is in the same region. Option A is incorrect, even though it mentions another region, because ELBs cannot span across regions. So out of the options provided, Option D is the most suitable solution.

*You have a requirement to host a static website for a domain called mycompany.com in AWS. It is required to ensure that the traffic is distributed properly. How can this be achieved?* (Choose 2) * Host the static site on an EC2 Instance. * Use Route53 with static web site in S3. * Enter the Alias records from Route53 in the domain registrar. * Place the EC2 instance behind the ELB.

*Use Route53 with static web site in S3* *Enter the Alias records from Route53 in the domain registrar* You can host a static website in S3. You need to ensure that the nameserver records for the Route53 hosted zone are entered in your domain registrar.

*You have developed a new web application on AWS for a real estate firm. It has a web interface where real estate employees upload photos of newly constructed houses to S3 buckets. Prospective buyers log in to the website & access the photos. The Marketing Team has initiated an intensive marketing event to promote new housing schemes, which will lead to buyers frequently accessing these images. As this is a new application, you have no projection of traffic. You have created Auto Scaling across multiple instance types for these web servers, but you also need to optimize the cost of storage. You don't want to compromise on latency & all images should be downloaded instantaneously without any outage. Which of the following is the recommended storage solution to meet this requirement?* * Use the One Zone-IA storage class to store all images. * Use Standard-IA to store all images. * Use the S3 Intelligent-Tiering storage class. * Use the Standard storage class, and use storage class analytics to identify & move objects using lifecycle policies.

*Use S3 Intelligent-Tiering storage class.* When the access pattern to a web application using S3 storage buckets is unpredictable, you can use the S3 Intelligent-Tiering storage class. The S3 Intelligent-Tiering storage class includes two access tiers: frequent access and infrequent access. Based upon access patterns, it moves data between these tiers, which helps in cost saving. The S3 Intelligent-Tiering storage class has the same performance as the Standard storage class. ----------------------------------- Option A is incorrect: although it will save cost, it will not provide any protection in case of AZ failure. Also, this class is suitable for infrequently accessed data & not for frequently accessed data. Option B is incorrect as the Standard-IA storage class is for infrequently accessed data & there are retrieval charges associated with it. In the above requirement, you do not have any projections of data access, which may result in higher costs. Option D is incorrect as it has the operational overhead of setting up storage class analytics & moving objects between various classes. Also, since the access pattern is undetermined, this will turn out to be a costlier option.
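
As a minimal sketch, uploading an object directly into the Intelligent-Tiering class is a one-line change to the upload call; the bucket, key, and file names here are hypothetical.

```python
import boto3

s3 = boto3.client("s3")

# Upload a new image directly into Intelligent-Tiering
# (bucket, key, and local file names are hypothetical placeholders).
with open("house-101.jpg", "rb") as image:
    s3.put_object(
        Bucket="realestate-images",
        Key="listings/house-101.jpg",
        Body=image,
        StorageClass="INTELLIGENT_TIERING",
    )
```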

*A database hosted in AWS is currently encountering an extended number of write operations and is not able to handle the load. What can be done to the architecture to ensure that the write operations are not lost under any circumstances?* * Add more IOPS to the existing EBS Volume used by the database. * Consider using DynamoDB instead of AWS RDS. * Use SQS FIFO to queue the database writes. * Use SNS to send notifications on missed database writes and then add them manually.

*Use SQS FIFO to queue the database writes.* SQS Queues can be used to store the pending database writes, and these writes can then be added to the database. It is the perfect queuing system for such an architecture. Note that adding more IOPS may help the situation but will not totally eliminate the chances of losing database writes. *Note:* The scenario in the question is that the database is unable to handle the write operations, and the requirement is that we need to perform the data writes on the database without losing any data. For this requirement, we can use an SQS queue to store the pending write requests, which will ensure the delivery of these messages. Increasing the IOPS can handle the traffic a bit more efficiently, but it has a limit of 40,000 IOPS, whereas SQS queues can handle 120,000 in-flight messages.
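
A minimal boto3 sketch of the producer side, assuming a hypothetical queue name and message payload; a FIFO queue preserves ordering per message group and de-duplicates, which matters for database writes.

```python
import boto3

sqs = boto3.client("sqs")

# FIFO queues must end in ".fifo"; the queue name is a hypothetical placeholder.
queue = sqs.create_queue(
    QueueName="db-writes.fifo",
    Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "true"},
)

# Each pending database write becomes a message; a worker later drains the
# queue and applies the writes when the database can accept them.
sqs.send_message(
    QueueUrl=queue["QueueUrl"],
    MessageBody='{"table": "orders", "op": "INSERT", "id": 42}',
    MessageGroupId="orders",  # messages in a group are processed in order
)
```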

*You have deployed an e-commerce application on EC2 servers. Sometimes the traffic becomes unpredictable when a large number of users log in to your web site at the same time. To handle this amount of traffic, you are using Auto Scaling and Elastic Load Balancing along with EC2 servers. Despite that, sometimes you see that Auto Scaling is not able to quickly spin up additional servers. As a result, there is performance degradation, and your application is not able to capture all the orders properly. What can you do so that you don't lose any orders?* * Increase the size of the EC2 instance. * Use SQS to decouple the ordering process. Keep the new orders in the queue and process them only when the new EC2 instance is available. * Increase the limit of EC2 instances. * Double the number of EC2 instances over what you are using today.

*Use SQS to decouple the ordering process. Keep the new orders in the queue and process them only when the new EC2 instance is available.* In this scenario, the main problem is that you are losing orders since the system is not able to process them in time. By using SQS and decoupling the order process, you won't lose any orders. ----------------------------------- Increasing the size of the EC2 instance or doubling the number of EC2 instances to begin with won't help because the number of users is unpredictable. The EC2 instance limit is not the problem here; the issue is that Auto Scaling is not able to quickly spin up additional servers.

*You plan on hosting an application on EC2 Instances which will be used to process logs. The application is not very critical and can resume operation even after an interruption. Which of the following steps can help provide a cost-effective solution?* * Use Reserved Instances for the underlying EC2 Instances. * Use Provisioned IOPS for the underlying EBS Volumes. * Use Spot Instances for the underlying EC2 Instances. * Use S3 as the underlying data layer.

*Use Spot Instances for the underlying EC2 Instances* Spot Instances are a cost-effective choice if you can be flexible about when your applications run and if your applications can be interrupted. For example, Spot Instances are well-suited for data analysis, batch jobs, background processing, and optional tasks.
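
As a sketch, requesting Spot capacity is a launch-time option on `run_instances`; the AMI ID and instance type below are hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Launch the log-processing workers as one-time Spot Instances
# (the AMI ID and instance type are hypothetical placeholders).
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=4,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {"SpotInstanceType": "one-time"},
    },
)
```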

*An application needs to access data in another AWS account in another VPC in the same region. Which of the following can be used to ensure that the data can be accessed as required?* * Establish a NAT instance between both accounts. * Use a VPN between both accounts. * Use a NAT Gateway between both accounts. * Use VPC Peering between both accounts.

*Use VPC Peering between both accounts.* A VPC Peering connection is a networking connection between two VPCs that enables you to route traffic between them privately. Instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC Peering connection between your own VPCs, with a VPC in another AWS account, or with a VPC in a different AWS Region. ----------------------------------- Options A and C are incorrect because these are used when private resources are required to access the Internet. Option B is incorrect because it's used to create a connection between on-premises and AWS resources.
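
A minimal boto3 sketch of the requester side, assuming hypothetical VPC, account, and route table IDs; the accepter account must still accept the request, and both sides add routes via the peering connection.

```python
import boto3

ec2 = boto3.client("ec2")

# Request a peering connection from our VPC to the other account's VPC
# (all IDs below are hypothetical placeholders).
peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-11111111",
    PeerVpcId="vpc-22222222",
    PeerOwnerId="123456789012",  # the other AWS account
)

# After the peer accepts, each side routes the other's CIDR over the peering.
ec2.create_route(
    RouteTableId="rtb-33333333",
    DestinationCidrBlock="10.20.0.0/16",  # the peer VPC's CIDR
    VpcPeeringConnectionId=peering["VpcPeeringConnection"]["VpcPeeringConnectionId"],
)
```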

*A company has a set of resources hosted in an AWS VPC. Having acquired another company with its own set of resources hosted in AWS, it is required to ensure that resources in the VPC of the parent company can access the resources in the VPC of the child company. How can this be accomplished?* * Establish a NAT Instance to establish communication across VPCs. * Establish a NAT Gateway to establish communication across VPCs. * Use a VPN Connection to peer both VPCs. * Use VPC Peering to peer both VPCs.

*Use VPC Peering to peer both VPCs.* A VPC Peering Connection is a networking connection between two VPCs that enables you to route traffic between them privately. Instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC Peering Connection between your own VPCs, with a VPC in another AWS account, or with a VPC in a different AWS region. ----------------------------------- NAT Instance, NAT Gateway and VPN do not allow for VPC-VPC connectivity.

*You have created multiple VPCs in AWS for different business units. Each business unit is running its resources within its own VPC. The data needs to be moved from one VPC to the other. How do you securely connect two VPCs?* * Use a VPN connection. * Use Direct Connect. * Use VPC peering. * Nothing needs to be done. Since both VPCs are inside the same account, they can talk to each other.

*Use VPC peering.* Using VPC peering, you can securely connect two VPCs. ------------------------- VPN and Direct Connect are used to connect an on-premise data center with AWS. Even if the VPCs are part of the same account, they can't talk to each other unless peered.

*You are running your web site behind a fleet of EC2 servers. You have designed your architecture to leverage multiple AZs, and thus your fleet of EC2 servers is spread across different AZs. You are using EFS to provide file system access to the EC2 servers and to store all the data. You have integrated an application load balancer with the EC2 instances, and whenever someone logs in to your web site, the application load balancer redirects the traffic to one of the EC2 servers. You deliver lots of photos and videos via your web site, and each time a user requests a photo or video, it is served via the EC2 instance. You are thinking of providing a faster turnaround time to end users and want to improve the user experience. How can you improve the existing architecture?* * Move all the photos and videos to Amazon S3. * Use a CloudFront distribution to cache all the photos and videos. * Move all the photos and videos to Amazon Glacier. * Add more EC2 instances to the fleet.

*Use a CloudFront distribution to cache all the photos and videos.* By leveraging Amazon CloudFront, you can cache all the photos and videos close to the end user, which will provide a better user experience. ------------------------- Since you are looking for faster performance and a better user experience, moving the files to Amazon S3 won't help. Amazon Glacier is the archiving solution and can't be used for storing files that are required in real time. If you add more EC2 servers, it can handle more concurrent users on the web site, but it won't necessarily provide the performance boost you are looking for.

*You have instances hosted in a private subnet in a VPC. There is a need for the instances to download updates from the Internet. As an architect, what change would you suggest to the IT Operations team which would also be the most efficient and secure?* * Create a new public subnet and move the instance to that subnet. * Create a new EC2 Instance to download the updates separately and then push them to the required instance. * Use a NAT Gateway to allow the instances in the private subnet to download the updates. * Create a VPC link to the Internet to allow the instances in the private subnet to download the updates.

*Use a NAT Gateway to allow the instances in the private subnet to download the updates.* The NAT Gateway is an ideal option to ensure that instances in the private subnet have the ability to download updates from the Internet. ----------------------------------- Option A is not suitable because there may be a security reason for keeping these instances in the private subnet. (for example: db instances) Option B is also incorrect. The instances in the private subnet may be running various applications and db instances. Hence, it is not advisable or practical for an EC2 Instance to download the updates separately and then push them to the required instance. Option D is incorrect because a VPC link is not used to connect to the Internet.
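
For illustration, a NAT Gateway is created in a public subnet with an Elastic IP, and the private subnet's route table sends Internet-bound traffic to it; all IDs below are hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# A NAT Gateway lives in a PUBLIC subnet and needs an Elastic IP
# (subnet and route table IDs are hypothetical placeholders).
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0aaaa1111bbbb2222", AllocationId=eip["AllocationId"]
)

# The private subnet's route table sends Internet-bound traffic to it.
ec2.create_route(
    RouteTableId="rtb-0cccc3333dddd4444",
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat["NatGateway"]["NatGatewayId"],
)
```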

*Your infrastructure in AWS currently consists of a private and a public subnet. The private subnet consists of database servers and the public subnet has a NAT Instance which helps the instances in the private subnet to communicate with the Internet. The NAT Instance is now becoming a bottleneck. Which of the following changes to the current architecture can help prevent this issue from occurring in the future?* * Use a NAT Gateway instead of the NAT Instance. * Use another Internet Gateway for better bandwidth. * Use a VPC connection for better bandwidth. * Consider changing the instance type for the underlying NAT Instance.

*Use a NAT Gateway instead of the NAT Instance.* The NAT Gateway is a managed resource which can be used in place of a NAT Instance. While you can consider changing the instance type for the underlying NAT Instance, this does not guarantee that the issue will not re-occur in the future.

*You are deploying a three-tier architecture in AWS. The web servers are going to reside in a private subnet, and the database and application servers are going to reside in a public subnet. You have chosen two AZs for high availability; thus, you are going to have two web servers, one in each AZ; two application servers, one in each AZ; and an RDS database in master-standby mode where the standby database is running in a different AZ. In addition, you are using a NAT instance so that the application server and the database server can connect to the Internet if needed. You have two load balancers: one external load balancer connected to the web servers and one internal load balancer connected to the application servers. What can you do to eliminate the single point of failure in this architecture?* * Use two internal load balancers * Use two external load balancers * Use a NAT gateway * Use three AZs in this architecture

*Use a NAT gateway* In this scenario, only the NAT instance is a single point of failure; if it goes down, then the application server and the database server won't be able to connect to the Internet. For high availability, you can also launch NAT instances in each AZ. Using a NAT gateway is preferred over using multiple NAT instances since it is a managed service and it scales on its own. When using a NAT gateway, there is no single point of failure. ------------------------- The internal and external load balancers are not single points of failure, and since you are already using two different AZs in your current architecture, you don't have a single point of failure there today.

*You are running a Redshift cluster in the Virginia region. You need to create another Redshift cluster in the California region for DR purposes. How do you quickly create the Redshift DR cluster?* * Export the data to AWS S3, enable cross-regional replication to S3, and import the data to the Redshift cluster in California. * Use AWS DMS to replicate the Redshift cluster from one region to the other. * Use a cross-region snapshot to create the DR cluster. * Extend the existing Redshift cluster to the California region.

*Use a cross-region snapshot to create the DR cluster.* Redshift provides the ability to create a cross-region snapshot. You can leverage it for creating the DR cluster in a different region. ------------------------- Exporting the data to S3 and then reimporting it to a Redshift cluster is going to add a lot of manual overhead. If the functionality is available natively via Redshift, then why do you even look at DMS? Redshift clusters are specific to an AZ; you can't extend a Redshift cluster beyond one AZ.
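
A minimal boto3 sketch of the idea, assuming hypothetical cluster and snapshot identifiers: enable cross-region snapshot copy on the primary cluster, then restore in the DR region when needed.

```python
import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

# Automatically copy snapshots of the Virginia cluster to California
# (the cluster identifier is a hypothetical placeholder).
redshift.enable_snapshot_copy(
    ClusterIdentifier="prod-cluster",
    DestinationRegion="us-west-1",
    RetentionPeriod=7,  # keep copied snapshots for 7 days
)

# In a DR event, restore a cluster in California from a copied snapshot
# (the snapshot identifier is a hypothetical placeholder).
dr = boto3.client("redshift", region_name="us-west-1")
dr.restore_from_cluster_snapshot(
    ClusterIdentifier="dr-cluster",
    SnapshotIdentifier="copied-snapshot-id",
)
```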

*You have a compliance requirement that you should own the entire physical hardware and no other customer should run any other instance on the physical hardware. What option should you choose?* * Put the hardware inside the VPC so that no other customer can use it * Use a dedicated instance * Reserve the EC2 for one year * Reserve the EC2 for three years

*Use a dedicated instance* You can create the instance inside a VPC, but that does not mean other customers can't create any other instance on the physical hardware. Creating a dedicated instance is going to provide exactly what you are looking for. Reserving the EC2 instance for one or three years won't help unless you reserve it as a dedicated instance.

*A company is planning on a Facebook-type application where users will upload videos and images. These are going to be stored in S3. There is a concern that there could be an overwhelming number of read and write requests on the S3 bucket. Which of the following could be an implementation step to help ensure optimal performance on the underlying S3 storage bucket?* * Use a sequential ID for the prefix * Use a hexadecimal hash for the prefix * Use a hexadecimal hash for the suffix * Use a sequential ID for the suffix

*Use a hexadecimal hash for the prefix* This recommendation for increasing performance if you have a high request rate in S3 is given in the AWS documentation. *Note:* First of all, the question doesn't mention the request rate of reads and writes (an exact number), and AWS mentions in the same document: "Applications running on Amazon S3 today will enjoy this performance improvement with no changes, and customers building new applications on S3 do not have to make any application customizations to achieve this performance. Amazon S3's support for parallel requests means you can scale your S3 performance by the factor of your compute cluster, without making any customizations to your application. *Performance scales per prefix*, so you can use as many prefixes as you need in parallel to achieve the required throughput. There are no limits to the *number of prefixes*."

*A million images are required to be uploaded to S3. What option ensures optimal performance in this case?* * Use a sequential ID for the prefix. * Use a hexadecimal hash for the prefix. * Use a hexadecimal hash for the suffix. * Use a sequential ID for the suffix.

*Use a hexadecimal hash for the prefix.* Amazon S3 maintains an index of object key names in each AWS region. Object keys are stored in UTF-8 binary ordering across multiple partitions in the index. The key name determines which partition the key is stored in. Using a sequential prefix, such as a timestamp or an alphabetical sequence, increases the likelihood that Amazon S3 will target a specific partition for a large number of your keys, which can overwhelm the I/O capacity of the partition. If your workload is a mix of request types, introduce some randomness to key names by adding a hash string as a prefix to the key name. By introducing randomness to your key names, the I/O load is distributed across multiple index partitions. For example, you can compute an MD5 hash of the character sequence that you plan to assign as the key, and add three or four characters from the hash as a prefix to the key name.
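
A small sketch of the MD5-prefix technique described above (this is the older S3 key-naming guidance; as the note on the previous card says, newer S3 scales per prefix without it). The key name is a hypothetical example.

```python
import hashlib

def hashed_key(original_key: str) -> str:
    """Prepend a short MD5-derived hash so keys spread across index partitions."""
    prefix = hashlib.md5(original_key.encode()).hexdigest()[:4]
    return f"{prefix}/{original_key}"

# Produces something like '8a3f/2017-06-01-photo-0001.jpg' (hash value illustrative).
print(hashed_key("2017-06-01-photo-0001.jpg"))
```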

*You are creating an internal social network web site for all your employees via which they can collaborate, share videos and photos, and store documents. You are planning to store everything in S3. Your company has more than 100,000 employees today, so you are concerned that there might be a performance problem with S3 due to a large number of read and write requests in the S3 bucket based on the number of employees. What can you do to ensure optimal performance in S3 bucket?* * Use a hexadecimal hash string for the suffix. * Use a sequential ID for the prefix. * Use a sequential ID for the suffix. * Use a hexadecimal hash string for the prefix.

*Use a hexadecimal hash string for the prefix* Using a hexadecimal hash string as a prefix is going to introduce randomness to the key name, which is going to provide maximum performance benefits. ------------------------- Using a sequential prefix, such as a timestamp or an alphabetical sequence, increases the likelihood that Amazon S3 will target a specific partition for a large number of your keys, overwhelming the I/O capacity of the partition.

*You are using an m1.small EC2 Instance with one 300GB EBS General Purpose SSD volume to host a relational database. You determined that the write throughput to the database needs to be increased. Which of the following approaches can help you achieve this?* (Choose 2) * Use a larger EC2 Instance * Enable the Multi-AZ feature for the database * Consider using Provisioned IOPS Volumes. * Put the database behind an Elastic Load Balancer

*Use a larger EC2 Instance* *Consider using Provisioned IOPS Volumes.* Provisioned IOPS volumes are the highest-performance SSD volumes, designed for latency-sensitive transactional workloads. ----------------------------------- Option B is incorrect since the Multi-AZ feature is only for high availability. Option D is incorrect since this would not alleviate the high number of writes to the database.

*What is the best way to get better performance for storing several files in S3?* * Create a separate folder for each file * Create separate buckets in different regions * Use a partitioning strategy for storing the files * Store a maximum of 100 files per bucket

*Use a partitioning strategy for storing the files* Creating a separate folder does not improve performance. What if you need to store millions of files in these separate folders? Similarly, creating separate buckets in different regions does not improve the performance. There is no such rule of storing 100 files per bucket.

*A company wants to host a web application and a database layer in AWS. This will be done with the use of subnets in a VPC. Which of the following is a proper architectural design for supporting the required tiers of the application?* * Use a public subnet for the web tier and a public subnet for the database layer. * Use a public subnet for the web tier and a private subnet for the database layer. * Use a private subnet for the web tier and a private subnet for the database layer. * Use a private subnet for the web tier and a public subnet for the database layer.

*Use a public subnet for the web tier and a private subnet for the database layer.* The ideal setup is to ensure that the web server is hosted in the public subnet so that it can be accessed by users on the internet. The database server can be hosted in the private subnet.

*You have developed an application, and it is running on EC2 servers. The application needs to run 24/7 throughout the year. The application is critical for business; therefore, the performance can't slow down. At the same time, you are looking for the best possible way to optimize your costs. What should be your approach?* * Use EC2 via the on-demand functionality, and shut down the EC2 instance at night when no one is using it. * Use a reserved instance for one year. * Use EC2 via the on-demand functionality, and shut down the EC2 instance on the weekends. * Use a spot instance to get maximum pricing advantage.

*Use a reserved instance for one year.* Since you know that the application is going to run 24/7 throughout the year, you should choose a reserved instance, which will provide you with the lowest cost. ------------------------- Since the business needs to run 24/7, you can't shut down at night or on the weekend. You can't use a spot instance because if someone overbids you, the instance will be taken away.

*You want to use S3 for the distribution of your software, but you want only authorized users to download the software. What is the best way to achieve this?* * Encrypt the S3 bucket. * Use a signed URL. * Restrict the access via CloudFront. * Use the IAM role to restrict the access.

*Use a signed URL* When you move your static content to an S3 bucket, you can protect it from unauthorized access via CloudFront signed URLs. A signed URL includes additional information, for example, an expiration date and time, that gives you more control over access to your content. This is how the signed URL works. The web server obtains temporary credentials to the S3 content. It creates a signed URL based on those credentials that allows access. It provides this link in the content returned (signed URL) to the client, and this link is valid for a limited period of time. ------------------------- Encryption is different from access. You can't restrict user-level access via CloudFront. You can't do this via an IAM role.
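
A minimal sketch of generating a CloudFront signed URL with botocore's signer; the key-pair ID, private key path, and distribution domain are hypothetical placeholders, and the `cryptography` package is assumed to be installed.

```python
from datetime import datetime, timedelta
from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def rsa_signer(message):
    # The private key pairs with a CloudFront key pair / public key
    # (the path is a hypothetical placeholder).
    with open("cloudfront_private_key.pem", "rb") as f:
        key = serialization.load_pem_private_key(f.read(), password=None)
    return key.sign(message, padding.PKCS1v15(), hashes.SHA1())

# Key-pair ID and distribution domain are hypothetical placeholders.
signer = CloudFrontSigner("APKAEXAMPLEKEYID", rsa_signer)
url = signer.generate_presigned_url(
    "https://d1234example.cloudfront.net/software/installer.zip",
    date_less_than=datetime.utcnow() + timedelta(hours=1),  # link expires in 1 hour
)
print(url)
```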

*You are running your mission-critical production database using a multi-AZ architecture in Amazon RDS with PIOPS. You are going to set up a separate database for developers to build and test some new functionality. What can you do to minimize the cost?* (Choose two.) * Use a single AZ. * Use multiple AZs. * Don't use PIOPS. * Create a read replica and give the developers the read replica.

*Use a single AZ.* *Don't use PIOPS.* Since you need the database for development purposes, you don't need built-in high availability and provisioned IOPS. Therefore, you can skip multi-AZs and PIOPS. ----------------------------------- Using multiple AZs is going to increase your costs. A read replica won't help since the read replica remains in read-only mode. The developer needs the database in read-write mode.

*You are running some tests for your homegrown application. Once the testing is done, you will have a better idea of the type of server needed to host the application. Each test case runs typically for 45 minutes and can be restarted if the server goes down. What is the cost-optimized way of running these tests?* * Use a spot instance for running the test. * Use an on-demand instance with a magnetic drive for running the test. * Use an on-demand instance EC2 instance with PIOPS for running the test. * Use an on-demand instance with an EBS volume for running the test.

*Use a spot instance for running the test.* You are looking to optimize the cost, and a spot instance is going to provide the maximum cost benefit. The question also says that each test case runs for 45 minutes and can be restarted if it fails, which again makes this a great candidate for a spot instance. ------------------------- If you use an on-demand instance with a magnetic drive or EBS or PIOPS for running the test, it is going to cost more.

*One of the leading entertainment channels is hosting an audition for a popular reality show in India. They have published an advertisement asking participants to upload images meeting certain criteria to their website, which is hosted on AWS infrastructure. The business requirement of the channel states that the participants with green card entry should be given priority, and results for them will be released first. However, results for the rest of the users will be released at the channel's convenience. Being a popular reality show, the number of requests coming to the website will increase before the deadline; therefore, the solution needs to be scalable and cost effective. Also, the failure of any layer should not affect the other layers in a multitier environment in the AWS infrastructure. The technical management has given you a few guidelines about the architecture: they want a web component through which participants upload the images to an S3 bucket, while a second component will process these images and store them back to the S3 bucket, making entries to the database storage. As a solutions architect for the entertainment channel, how would you design the solution, considering that the priority for the participants is maintained and data is processed and stored as per the requirement?* * Use a web component to get the images and store them on an S3 bucket. Have the SQS service read these images with two SQS queues, a green card entry queue and a non-green card entry queue; EC2 instances with an Auto Scaling group will poll these queues, process the images based on the priority requirement, and store them to another S3 bucket, making an entry to an Amazon RDS database. * Use a web component to get the images and store them on an S3 bucket. Have the SQS service read these images with two SQS queues, a green card entry queue and a non-green card entry queue; EC2 instances with an Auto Scaling group will poll these queues, process the images based on the priority requirement, and store them to another S3 bucket, making an entry to an Amazon Redshift database. * Use a web component to get the images and store them on an S3 bucket. Have the SQS service read these images with two SQS queues, both non-green card entry queues; a fleet of EC2 instances with an Auto Scaling group will poll these queues based on priority flags, process the images based on the priority requirement, and store them to another S3 bucket, making an entry to a DynamoDB database. * Use a web component to get the images and store them in an S3 bucket. Have the SQS service read these images with two SQS queues, one priority and another standard queue. A fleet of EC2 instances with an Auto Scaling group will poll these queues, process the images based on the priority requirement, and store them to another S3 bucket, making an entry to an Amazon DynamoDB database.

*Use a web component to get the images and store them in an S3 bucket. Have the SQS service read these images with two SQS queues, one priority and another standard queue. A fleet of EC2 instances with an Auto Scaling group will poll these queues, process the images based on the priority requirement, and store them to another S3 bucket, making an entry to an Amazon DynamoDB database.* Option D is correct, as it suits all the requirements mentioned to make the solution decoupled. Even if the web component fails, the processing component will always be up, processing these images by reading from the SQS queues: the priority queue for the green card participants and the standard queue for the general participants. The fleet of EC2 instances with Auto Scaling will be able to take up the load during peak time, and the data will be stored in another S3 bucket, making an entry to a DynamoDB table. ----------------------------------- Option A is incorrect: though the solution provides decoupling, the final metadata updates should not go into Amazon RDS, as it holds transactional data. Option B is incorrect: though the solution provides the decoupling, and the web component and processing part look good, the final metadata entries cannot be made to Amazon Redshift, which is an analytics database that works on OLAP. Option C is incorrect: the solution works well with the web component; however, the SQS queues used are both standard queues. Though these standard queues have the ability to process the data separately based on green card entry and normal participants, they do not ensure priority, as both queues will be read simultaneously; hence, this will not serve the needed requirement. However, storing metadata to a DynamoDB table will work fine.

*Your application needs a shared file system that can be accessed from multiple EC2 instances across different AZs. How would you provision it?* * Mount the EBS volume across multiple EC2 instances * Use an EFS instance and mount the EFS across multiple EC2 instances across multiple AZs * Access S3 from multiple EC2 instances * Use EBS with Provisioned IOPS

*Use an EFS instance and mount the EFS across multiple EC2 instances across multiple AZs* Use an EFS. The same EBS volume can't be mounted across multiple EC2 instances.

*Your developers are in the process of creating a new application for your business unit. The developers work only on weekdays. To save costs, you shut down the web server (EC2 server) on the weekend and start it again on Monday. Every Monday the developers face issues while connecting to the web server. The client via which they connect to the web server stores the IP address. Since the IP address changes every week, they need to reconfigure it. What can you do to fix the problem for the developers? Since your main intention is saving money, you can't run the EC2 servers over the weekend.* * Use an EIP address with the web server. * Use an IPv6 address with the web server. * Use an ENI with the web server. * Create the web server in the private subnet.

*Use an EIP address with the web server.* When you assign an EIP, you get a static IP address. The developers can use this EIP to configure their client. ------------------------- An IPv6 address is also dynamic, and you don't have control over it. An ENI is a network interface and does not by itself give you a static IP address. Creating the web server in a private subnet doesn't help, since the problem of the changing IP address remains.
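
As a minimal sketch, allocating and attaching an EIP is two calls; the instance ID below is a hypothetical placeholder.

```python
import boto3

ec2 = boto3.client("ec2")

# Allocate a static Elastic IP and attach it to the web server
# (the instance ID is a hypothetical placeholder).
eip = ec2.allocate_address(Domain="vpc")
ec2.associate_address(
    InstanceId="i-0123456789abcdef0",
    AllocationId=eip["AllocationId"],
)
# The EIP survives stop/start cycles, so the developers' client
# configuration keeps working every Monday.
```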

*You are running a couple of social media sites in AWS, and they are using databases hosted in multiple AZs via RDS MySQL. With the expansion, your users have started seeing degraded performance mainly with the database reads. What can you do to make sure you get the required performance?* (Choose two.) * Use an ElastiCache in-memory cache in each AZ hosting the database. * Create a read replica of RDS MySQL to offload read-only traffic. * Migrate the database to the largest size box available in RDS. * Migrate RDS MySQL to an EC2 server.

*Use an ElastiCache in-memory cache in each AZ hosting the database* *Create a read replica of RDS MySQL to offload read-only traffic* Creating ElastiCache and a read replica is going to give you an additional performance boost for the workload. ------------------------- Since the bottleneck is the read-only traffic, adding a read replica or in-memory cache will solve the problem. If you have migrated the database to the largest possible box available in RDS but the problem occurs again, what do you do? In that case, you should add a read replica or in-memory cache. If you are running a database in RDS, you should not move it to EC2, since you get a lot of operational benefits simply by hosting your database in RDS.
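
For illustration, creating a read replica is a single boto3 call; the instance identifiers are hypothetical placeholders.

```python
import boto3

rds = boto3.client("rds")

# Create a read replica to offload read-only traffic
# (instance identifiers are hypothetical placeholders).
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="socialdb-replica-1",
    SourceDBInstanceIdentifier="socialdb",
)
```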

*What is the best way to delete multiple objects from S3?* * Delete the files manually using a console * Use multi-object delete * Create a policy to delete multiple files * Delete all the S3 buckets to delete the files

*Use multi-object delete* Manually deleting the files from the console is going to take a lot of time. You can't create a policy to delete multiple files. Deleting buckets in order to delete files is not a recommended option. What if you need some files from the bucket?
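
A minimal sketch of multi-object delete; the bucket and keys are hypothetical placeholders, and a single request can remove up to 1,000 objects.

```python
import boto3

s3 = boto3.client("s3")

keys = ["logs/a.txt", "logs/b.txt", "logs/c.txt"]  # hypothetical keys

# One DeleteObjects request can delete up to 1,000 objects.
s3.delete_objects(
    Bucket="my-bucket",
    Delete={"Objects": [{"Key": k} for k in keys], "Quiet": True},
)
```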

*A company owns an API which currently gets 1000 requests per second. The company wants to host this in a cost-effective manner using AWS. Which of the following solutions is best suited for this?* * Use API Gateway with the backend services as it is. * Use the API Gateway along with AWS Lambda. * Use CloudFront along with the API backend service as it is. * Use ElastiCache along with the API backend service as it is.

*Use the API Gateway along with AWS Lambda.* Since the company has full ownership of the API, the best solution would be to convert the code for the API and use it in a Lambda function. This can help save on costs, since in the case of Lambda you only pay for the time the function runs, and not for the infrastructure. Then, you can use the API Gateway with the AWS Lambda function to scale accordingly. *Note:* With Lambda you do not have to provision your own instances. Lambda performs all the operational and administrative activities on your behalf, including capacity provisioning, monitoring fleet health, applying security patches to the underlying compute resources, deploying your code, running a web service front end, and monitoring and logging your code. AWS Lambda provides easy scaling and high availability to your code without additional effort on your part.

*Your company wants to enable encryption of services such as S3 and EBS volumes so that the data it maintains is encrypted at rest. They want to have complete control over the keys and the entire lifecycle around the keys. How can you accomplish this?* * Use the AWS CloudHSM * Use the KMS service * Enable S3 server-side encryption * Enable EBS Encryption with the default KMS keys

*Use the AWS CloudHSM* AWS CloudHSM is a cloud-based hardware security module (HSM) that enables you to easily generate and use your own encryption keys on the AWS Cloud. With CloudHSM, you can manage your own encryption keys using FIPS 140-2 Level 3 validated HSMs. ----------------------------------- Options B, C, and D are incorrect since there the keys are maintained by AWS.

*A company needs to extend their storage infrastructure to the AWS Cloud. The storage needs to be available as iSCSI devices for on-premises application servers. Which of the following would be able to fulfill this requirement?* * Create a Glacier vault. Use a Glacier Connector and mount it as an iSCSI device. * Create an S3 bucket. Use S3 Connector and mount it as an iSCSI device. * Use the EFS file service and mount the different file systems to the on-premises servers. * Use the AWS Storage Gateway-cached volumes service.

*Use the AWS Storage Gateway-cached volume service.* By using cached volumes, you can use Amazon S3 as your primary data storage, while retaining frequently accessed data locally in your storage gateway. Cached volumes minimize the need to scale your on-premises storage infrastructure, while still providing your applications with low-latency access to their frequently accessed data. You can create storage volumes up to 32 TiB in size and attach to them as iSCSI devices from your on-premises application servers. Your gateway stores data that you write to these volumes in Amazon S3 and retains recently read data in your on-premises storage gateway's cache and upload buffer storage.

*A company is planning to use the AWS ECS service to work with containers. There is a need for the least amount of administrative overhead while launching containers. How can this be achieved?* * Use the Fargate launch type in AWS ECS. * Use the EC2 launch type in AWS ECS. * Use the Auto Scaling launch type in AWS ECS. * Use the ELB launch type in AWS ECS.

*Use the Fargate launch type in AWS ECS.* The Fargate launch type allows you to run your containerized applications without the need to provision and manage the backend infrastructure. Just register your task definition and Fargate launches the container for you.
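
As a sketch, running a task on Fargate means supplying only the task definition and networking; the cluster name, task definition, and subnet ID below are hypothetical placeholders.

```python
import boto3

ecs = boto3.client("ecs")

# With the Fargate launch type there are no EC2 instances to manage;
# you only supply the task definition and networking
# (cluster, task definition, and subnet IDs are hypothetical placeholders).
ecs.run_task(
    cluster="my-cluster",
    taskDefinition="my-app:1",
    launchType="FARGATE",
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "assignPublicIp": "ENABLED",
        }
    },
)
```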

*Your company has a web application hosted in AWS that makes use of an Application Load Balancer. You need to ensure that the web application is protected from web-based attacks such as cross-site scripting, etc. Which of the following implementation steps can help protect web applications from common security threats from the outside world?* * Place a NAT instance in front of the web application to protect against attacks. * Use the WAF service in front of the web application. * Place a NAT gateway in front of the web application to protect against attacks. * Place the web application in front of a CDN service instead.

*Use the WAF service in front of the web application.* AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. AWS WAF gives you control over which traffic to allow or block to your web applications by defining customizable web security rules. You can use AWS WAF to create custom rules that block common attack patterns, such as SQL injection or cross-site scripting, and rules that are designed for your specific application. ----------------------------------- Options A and C are incorrect because these are used to allow instances in your private subnet to communicate with the internet. Option D is incorrect since this is ideal for content distribution and good when you have DDoS attacks, but the WAF should be used for concentrated types of web attacks.

*You are running a MySQL database in RDS, and you have been tasked with creating a disaster recovery architecture. What approach is easiest for creating the DR instance in a different region?* * Create an EC2 server in a different region and constantly replicate the database over there. * Create an RDS database in the other region and use third-party software to replicate the data across the database. * While installing the database, use multiple regions. This way, your database gets installed into multiple regions directly. * Use the cross-regional replication functionality of RDS. This will quickly spin off a read replica in a different region that can be used for disaster recovery.

*Use the cross-regional replication functionality of RDS. This will quickly spin off a read replica in a different region that can be used for disaster recovery.* You can achieve this by creating an EC2 server in a different region and replicating, but when your primary site is running on RDS, why not use RDS for the secondary site as well? You can use third-party software for replication, but when the functionality exists out of the box in RDS, why pay extra to any third party? You can't install a database using multiple regions out of the box.

*You have created an instance in EC2, and you want to connect to it. What should you do to log in to the system for the first time?* * Use the username/password combination to log in to the server * Use the key-pair combination (private and public keys) * Use your cell phone to get a text message for secure login * Log in via the root user

*Use the key-pair combination (private and public keys)* The first time you log in to an EC2 instance, you need the combination of the private and public keys. You won't be able to log in using a username and password or as a root user unless you have used the keys. You won't be able to use multifactor authentication until you configure it.

*You are running your application on a bunch of on-demand servers. On weekends you have to kick off a large batch job, and you are planning to add capacity. The batch job you are going to run over the weekend can be restarted if it fails. What is the best way to secure additional compute resources?* * Use the spot instance to add compute for the weekend * Use the on-demand instance to add compute for the weekend * Use the on-demand instance plus PIOPS storage for the weekend resource * Use the on-demand instance plus a general-purpose EBS volume for the weekend resource

*Use the spot instance to add compute for the weekend* Since you know the workload can be restarted from where it fails, the spot instance is going to provide you with the additional compute and pricing benefit as well. You can go with on-demand as well; the only thing is you have to pay a little bit more for on-demand than for the spot instance. You can choose a PIOPS or GP2 with the on-demand instance. If you choose PIOPS, you have to pay much more compared to all the other options.

*Your application provides data transformation services. Files containing data to be transformed are first uploaded to Amazon S3 and then transformed by a fleet of Spot EC2 Instances. Files submitted by your premium customers must be transformed with the highest priority. How would you implement such a system?* * Use a DynamoDB table with an attribute defining the priority level. Transform instances will scan the table for tasks, sorting the results by priority level. * Use Route 53 latency-based routing to send high priority tasks to the closest transformation instances. * Use two SQS queues, one for high priority messages, the other for default priority. Transformation instances first poll the high priority queue; if there is no message, they poll the default priority queue. * Use a single SQS queue. Each message contains the priority level. Transformation instances poll high-priority messages first.

*Use two SQS queues, one for high priority messages, the other for default priority. Transformation instances first poll the high priority queue; if there is no message, they poll the default priority queue.* The best way is to use 2 SQS queues. Each queue can be polled separately, with the high priority queue polled first.
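
A minimal sketch of the two-queue polling loop the answer describes; the queue URLs are hypothetical placeholders, and the actual transformation step is elided.

```python
import boto3

sqs = boto3.client("sqs")

# Hypothetical queue URLs for the two priority levels.
HIGH_Q = "https://sqs.us-east-1.amazonaws.com/123456789012/transform-high"
DEFAULT_Q = "https://sqs.us-east-1.amazonaws.com/123456789012/transform-default"

def next_task():
    """Poll the high-priority queue first; fall back to the default queue."""
    for queue_url in (HIGH_Q, DEFAULT_Q):
        resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
        for msg in resp.get("Messages", []):
            # ... transform the file referenced by the message here ...
            sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
            return msg
    return None  # both queues were empty
```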

*You are using Amazon RDS as a relational database for your web application in AWS. All your data stored in Amazon RDS is encrypted using AWS KMS. Encrypting this data is handled by a separate team of 4 users (Users A, B, C, & D) in the Security Team. They have created 2 CMKs for encryption of data. During the annual audit, there were concerns raised by the auditors about the access each user has to these CMKs. The Security Team has the following IAM policies & key policies set for AWS KMS.* * -CMK1 is created by the AWS KMS API & has a default key policy.* * -CMK2 is created via the AWS Management Console with a default key policy that allows User D.* * -User C has an IAM policy denying all actions for CMK1 while allowing them for CMK2.* * -Users A & B have an IAM policy allowing access to CMK1 while denying access to CMK2.* * -User D has an IAM policy allowing full access to AWS KMS.* *Which of the following is the correct statement for the access each user has to the AWS KMS CMKs?* * Users A & B can use only CMK1, user C cannot use CMK1, while user D can use both CMK1 & CMK2. * Users A & B can use CMK1 & CMK2, user C can use only CMK2, while user D can use both CMK1 & CMK2. * Users A & B can use CMK1, user C can use CMK1 & CMK2, while user D can use both CMK1 & CMK2. * Users A & B can use only CMK1, user C can use only CMK2, while user D cannot use both CMK1 & CMK2

*Users A & B can use only CMK1, user C cannot use CMK1, while user D can use both CMK1 & CMK2.* Access to an AWS KMS CMK is a combination of both the key policy & the IAM policy. The IAM policy must grant the user access to AWS KMS, while the key policy controls access to the CMK itself. ----------------------------------- Option B is incorrect as the CMK2 key policy does not grant access to User C. Also, Users A & B do not have an IAM policy allowing access to CMK2. Option C is incorrect as the CMK2 key policy does not grant access to User C. Also, User C does not have an IAM policy allowing access to CMK1. Option D is incorrect as User D has both an IAM policy & key policies allowing use of CMK1 & CMK2.

*For implementing security features, which of the following would you choose?* (Choose 2) * Username/password * MFA * Using multiple S3 buckets * Login using the root user

*Username/password* *MFA* Using multiple buckets won't help in terms of security. Similarly, logging in using the root user is a security risk rather than a security feature.

*Your company runs an automobile reselling company that has a popular online store on AWS. The application sits behind an Auto Scaling group and requires new instances of the Auto Scaling group to identify their public and private IP addresses. You need to inform the development team on how they can achieve this. Which of the following advice would you give to the development team?* * By using Ipconfig for Windows or Ifconfig for Linux. * By using a CloudWatch metric. * Using a Curl or Get command to get the latest meta-data from http://169.254.169.254/latest/meta-data * Using a Curl or Get command to get the latest user-data from http://169.254.169.254/latest/user-data

*Using a Curl or Get command to get the latest meta-data from http://169.254.169.254/latest/meta-data/* ----------------------------------- Option A is partially correct, but it is an overhead when you already have the service running in AWS. Option B is incorrect, because you cannot get the IP address from a CloudWatch metric. Option D is incorrect, because user-data cannot get the IP addresses.
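
A minimal sketch of querying the instance metadata endpoint from Python; these paths only resolve from inside a running EC2 instance, and instances enforcing IMDSv2 would additionally require a session token.

```python
import urllib.request

BASE = "http://169.254.169.254/latest/meta-data"

# These paths only resolve from inside a running EC2 instance.
for path in ("public-ipv4", "local-ipv4"):
    with urllib.request.urlopen(f"{BASE}/{path}") as resp:
        print(path, resp.read().decode())
```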

*Your company is doing business in North America, and all your customers are based in the United States and Canada. You are using us-east as the primary region and the us-west region for disaster recovery. You have a VPC in both regions for hosting all the applications supporting the business. On weekends you are seeing a sudden spike in traffic from China. While going through the log files, you find out that users from China are scanning the open ports on your server. How do you restrict the users from China from connecting to your VPC?* * Using a VPC endpoint * Using CloudTrail * Using security groups * Using a network access control list

*Using a network access control list* You can explicitly deny the traffic from a particular IP address or from a CIDR block via an NACL. ----------------------------------- You can't explicitly deny traffic using security groups. A VPC endpoint is used to communicate privately between a VPC and services such as S3 and DynamoDB. Using CloudTrail, you can find the trail of API activities, but you can't block any traffic.
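
For illustration, a deny rule on a network ACL looks like the sketch below; the NACL ID and CIDR block are hypothetical placeholders, and NACL rules are evaluated in rule-number order, lowest first.

```python
import boto3

ec2 = boto3.client("ec2")

# Deny inbound traffic from a hypothetical offending CIDR block.
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",  # hypothetical NACL ID
    RuleNumber=90,          # lower than the allow rules so it wins
    Protocol="-1",          # all protocols
    RuleAction="deny",
    Egress=False,           # inbound rule
    CidrBlock="198.51.100.0/24",
)
```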

*You need to restore an object from Glacier class in S3. Which of the following will help you do that?* (Choose 2) * Using the AWS S3 Console * Using the S3 REST API * Using the Glacier API * Using the S3 subcommand from AWS CLI

*Using the AWS S3 Console* *Using the S3 REST API* When discussing Glacier, it is important to distinguish between the storage class 'Glacier' used by S3, and the 'S3-Glacier' service. The 1st is managed via the 'S3' console & API, and the 2nd via the 'S3-Glacier' console & API. The Amazon 'S3' service maintains the mapping between your user-defined object name and the Amazon Glacier system-defined identifier. These objects are not accessible via the 'S3-Glacier' service. Objects stored using the 'Glacier' storage class are only accessible through the Amazon 'S3' service console or APIs.

*A customer wants to import their existing virtual machines to the cloud. Which service can they use for this?* * VM Import/Export * AWS Import/Export * AWS Storage Gateway * DB Migration Service

*VM Import/Export* VM Import/Export enables customers to import Virtual Machine (VM) images in order to create Amazon EC2 instances. Customers can also export previously imported EC2 instances to create VMs. Customers can use VM Import/Export to leverage their previous investments in building VMs by migrating their VMs to Amazon EC2. A few strategies used for migrations are: 1. Forklift migration strategy 2. Hybrid migration strategy 3. Creating AMIs *AWS Import/Export* - is a data transport service used to move large amounts of data into and out of the Amazon Web Services public cloud using portable storage devices for transport. *AWS Storage Gateway* - connects an on-premises software appliance with cloud-based storage to provide seamless integration, with data security features, between the on-premises IT environment and the AWS storage infrastructure. The gateway provides access to objects in S3 as files or file share mount points. *DB Migration Service* - can migrate your data to and from most of the widely used commercial and open source databases. It supports homogeneous migrations such as Oracle to Oracle, as well as heterogeneous migrations between different database platforms, such as Oracle to Amazon Aurora.

*There is a requirement for EC2 Instances in a private subnet to access an S3 bucket. It is required that the traffic does not traverse to the Internet. Which of the following can be used to fulfill this requirement?* * VPC Endpoint * NAT Instance * NAT Gateway * Internet Gateway

*VPC Endpoint* A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network.
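
As a sketch, a gateway endpoint for S3 adds routes so the traffic stays on the AWS network; the VPC and route table IDs are hypothetical placeholders, and the service name assumes the us-east-1 region.

```python
import boto3

ec2 = boto3.client("ec2")

# A gateway endpoint for S3 keeps traffic on the AWS network
# (VPC and route table IDs are hypothetical placeholders).
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],  # the private subnet's route table
)
```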

*You have hosted lots of resources in the AWS cloud in a VPC. Your IT security team wants to monitor all the traffic from all the network interfaces in the VPC. Which service should you use to monitor this?* * CloudTrail * CloudWatch * VPC Flow Logs * EC2 instance server logs

*VPC Flow Logs* Flow Logs are used to monitor all the network traffic within a VPC. ------------------------- CloudTrail is used to monitor API actions, and CloudWatch is used to monitor everything in general in the cloud. EC2 server logs won't provide the information you are looking for.

*There is a requirement to get the IP addresses for resources accessed in a private subnet. Which of the following can be used to fulfill this purpose?* * Trusted Advisor * VPC Flow Logs * Use CloudWatch metrics * Use CloudTrail

*VPC Flow Logs* VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. Flow log data is stored using Amazon CloudWatch Logs. After you've created a flow log, you can view and retrieve its data in Amazon CloudWatch Logs. ----------------------------------- AWS Trusted Advisor is your customized cloud expert! It helps you observe best practices for the use of AWS by inspecting your AWS environment with an eye toward saving money, improving system performance and reliability, and closing security gaps. AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudWatch metrics are mainly used for performance monitoring.
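
A minimal sketch of enabling flow logs on a VPC with delivery to CloudWatch Logs; the VPC ID, log group name, and IAM role ARN are hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Capture all traffic for every network interface in the VPC and deliver
# it to CloudWatch Logs (IDs and the role ARN are hypothetical placeholders).
ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],
    ResourceType="VPC",
    TrafficType="ALL",  # ACCEPT, REJECT, or ALL
    LogGroupName="vpc-flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/flow-logs-role",
)
```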

*A company has a set of resources hosted in a VPC on the AWS Cloud. The IT Security department has now mandated that all IP traffic from all network interfaces in the VPC be monitored. Which of the following would help satisfy this requirement?* * Trusted Advisor * VPC Flow Logs * Use CloudWatch metrics * Use CloudTrail

*VPC Flow Logs* VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. Flow log data is stored using Amazon CloudWatch Logs. After you've created a flow log, you can view and retrieve its data in Amazon CloudWatch Logs.

*Which of the following are a part of AWS' Network and Content Delivery services?* (Choose 2) * VPC * RDS * EC2 * CloudFront

*VPC* *CloudFront* VPC allows you to provision a logically isolated section of AWS where you can launch AWS resources in a virtual network. CloudFront is a fast, highly secure, and programmable content delivery network (CDN). ------------------------- EC2 provides compute resources, while RDS is Amazon's Relational Database Service.

*A company has a set of VPC's defined in AWS. They need to connect this to their on-premise network. They need to ensure that all data is encrypted in transit. Which of the following would you use to connect the VPC's to the on-premise networks?* * VPC Peering * VPN connections * AWS Direct Connect * Placement Groups

*VPN connections* By default, instances that you launch into an Amazon VPC can't communicate with your own (remote) network. You can enable access to your remote network from your VPC by attaching a virtual private gateway to the VPC, creating a custom route table, updating your security group rules, and creating an AWS managed VPN connection. ----------------------------------- Option A is incorrect because this is used to connect multiple VPC's together. Option C is incorrect because this does not encrypt traffic in transit between AWS VPC's and the on-premises network. Option D is incorrect because this is used for low latency access between EC2 instances.

*You want to connect your on-premise data center to AWS using a VPN. What things do you need to initiate a VPN connection?* (Choose two.) * AWS Direct Connect * Virtual private gateway * Customer gateway * Internet gateway

*Virtual private gateway* *Customer gateway* A virtual private gateway is the VPN concentrator on the Amazon side of the VPN connection, whereas a customer gateway is the VPN concentrator at your on-premise data center. ------------------------- The question is asking for a solution for VPN. Direct Connect is not a VPN connection; rather, it is dedicated connectivity. An Internet gateway is used to provide Internet access.

*To enable your Lambda function to access resources inside your private VPC, you must provide additional VPC-specific configuration information. Select all correct statements about that.* (Choose 2) * If your Lambda function needs to access both VPC resources and the public Internet, the VPC needs to have a NAT instance inside your VPC, you can use the Amazon VPC NAT gateway, or you can use an Internet gateway attached to your VPC. * When you add VPC configuration to a Lambda function, it can only access resources in that VPC. However, you can specify multiple VPCs using the VpcConfig parameter. Simply comma separate the VPC subnet and security group IDs. * AWS Lambda uses the provided VPC-specific configuration information to set up elastic network interfaces. Therefore, your Lambda function execution role must have permissions to create, describe and delete these. * AWS Lambda also supports connecting to resources within Dedicated Tenancy VPCs.

*When you add VPC configuration to a Lambda function, it can only access resources in that VPC. However, you can specify multiple VPCs using the VpcConfig parameter. Simply comma-separate the VPC subnet and security group IDs.* *AWS Lambda uses the provided VPC-specific configuration information to set up elastic network interfaces. Therefore, your Lambda function execution role must have permissions to create, describe and delete these.* ----------------------------------- AWS Lambda does not support connecting to resources within Dedicated Tenancy VPCs. If your Lambda function requires Internet access, you cannot use an Internet gateway attached to your VPC, since that would require the ENI to have public IP addresses.
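
As a sketch of where this configuration is supplied, the VpcConfig parameter can be passed when creating the function with boto3. The function name, role ARN, subnet IDs, and security group IDs below are placeholder values.

```python
import boto3

lam = boto3.client("lambda")

# Attaching a function to a VPC: Lambda creates ENIs in these subnets,
# so the execution role needs ec2:CreateNetworkInterface,
# ec2:DescribeNetworkInterfaces, and ec2:DeleteNetworkInterface.
lam.create_function(
    FunctionName="vpc-aware-function",  # placeholder name
    Runtime="python3.12",
    Role="arn:aws:iam::123456789012:role/lambda-vpc-role",
    Handler="app.handler",
    Code={"ZipFile": open("app.zip", "rb").read()},
    VpcConfig={
        "SubnetIds": ["subnet-0aaa1111", "subnet-0bbb2222"],
        "SecurityGroupIds": ["sg-0ccc3333"],
    },
)
```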

*You are architecting an internal-only application. How can you make sure the ELB does not have any Internet access?* * You detach the Internet gateway from the ELB. * You create the instances in the private subnet and hook up the ELB with that. * The VPC should not have any Internet gateway attached. * When you create the ELB from the console, you can define whether it is internal or external.

*When you create the ELB from the console, you can define whether it is internal or external.* You can't attach or detach an Internet gateway to an ELB, and even if you create the instances in a private subnet, an external-facing ELB will still have Internet connectivity. The same applies to the VPC: even if you remove the Internet gateway from the VPC, an ELB created as external facing is still provisioned as an Internet-facing load balancer.
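
For reference, the same choice is exposed in the API as the load balancer's scheme. A minimal boto3 sketch for an internal Application Load Balancer, with placeholder name and subnet IDs:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Scheme="internal" gives the load balancer only private IP addresses,
# so it is reachable solely from within the VPC (or connected networks).
lb = elbv2.create_load_balancer(
    Name="internal-only-alb",                        # placeholder name
    Subnets=["subnet-0aaa1111", "subnet-0bbb2222"],  # private subnets
    Scheme="internal",
    Type="application",
)["LoadBalancers"][0]
print(lb["DNSName"])
```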

*When you create a new user, that user ________.* * Will only be able to log in to the console in the region in which that user was created. * Will be able to interact with AWS using their access key ID and secret access key using the API, CLI, or the AWS SDKs. * Will be able to log in to the console only after multi-factor authentication is enabled on their account. * Will be able to log in to the console anywhere in the world, using their access key ID and secret access key.

*Will be able to interact with AWS using their access key ID and secret access key using the API, CLI, or the AWS SDKs.* ----------------------------------- To access the console, you use an account and password combination. To access AWS programmatically, you use an access key and secret access key combination.

*What is the underlying Hypervisor for EC2?* (Choose 2) * Xen * Nitro * Hyper-V * ESX * OVM

*Xen* *Nitro* Until recently, AWS exclusively used the Xen hypervisor; it has since started making use of the Nitro hypervisor.

*Can a placement group be deployed across multiple Availability Zones?* * Yes. * Only in us-east-1. * No. * Yes, but only using the AWS API.

*Yes.* Technically, these are called spread or partition placement groups. You can now have placement groups across different hardware and multiple AZs.

*If an Amazon EBS volume is an additional partition (not the root volume), can I detach it without stopping the instance?* * No, you will need to stop the instance. * Yes, although it may take some time.

*Yes, although it may take some time.*

*Is it possible to perform actions on an existing Amazon EBS Snapshot?* * It depends on the region. * EBS does not have snapshot functionality. * Yes, through the AWS APIs, CLI, and AWS Console. * No.

*Yes, through the AWS APIs, CLI, and AWS Console.*

*You are hosting your critical e-commerce web site on a fleet of EC2 servers along with Auto Scaling. During the weekends you see a spike in the system, and therefore in your Auto Scaling group you have specified a minimum of 10 EC2 servers and a maximum of 40 EC2 servers. During the weekend you noticed some slowness in performance, and when you did some research, you found out that only 20 EC2 servers had been started. Auto Scaling is not able to scale beyond 20 EC2 instances. How do you fix the performance problem?* * Use 40 reserve instances. * Use 40 spot instances. * Use EC2 instances in a different region. * You are beyond the EC2 service limit. You need to increase the service limit.

*You are beyond the EC2 service limit. You need to increase the service limit.* By default, an account is limited to 20 EC2 instances per region. When this limit is reached, you can't provision any new EC2 instances; you can log a support ticket to raise the limit. ------------------------- Reserved Instances are not recommended when you don't know how much capacity is needed. Spot Instances are not a great use case, since you are running a critical e-commerce web site. If you use EC2 instances in a different region, you will have a performance issue.
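
One way (among others) to read the account's current per-region instance limit programmatically is the EC2 account attributes API; a minimal boto3 sketch:

```python
import boto3

ec2 = boto3.client("ec2")

# "max-instances" reports the account's On-Demand instance limit
# for the current region.
attrs = ec2.describe_account_attributes(AttributeNames=["max-instances"])
for attr in attrs["AccountAttributes"]:
    print(attr["AttributeName"], attr["AttributeValues"][0]["AttributeValue"])
```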

*Your company has a set of AWS RDS Instances. Your management has asked you to disable automated backups to save on costs. When you disable automated backups for AWS RDS, what are you compromising on?* * Nothing, you are actually saving resources on AWS * You are disabling the point-in-time recovery * Nothing really, you can still take manual backups * You cannot disable automated backups in RDS

*You are disabling the point-in-time recovery.* Amazon RDS creates a storage volume snapshot of your DB instance, backing up the entire DB instance and not just individual databases. You can set the backup retention period when you create a DB instance. If you don't set the backup retention period, Amazon RDS uses a default retention period of one day. You can modify the backup retention period; valid values are 0 (for no backup retention) to a maximum of 35 days.
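
For context, the retention period is just a property of the DB instance. A boto3 sketch (the instance identifier is a placeholder) that disables automated backups, and thereby point-in-time recovery:

```python
import boto3

rds = boto3.client("rds")

# Setting BackupRetentionPeriod to 0 disables automated backups,
# which also disables point-in-time recovery for the instance.
rds.modify_db_instance(
    DBInstanceIdentifier="mydb-instance",  # placeholder identifier
    BackupRetentionPeriod=0,
    ApplyImmediately=True,
)
```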

*You are doing an audit for a company, and during the audit you find that the company has kept lots of log files in a public bucket in Amazon S3. You try to delete them, but you are unable to do so. What could be the reason for this?* * The log files in the buckets are encrypted * The versioning is enabled in the bucket * You are not the owner of the bucket; that's why you can't delete them * Only an employee of the company can delete the objects

*You are not the owner of the bucket; that's why you can't delete them* You can't delete objects from a bucket that you don't own. ------------------------- It doesn't matter whether objects in the bucket are encrypted or versioning is enabled; the files can be deleted by the owner of the bucket. Since you are not the owner of the bucket, you can't delete the files. If a company has 100,000 employees, do you want all of them to be able to delete the objects? Of course not.

*What is the common use case for a storage gateway?* (Choose two.) * You can integrate the on-premise environment with AWS. * You can use it to move data from on-premise to AWS. * You can create a private AWS cloud on your data center. * It can be used in lieu of AWS Direct Connect.

*You can integrate the on-premise environment with AWS.* *You can use it to move data from on-premise to AWS.* You can use an AWS storage gateway to integrate on-premises environments with those running on AWS and use it to transfer data from on-premises to the AWS Cloud. ------------------------- AWS is a public cloud provider, and AWS services can't be used in a private cloud model; that concept simply does not exist. AWS Direct Connect links your internal network to an AWS Direct Connect location over a standard 1 Gbps or 10 Gbps fiber-optic Ethernet connection. With this connection in place, you can create virtual interfaces directly to the AWS Cloud and Amazon VPC, bypassing Internet service providers in your network path.

*You are a security administrator working for a hotel chain. You have a new member of staff who has started as a systems administrator, and she will need full access to the AWS console. You have created the user account and generated the access key id and the secret access key. You have moved this user into the group where the other administrators are, and you have provided the new user with their secret access key and their access key id. However, when she tries to log in to the AWS console, she cannot. Why might that be?* * Your user is trying to log in from the AWS console from outside the corporate network. This is not possible. * You have not yet activated multi-factor authentication for the user, so by default they will not be able to log in. * You cannot log in to the AWS console using the Access Key ID / Secret Access Key pair. Instead, you must generate a password for the user, and supply the user with this password and your organization's unique AWS console login URL. * You have not applied the "log in from console" policy document to the user. You must apply this first so that they can log in.

*You cannot log in to the AWS console using the Access Key ID / Secret Access Key pair. Instead, you must generate a password for the user, and supply the user with this password and your organization's unique AWS console login URL.*
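
To make the distinction concrete: console access requires a password (an IAM login profile), which is managed separately from access keys. A hedged boto3 sketch, assuming the user already exists; the user name and password are placeholders:

```python
import boto3

iam = boto3.client("iam")

# A login profile gives an existing IAM user a console password.
# Access keys (for the API/CLI/SDKs) are managed separately.
iam.create_login_profile(
    UserName="new-sysadmin",        # placeholder user
    Password="TemporaryP@ssw0rd!",  # placeholder; rotate immediately
    PasswordResetRequired=True,     # force a change at first login
)
```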

*You are running an application on an EC2 server, and you are using the instance store to keep all the application data. One day you realize that all the data stored in the instance store is gone. What could be the reason for this?* (Choose two.) * You have rebooted the instance. * You have stopped the instance. * You have terminated the instance. * You have configured an ELB with the EC2 server.

*You have stopped the instance.* *You have terminated the instance.* When you store something in the instance store, the data persists only for the lifetime of the instance. When you stop the instance, the instance store data is gone; similarly, when you terminate the instance, it is gone, and you lose all your data. ------------------------- Only when you reboot or restart the instance does the data in the instance store survive. As a best practice, you should never store important data in the instance store; always use an EBS volume to store important data. Adding or removing an ELB with the instance does not delete the data.

*To save costs, you shut down some of your EC2 servers over the weekend that were used to deploy your application. One Monday when you try the application, you realize that you are not able to start the application since all the data that was stored in the application is gone. What could be the reason for this?* * You forgot to connect to the EBS volume after restarting the EC2 servers on Monday. * There must be a failure in the EBS volume for all the EC2 servers. * You must have used the instance store for storing all the data. * Someone must have edited your /etc/fstab parameter file.

*You must have used the instance store for storing all the data.* Instance storage is ephemeral, and all the data in that storage is gone the moment you shut down the server. ----------------------------------- Once an EBS volume is attached to an EC2 instance and configured, there is no need to manually reconnect it after every server restart. There can't be an EBS failure across all the EC2 servers, and changing a few parameters in /etc/fstab won't delete your data.

*You want to deploy a PCI-compliant application on AWS. You will be deploying your application on EC2 servers and will be using RDS to host your database. You have read that the AWS services you are going to use are PCI compliant. What steps do you need to take to make the application PCI compliant?* * Nothing. Since AWS is PCI compliant, you don't have to do anything. * Encrypt the database, which will make sure the application is PCI compliant. * Encrypt the database and EBS volume from the EC2 server. * You need to follow all the steps as per the PCI requirements, from the application to the database, to make the application compliant.

*You need to follow all the steps as per the PCI requirements, from the application to the database, to make the application compliant.* AWS follows the shared responsibility model. In this model, AWS is responsible for the security of the cloud, and you are responsible for security in the cloud. Just putting your application on AWS won't make it PCI compliant; you need to do your part as well. ------------------------- You need to follow all the steps in the PCI documentation to make your application PCI compliant; you can't just encrypt the database and application and be done with it.

*You have a production application that is on the largest RDS instance possible, and you are still approaching CPU utilization bottlenecks. You have implemented read replicas, ElastiCache, and even CloudFront and S3 to cache static assets, but you are still bottlenecking. What should your next troubleshooting step be?* * You should provision a secondary RDS instance and then implement an ELB to spread the load between the two RDS instances. * You should consider using RDS Multi-AZ and using the secondary AZ nodes as read only nodes to further offset load. * You have reached the limits of public cloud. You should get a dedicated database server and host this locally within your own data center. * You should implement database partitioning and spread your data across multiple DB Instances.

*You should implement database partitioning and spread your data across multiple DB Instances.* If your application requires more compute resources than the largest DB instance class or more storage than the maximum allocation, you can implement partitioning, thereby spreading your data across multiple DB instances.

*You are a solutions architect working for a large engineering company who are moving from a legacy infrastructure to AWS. You have configured the company's first AWS account and you have set up IAM. Your company is based in Andorra, but there will be a small subsidiary operating out of South Korea, so that office will need its own AWS environment. Which of the following statements is true?* * You will then need to configure Users and Policy Documents for each region respectively. * You will need to configure your users regionally, however your policy documents are global. * You will need to configure your policy documents regionally, however your users are global. * You will need to configure Users and Policy Documents only once, as these are applied globally.

*You will need to configure Users and Policy Documents only once, as these are applied globally.*

*The use of a cluster placement group is ideal _______* * When you need to distribute content on a CDN network. * Your fleet of EC2 instances requires high network throughput and low latency within a single availability zone. * When you need to deploy EC2 instances that require high disk IO. * Your fleet of EC2 Instances requires low latency and high network throughput across multiple availability zones.

*Your fleet of EC2 instances requires high network throughput and low latency within a single availability zone.* Cluster placement groups are primarily about keeping your compute resources within one network hop of each other on high-speed rack switches. This is only helpful for compute loads whose network traffic is either very high or very sensitive to latency.

*From the command line, which of the following should you run to get the public hostname of an EC2 instance?* * curl http://254.169.254.169/latest/user/data/public-hostname * curl http://169.254.169.254/latest/meta-data/public-hostname * curl http://254.169.254.169/latest/meta-data/public-hostname * curl http://169.254.169.254/latest/user-data/public-hostname

*curl http://169.254.169.254/latest/meta-data/public-hostname* You would use the command: curl http://169.254.169.254/latest/meta-data/public-hostname

*The difference between S3 and EBS is that EBS is object based whereas S3 is block based.* * true * false

*false* It is the other way around: S3 is object-based storage, whereas EBS is block-based storage.

*To retrieve instance metadata or user data you will need to use the following IP Address:* * http://169.254.169.254 * http://10.0.0.1 * http://127.0.0.1 * http://192.168.0.254

*http://169.254.169.254*

*You have been asked by your company to create an S3 bucket with the name "acloudguru1234" in the EU West region. What would the URL for this bucket be?* * https://s3-eu-west-1.amazonaws.com/acloudguru1234 * https://s3.acloudguru1234.amazonaws.com/eu-west-1 * https://s3-us-east-1.amazonaws.com/acloudguru1234 * https://s3-acloudguru1234.amazonaws.com/

*https://s3-eu-west-1.amazonaws.com/acloudguru1234*

*S3 has eventual consistency for which HTTP Methods?* * UPDATES and DELETES * overwrite PUTS and DELETES * PUTS of new Objects and DELETES * PUTS of new objects and UPDATES

*overwrite PUTS and DELETES*

Which DNS record can be used to store human-readable information about a server, network, and other accounting data with a host? 1. A TXT record 2. An MX record 3. An SPF record 4. A PTR record

1 *A.* A TXT record is used to store arbitrary and unformatted text with a host.

Your company works with data that requires frequent audits of your AWS environment to ensure compliance with internal policies and best practices. In order to perform these audits, you need access to historical configurations of your resources to evaluate relevant configuration changes. Which service will provide the necessary information for your audits? 1. AWS Config 2. AWS Key Management Service (AWS KMS) 3. AWS CloudTrail 4. AWS OpsWorks

1 *A.* AWS Config is a fully managed service that provides you with an AWS resource inventory, configuration history, and configuration change notifications to enable security and governance. With AWS Config, you can discover existing and deleted AWS resources, determine your overall compliance against rules, and dive into configuration details of a resource at any point in time. These capabilities enable compliance auditing.

When you create a new Amazon Simple Notification Service (Amazon SNS) topic, which of the following is created automatically? 1. An Amazon Resource Name (ARN) 2. A subscriber 3. An Amazon Simple Queue Service (Amazon SQS) queue to deliver your Amazon SNS topic 4. A message

1 *A.* When you create a new Amazon SNS topic, an Amazon ARN is created automatically.

Which security scheme is used by the AWS Multi-Factor Authentication (AWS MFA) token? 1. Time-Based One-Time Password (TOTP) 2. Perfect Forward Secrecy (PFC) 3. Ephemeral Diffie Hellman (EDH) 4. Split-Key Encryption (SKE)

1 *A.* A virtual MFA device uses a software application that generates six-digit authentication codes that are compatible with the TOTP standard, as described in RFC 6238.

Which AWS service records Application Program Interface (API) calls made on your account and delivers log files to your Amazon Simple Storage Service (Amazon S3) bucket? 1. AWS CloudTrail 2. Amazon CloudWatch 3. Amazon Kinesis 4. AWS Data Pipeline

1 *A.* AWS CloudTrail records important information about each API call, including the name of the API, the identity of the caller, the time of the API call, the request parameters, and the response elements returned by the AWS Cloud service.

AWS provides IT control information to customers in which of the following ways? 1. By using specific control definitions or through general control standard compliance 2. By using specific control definitions or through SAS 70 3. By using general control standard compliance and by complying with ISO 27001 4. By complying with ISO 27001 and SOC 1 Type II

1 *A.* AWS provides IT control information to customers through either specific control definitions or general control standard compliance.

Which Amazon Relational Database Service (Amazon RDS) database engines support Multi-AZ? 1. All of them 2. Microsoft SQL Server, MySQL, and Oracle 3. Oracle, Amazon Aurora, and PostgreSQL 4. MySQL

1 *A.* All Amazon RDS database engines support Multi-AZ deployment.

You are changing your application to move session state information off the individual Amazon Elastic Compute Cloud (Amazon EC2) instances to take advantage of the elasticity and cost benefits provided by Auto Scaling. Which of the following AWS Cloud services is best suited as an alternative for storing session state information? 1. Amazon DynamoDB 2. Amazon Redshift 3. Amazon Storage Gateway 4. Amazon Kinesis

1 *A.* Amazon DynamoDB is a NoSQL database store that is a great choice as an alternative due to its scalability, high-availability, and durability characteristics. Many platforms provide open-source, drop-in replacement libraries that allow you to store native sessions in Amazon DynamoDB. Amazon DynamoDB is a great candidate for a session storage solution in a share-nothing, distributed architecture.

Your company runs an Amazon Elastic Compute Cloud (Amazon EC2) instance periodically to perform a batch processing job on a large and growing filesystem. At the end of the batch job, you shut down the Amazon EC2 instance to save money but need to persist the filesystem on the Amazon EC2 instance from the previous batch runs. What AWS Cloud service can you leverage to meet these requirements? 1. Amazon Elastic Block Store (Amazon EBS) 2. Amazon DynamoDB 3. Amazon Glacier 4. AWS CloudFormation

1 *A.* Amazon EBS provides persistent block-level storage volumes for use with Amazon EC2 instances on the AWS Cloud. Amazon DynamoDB, Amazon Glacier, and AWS CloudFormation do not provide persistent block-level storage for Amazon EC2 instances. Amazon DynamoDB provides managed NoSQL databases. Amazon Glacier provides low-cost archival storage. AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources.

You are working for a small organization without a dedicated database administrator on staff. You need to install Microsoft SQL Server Enterprise edition quickly to support an accounting back office application on Amazon Relational Database Service (Amazon RDS). What should you do? 1. Launch an Amazon RDS DB Instance, and select Microsoft SQL Server Enterprise Edition under the Bring Your Own License (BYOL) model. 2. Provision SQL Server Enterprise Edition using the License Included option from the Amazon RDS Console. 3. SQL Server Enterprise edition is only available via the Command Line Interface (CLI). Install the command-line tools on your laptop, and then provision your new Amazon RDS Instance using the CLI. 4. You cannot use SQL Server Enterprise edition on Amazon RDS. You should install this on to a dedicated Amazon Elastic Compute Cloud (Amazon EC2) Instance.

1 *A.* Amazon RDS supports Microsoft SQL Server Enterprise edition, and the license is available only under the BYOL model.

Which AWS Cloud service is best suited for Online Analytics Processing (OLAP)? 1. Amazon Redshift 2. Amazon Relational Database Service (Amazon RDS) 3. Amazon Glacier 4. Amazon DynamoDB

1 *A.* Amazon Redshift is best suited for traditional OLAP transactions. While Amazon RDS can also be used for OLAP, Amazon Redshift is purpose-built as an OLAP data warehouse.

Which encryption algorithm is used by Amazon Simple Storage Service (Amazon S3) to encrypt data at rest with Service-Side Encryption (SSE)? 1. Advanced Encryption Standard (AES)-256 2. RSA 1024 3. RSA 2048 4. AES-128

1 *A.* Amazon S3 SSE uses one of the strongest block ciphers available, 256-bit AES.

A week before Cyber Monday last year, your corporate data center experienced a failed air conditioning unit that caused flooding into the server racks. The resulting outage cost your company significant revenue. Your CIO mandated a move to the cloud, but he is still concerned about catastrophic failures in a data center. What can you do to alleviate his concerns? 1. Distribute the architecture across multiple Availability Zones. 2. Use an Amazon Virtual Private Cloud (Amazon VPC) with subnets. 3. Launch the compute for the processing services in a placement group. 4. Purchase Reserved Instances for the processing services instances.

1 *A.* An Availability Zone consists of one or more physical data centers. Availability Zones within a region provide inexpensive, low-latency network connectivity to other zones in the same region. This allows you to distribute your application across data centers. In the event of a catastrophic failure in a data center, the application will continue to handle requests.

You want to host multiple Hypertext Transfer Protocol Secure (HTTPS) websites on a fleet of Amazon EC2 instances behind an Elastic Load Balancing load balancer with a single X.509 certificate. How must you configure the Secure Sockets Layer (SSL) certificate so that clients connecting to the load balancer are not presented with a warning when they connect? 1. Create one SSL certificate with a Subject Alternative Name (SAN) value for each website name. 2. Create one SSL certificate with the Server Name Indication (SNI) value checked. 3. Create multiple SSL certificates with a SAN value for each website name. 4. Create SSL certificates for each Availability Zone with a SAN value for each website name.

1 *A.* An SSL certificate must specify the name of the website either in the subject name or as a value in the SAN extension of the certificate in order for connecting clients not to receive a warning.

Your website is hosted on a fleet of web servers that are load balanced across multiple Availability Zones using an Elastic Load Balancer (ELB). What type of record set in Amazon Route 53 can be used to point myawesomeapp.com to your website? 1. Type A Alias resource record set 2. MX record set 3. TXT record set 4. CNAME record set

1 *A.* An alias resource record set can point to an ELB. You cannot create a CNAME record at the top node of a Domain Name Service (DNS) namespace, also known as the zone apex, as is the case in this example. Alias resource record sets can save you time because Amazon Route 53 automatically recognizes changes in the resource record sets to which the alias resource record set refers.
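
As a sketch, an alias A record pointing the zone apex at an ELB can be created with boto3. The hosted zone IDs, domain, and ELB DNS name below are placeholders; note that each ELB has its own canonical hosted zone ID, distinct from your zone's ID.

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",  # placeholder: your hosted zone
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "myawesomeapp.com.",  # zone apex
                "Type": "A",
                "AliasTarget": {
                    # Placeholder: the ELB's canonical hosted zone ID.
                    "HostedZoneId": "Z2ELBEXAMPLE",
                    "DNSName": "my-elb-1234567890.us-east-1.elb.amazonaws.com.",
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)
```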

Each AWS region is composed of two or more locations that offer organizations the ability to operate production systems that are more highly available, fault tolerant, and scalable than would be possible using a single data center. What are these locations called? 1. Availability zones 2. Replication areas 3. Geographic districts 4. Compute centers

1 *A.* An availability zone is a distinct location within a region that is insulated from failures in other availability zones and provides inexpensive, low-latency network connectivity to other availability zones in the same region. Replication areas, geographic districts, and compute centers are not terms used to describe AWS data center locations.

Which feature of AWS is designed to permit calls to the platform from an Amazon Elastic Compute Cloud (Amazon EC2) instance without needing access keys placed on the instance? 1. AWS Identity and Access Management (IAM) instance profile 2. IAM groups 3. IAM roles 4. Amazon EC2 key pairs

1 *A.* An instance profile is a container for an IAM role that you can use to pass role information to an Amazon EC2 instance when the instance starts.

Your company experiences fluctuations in traffic patterns to their e-commerce website based on flash sales. What service can help your company dynamically match the required compute capacity to the spike in traffic during flash sales? 1. Auto Scaling 2. Amazon Glacier 3. Amazon Simple Notification Service (Amazon SNS) 4. Amazon Virtual Private Cloud (Amazon VPC)

1 *A.* Auto Scaling helps maintain application availability and allows organizations to scale Amazon Elastic Compute Cloud (Amazon EC2) capacity up or down automatically according to conditions defined for the particular workload. Not only can it be used to help ensure that the desired number of Amazon EC2 instances are running, but it also allows resources to scale in and out to match the demands of dynamic workloads. Amazon Glacier, Amazon SNS, and Amazon VPC do not provide services to scale compute capacity automatically.

Which of the following statements is true? 1. IT governance is still the customer's responsibility, despite deploying their IT estate onto the AWS platform. 2. The AWS platform is PCI DSS-compliant to Level 1. Customers can deploy their web applications to this platform, and they will be PCI DSS-compliant automatically. 3. The shared responsibility model applies to IT security only; it does not relate to governance. 4. AWS doesn't take risk management very seriously, and it's up to the customer to mitigate risks to the AWS infrastructure.

1 *A.* IT governance is still the customer's responsibility.

Your company provides media content via the Internet to customers through a paid subscription model. You leverage Amazon CloudFront to distribute content to your customers with low latency. What approach can you use to serve this private content securely to your paid subscribers? 1. Provide signed Amazon CloudFront URLs to authenticated users to access the paid content. 2. Use HTTPS requests to ensure that your objects are encrypted when Amazon CloudFront serves them to viewers. 3. Configure Amazon CloudFront to compress the media files automatically for paid subscribers. 4. Use the Amazon CloudFront geo restriction feature to restrict access to all of the paid subscription media at the country level.

1 *A.* Many companies that distribute content via the Internet want to restrict access to documents, business data, media streams, or content that is intended for selected users, such as users who have paid a fee. To serve this private content securely using Amazon CloudFront, you can require that users access your private content by using special Amazon CloudFront-signed URLs or signed cookies.

Of the following options, what is an efficient way to fanout a single Amazon Simple Notification Service (Amazon SNS) message to multiple Amazon Simple Queue Service (Amazon SQS) queues? 1. Create an Amazon SNS topic using Amazon SNS. Then create and subscribe multiple Amazon SQS queues sent to the Amazon SNS topic. 2. Create one Amazon SQS queue that subscribes to multiple Amazon SNS topics. 3. Amazon SNS allows exactly one subscriber to each topic, so fanout is not possible. 4. Create an Amazon SNS topic using Amazon SNS. Create an application that subscribes to that topic and duplicates the message. Send copies to multiple Amazon SQS queues.

1 *A.* Multiple queues can subscribe to an Amazon SNS topic, which can enable parallel asynchronous processing.
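
A minimal fanout sketch in boto3 (topic and queue names are placeholders; in practice each queue also needs an SQS access policy allowing the topic to send to it):

```python
import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

topic_arn = sns.create_topic(Name="orders")["TopicArn"]  # placeholder name

# Subscribe several queues to the same topic; each receives a copy
# of every published message (the fanout pattern).
for name in ("billing-queue", "shipping-queue", "analytics-queue"):
    queue_url = sqs.create_queue(QueueName=name)["QueueUrl"]
    queue_arn = sqs.get_queue_attributes(
        QueueUrl=queue_url, AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]
    sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)

sns.publish(TopicArn=topic_arn, Message="order #42 created")
```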

Which of the following is an optional security control that can be applied at the subnet layer of a VPC? 1. Network ACL 2. Security Group 3. Firewall 4. Web application firewall

1 *A.* Network ACLs are associated with a VPC subnet to control traffic flow.

Using the correctly decrypted Administrator password and RDP, you cannot log in to a Windows instance you just launched. Which of the following is a possible reason? 1. There is no security group rule that allows RDP access over port 3389 from your IP address. 2. The instance is a Reserved Instance. 3. The instance is not using enhanced networking. 4. The instance is not an Amazon EBS-optimized instance.

1 *A.* None of the other options will have any effect on the ability to connect.

Which of the following are the minimum required elements to create an Auto Scaling launch configuration? 1. Launch configuration name, Amazon Machine Image (AMI), and instance type 2. Launch configuration name, AMI, instance type, and key pair 3. Launch configuration name, AMI, instance type, key pair, and security group 4. Launch configuration name, AMI, instance type, key pair, security group, and block device mapping

1 *A.* Only the launch configuration name, AMI, and instance type are needed to create an Auto Scaling launch configuration. Identifying a key pair, security group, and a block device mapping are optional elements for an Auto Scaling launch configuration.
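
Illustratively, those three required elements map directly onto the API call; the AMI ID and name below are placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Name, AMI, and instance type are the only required elements;
# key pair, security groups, and block device mappings are optional.
autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-lc-v1",  # placeholder name
    ImageId="ami-0abc1234",               # placeholder AMI
    InstanceType="t3.micro",
)
```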

How many nodes can you add to an Amazon ElastiCache cluster running Redis? 1. 1 2. 5 3. 20 4. 100

1 *A.* Redis clusters can only contain a single node; however, you can group multiple clusters together into a replication group.

Which protocol is used by DNS when response data size exceeds 512 bytes? 1. Transmission Control Protocol (TCP) 2. Hyper Text Transfer Protocol (HTTP) 3. File Transfer Protocol (FTP) 4. User Datagram Protocol (UDP)

1 *A.* The TCP protocol is used by a DNS server when the response data size exceeds 512 bytes or for tasks such as zone transfers.

What is the default limit for the number of Amazon VPCs that a customer may have in a region? 1. 5 2. 6 3. 7 4. There is no default maximum number of VPCs within a region.

1 *A.* The default limit for the number of Amazon VPCs that a customer may have in a region is 5.

What is the default time for an Amazon Simple Queue Service (Amazon SQS) visibility timeout? 1. 30 seconds 2. 60 seconds 3. 1 hour 4. 12 hours

1 *A.* The default time for an Amazon SQS visibility timeout is 30 seconds.

What is the maximum size IP address range that you can have in an Amazon VPC? 1. /16 2. /24 3. /28 4. /30

1 *A.* The maximum size IP address range that you can have in an Amazon VPC is /16.

You are creating an Amazon DynamoDB table that will contain messages for a social chat application. This table will have the following attributes: Username (String), Timestamp (Number), Message (String). Which attribute should you use as the partition key? The sort key? 1. Username, Timestamp 2. Username, Message 3. Timestamp, Message 4. Message, Timestamp

1 *A.* Using the Username as a partition key will evenly spread your users across the partitions. Messages are often filtered down by time range, so Timestamp makes sense as a sort key.
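
A sketch of that key schema in boto3 (the table name and billing mode are illustrative choices; only key attributes need to be declared):

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Username as partition (HASH) key, Timestamp as sort (RANGE) key.
dynamodb.create_table(
    TableName="ChatMessages",  # placeholder name
    AttributeDefinitions=[
        {"AttributeName": "Username", "AttributeType": "S"},
        {"AttributeName": "Timestamp", "AttributeType": "N"},
    ],
    KeySchema=[
        {"AttributeName": "Username", "KeyType": "HASH"},
        {"AttributeName": "Timestamp", "KeyType": "RANGE"},
    ],
    BillingMode="PAY_PER_REQUEST",
)
```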

You have built a large web application that uses Amazon ElastiCache using Memcached to store frequent query results. You plan to expand both the web fleet and the cache fleet multiple times over the next year to accommodate increased user traffic. How do you minimize the amount of changes required when a scaling event occurs? 1. Configure AutoDiscovery on the client side 2. Configure AutoDiscovery on the server side 3. Update the configuration file each time a new cluster node is added 4. Use an Elastic Load Balancer to proxy the requests

1 *A.* When the clients are configured to use AutoDiscovery, they can discover new cache nodes as they are added or removed. AutoDiscovery must be configured on each client and is not active server side. Updating the configuration file each time will be very difficult to manage. Using an Elastic Load Balancer is not recommended for this scenario.

To help prevent data loss due to the failure of any single hardware component, Amazon Elastic Block Storage (Amazon EBS) automatically replicates EBS volume data to which of the following? 1. Amazon EBS replicates EBS volume data within the same Availability Zone in a region. 2. Amazon EBS replicates EBS volume data across other Availability Zones within the same region. 3. Amazon EBS replicates EBS volume data across Availability Zones in the same region and in Availability Zones in one other region. 4. Amazon EBS replicates EBS volume data across Availability Zones in the same region and in Availability Zones in every other region.

1 *A.* When you create an Amazon EBS volume in an Availability Zone, it is automatically replicated within that Availability Zone to prevent data loss due to failure of any single hardware component. An EBS Snapshot creates a copy of an EBS volume to Amazon S3 so that copies of the volume can reside in different Availability Zones within a region.

What happens when you create a new Amazon VPC? 1. A main route table is created by default. 2. Three subnets are created by default, one for each Availability Zone. 3. Three subnets are created by default in one Availability Zone. 4. An IGW is created by default.

1 *A.* When you create an Amazon VPC, a route table is created by default. You must manually create subnets and an IGW.

A firm is moving its testing platform to AWS to provide developers with instant access to clean test and development environments. The primary requirement for the firm is to make environments easily reproducible and fungible. What service will help the firm meet their requirements? 1. AWS CloudFormation 2. AWS Config 3. Amazon Redshift 4. AWS Trusted Advisor

1 *A.* With AWS CloudFormation, you can reuse your template to set up your resources consistently and repeatedly. Just describe your resources once and then provision the same resources over and over in multiple stacks.

Your team is building an order processing system that will span multiple Availability Zones. During testing, the team wanted to test how the application will react to a database failover. How can you enable this type of test? 1. Force a Multi-AZ failover from one Availability Zone to another by rebooting the primary instance using the Amazon RDS console. 2. Terminate the DB instance, and create a new one. Update the connection string. 3. Create a support case asking for a failover. 4. It is not possible to test a failover.

1 *A.* You can force a failover from one Availability Zone to another by rebooting the primary instance in the AWS Management Console. This is often how people test a failover in the real world. There is no need to create a support case.

How many IGWs can you attach to an Amazon VPC at any one time? 1. 1 2. 2 3. 3 4. 4

1 *A.* You may only have one IGW for each Amazon VPC.

Which cache engines does Amazon ElastiCache support? (Choose 2 answers) 1. Memcached 2. Redis 3. Membase 4. Couchbase

1,2 *A,B.* Amazon ElastiCache supports both Memcached and Redis. You can run self-managed installations of Membase and Couchbase using Amazon Elastic Compute Cloud (Amazon EC2).

You are building a large order processing system and are responsible for securing the database. Which actions will you take to protect the data? (Choose 3 answers) 1. Adjust AWS Identity and Access Management (IAM) permissions for administrators. 2. Configure security groups and network Access Control Lists (ACLs) to limit network access. 3. Configure database users, and grant permissions to database objects. 4. Install anti-virus software on the Amazon RDS DB Instance.

1,2,3 *A,B,C.* Protecting your database requires a multilayered approach that secures the infrastructure, the network, and the database itself. Amazon RDS is a managed service, and direct access to the OS is not available.

AWS communicates with customers regarding its security and control environment through a variety of different mechanisms. Which of the following are valid mechanisms? (Choose three) 1. Obtaining industry certifications and independent third-party attestations 2. Publishing information about security and AWS control practices via the website, whitepapers, and blogs 3. Directly providing customers with certificates, reports, and other documentation (under NDA in some cases) 4. Allowing customers' auditors direct access to AWS data centers, infrastructure, and senior staff

1,2,3 *A,B,C.* Answers A through C describe valid mechanisms that AWS uses to communicate with customers regarding its security and control environment. AWS does not allow customers' auditors direct access to AWS data centers, infrastructure, or staff.

In Amazon Simple Workflow Service (Amazon SWF), which of the following are actors? (Choose 3 answers) 1. Activity workers 2. Workflow starters 3. Deciders 4. Activity tasks

1,2,3 *A,B,C.* In Amazon SWF, actors can be activity workers, workflow starters, or deciders.

Which of the following objects are good candidates to store in a cache? (Choose 3 answers) 1. Session state 2. Shopping cart 3. Product catalog 4. Bank account balance

1,2,3 *A,B,C.* Many types of objects are good candidates to cache because they have the potential to be accessed by numerous users repeatedly. Even the balance of a bank account could be cached for short periods of time if the back-end database query is slow to respond.

Your team manages a popular website running Amazon Relational Database Service (Amazon RDS) MySQL backend. The Marketing department has just informed you about an upcoming television commercial that will drive thousands of new visitors to the website. How can you prepare your database to handle the load? (Choose 3 answers) 1. Vertically scale the DB Instance by selecting a more powerful instance class. 2. Create read replicas to offload read requests and update your application. 3. Upgrade the storage from Magnetic volumes to General Purpose Solid State Drive (SSD) volumes. 4. Upgrade to Amazon Redshift for faster columnar storage.

1,2,3 *A,B,C.* Vertically scaling up is one of the simpler options that can give you additional processing power without making any architectural changes. Read replicas require some application changes but let you scale processing power horizontally. Finally, busy databases are often I/O-bound, so upgrading storage to General Purpose (SSD) or Provisioned IOPS (SSD) volumes can often allow for additional request processing.

Your AWS account administrator left your company today. The administrator had access to the root user and a personal IAM administrator account. With these accounts, he generated other IAM accounts and keys. Which of the following should you do today to protect your AWS infrastructure? (Choose 4 answers) 1. Change the password and add MFA to the root user. 2. Put an IP restriction on the root user. 3. Rotate keys and change passwords for IAM accounts. 4. Delete all IAM accounts. 5. Delete the administrator's personal IAM account. 6. Relaunch all Amazon EC2 instances with new roles.

1,2,3,5 *A,B,C,E.* Locking down your root user and all accounts to which the administrator had access is the key here. Deleting all IAM accounts is not necessary, and it would cause great disruption to your operations. Amazon EC2 roles use temporary security tokens, so relaunching Amazon EC2 instances is not necessary.
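
As an illustration of the key-rotation step, the access keys across all IAM users can be enumerated and deactivated with boto3 (a sketch; pagination is omitted for brevity, and deactivating before deleting lets you confirm nothing legitimate breaks):

```python
import boto3

iam = boto3.client("iam")

# Deactivate every access key on every IAM user; rotate afterwards.
for user in iam.list_users()["Users"]:
    keys = iam.list_access_keys(UserName=user["UserName"])
    for key in keys["AccessKeyMetadata"]:
        iam.update_access_key(
            UserName=user["UserName"],
            AccessKeyId=key["AccessKeyId"],
            Status="Inactive",  # disable first; delete once verified unused
        )
```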

What must be done to host a static website in an Amazon Simple Storage Service (Amazon S3) bucket? (Choose 3 answers) 1. Configure the bucket for static hosting and specify an index and error document. 2. Create a bucket with the same name as the website. 3. Enable File Transfer Protocol (FTP) on the bucket. 4. Make the objects in the bucket world-readable. 5. Enable HTTP on the bucket.

1,2,4 *A,B,D.* A, B, and D are required, and normally you also set a friendly CNAME to the bucket URL. Amazon S3 does not support FTP transfers, and HTTP does not need to be enabled.

What are some of the key characteristics of Amazon Simple Storage Service (Amazon S3)? (Choose 3 answers) 1. All objects have a URL. 2. Amazon S3 can store unlimited amounts of data. 3. Objects are world-readable by default. 4. Amazon S3 uses a REST (Representational State Transfer) Application Program Interface (API). 5. You must pre-allocate the storage in a bucket.

1,2,4 *A,B,D.* C and E are incorrect: objects are private by default, and storage in a bucket does not need to be pre-allocated.

You are creating a High-Performance Computing (HPC) cluster and need very low latency and high bandwidth between instances. What combination of the following will allow this? (Choose 3 answers) 1. Use an instance type with 10 Gbps network performance. 2. Put the instances in a placement group. 3. Use Dedicated Instances. 4. Enable enhanced networking on the instances. 5. Use Reserved Instances.

1,2,4 *A,B,D.* The other answers have nothing to do with networking.

Which of the following is a valid report, certification, or third-party attestation for AWS? (Choose three) 1. SOC 1 2. PCI DSS Level 1 3. SOC 4 4. ISO 27001

1,2,4 *A,B,D.* There is no such thing as a SOC 4 report; therefore, answer C is incorrect.

Which of the following are features of enhanced networking? (Choose 3 answers) 1. More Packets Per Second (PPS) 2. Lower latency 3. Multiple network interfaces 4. Border Gateway Protocol (BGP) routing 5. Less jitter

1,2,5 *A,B,E.* These are the benefits of enhanced networking.

You need to implement a service to scan Application Program Interface (API) calls and related events' history in your AWS account. This service will detect things like unused permissions, overuse of privileged accounts, and anomalous logins. Which of the following AWS Cloud services can be leveraged to implement this service? (Choose 3 answers) 1. AWS CloudTrail 2. Amazon Simple Storage Service (Amazon S3) 3. Amazon Route 53 4. Auto Scaling 5. AWS Lambda

1,2,5 *A,B,E.* You can enable AWS CloudTrail in your AWS account to get logs of API calls and related events' history in your account. AWS CloudTrail records all of the API access events as objects in an Amazon S3 bucket that you specify at the time you enable AWS CloudTrail. You can take advantage of Amazon S3's bucket notification feature by directing Amazon S3 to publish object-created events to AWS Lambda. Whenever AWS CloudTrail writes logs to your Amazon S3 bucket, Amazon S3 can then invoke your AWS Lambda function by passing the Amazon S3 object-created event as a parameter. The AWS Lambda function code can read the log object and process the access records logged by AWS CloudTrail.
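
A hedged sketch of such a Lambda handler (the bucket notification wiring and the real analysis logic are left out; CloudTrail log objects are gzipped JSON files containing a Records array):

```python
import gzip
import json

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Invoked by an S3 object-created notification for new CloudTrail logs.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        trail = json.loads(gzip.decompress(body))
        for api_call in trail.get("Records", []):
            # Placeholder analysis: flag console logins for review.
            if api_call.get("eventName") == "ConsoleLogin":
                print(api_call["userIdentity"], api_call["sourceIPAddress"])
```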

What properties of an Amazon VPC must be specified at the time of creation? (Choose 2 answers) 1. The CIDR block representing the IP address range 2. One or more subnets for the Amazon VPC 3. The region for the Amazon VPC 4. Amazon VPC Peering relationships

1,3 *A,C.* The CIDR block is specified upon creation and cannot be changed. An Amazon VPC is associated with exactly one region, which must be specified upon creation. You can add a subnet to an Amazon VPC any time after it has been created, provided its address range falls within the Amazon VPC CIDR block and does not overlap with the address range of any existing CIDR block. You can set up peering relationships between Amazon VPCs after they have been created.

Which of the following are good use cases for Amazon CloudFront? (Choose 2 answers) 1. A popular software download site that supports users around the world, with dynamic content that changes rapidly 2. A corporate website that serves training videos to employees. Most employees are located in two corporate campuses in the same city. 3. A heavily used video and music streaming service that requires content to be delivered only to paid subscribers 4. A corporate HR website that supports a global workforce. Because the site contains sensitive data, all users must connect through a corporate Virtual Private Network (VPN).

1,3 *A,C.* The site in A is "popular" and supports "users around the world," key indicators that CloudFront is appropriate. Similarly, the site in C is "heavily used" and requires private content, which is supported by Amazon CloudFront. Both B and D are corporate use cases where the requests come from a single geographic location or appear to come from one (because of the VPN). These use cases will generally not see benefit from Amazon CloudFront.

When building a Distributed Denial of Service (DDoS)-resilient architecture, how does Amazon Virtual Private Cloud (Amazon VPC) help minimize the attack surface area? (Choose 2 answers) 1. Reduces the number of necessary Internet entry points 2. Combines end user traffic with management traffic 3. Obfuscates necessary Internet entry points to the level that untrusted end users cannot access them 4. Adds non-critical Internet entry points to the architecture 5. Scales the network to absorb DDoS attacks

1,3 *A,C.* The attack surface is composed of the different Internet entry points that allow access to your application. The strategy to minimize the attack surface area is to (a) reduce the number of necessary Internet entry points, (b) eliminate non-critical Internet entry points, (c) separate end user traffic from management traffic, (d) obfuscate necessary Internet entry points to the level that untrusted end users cannot access them, and (e) decouple Internet entry points to minimize the effects of attacks. This strategy can be accomplished with Amazon VPC.

With regard to vulnerability scans and threat assessments of the AWS platform, which of the following statements are true? (Choose two.) 1. AWS regularly performs scans of public-facing endpoint IP addresses for vulnerabilities. 2. Scans performed by AWS include customer instances. 3. AWS security notifies the appropriate parties to remediate any identified vulnerabilities. 4. Customers can perform their own scans at any time without advance notice.

1,3 *A,C.* AWS regularly scans public-facing, non-customer endpoint IP addresses and notifies appropriate parties. AWS does not scan customer instances, and customers must request the ability to perform their own scans in advance; therefore, answers A and C are correct.

Which of the following are IAM security features? (Choose 2 answers) 1. Password policies 2. Amazon DynamoDB global secondary indexes 3. MFA 4. Consolidated Billing

1,3 *A,C.* Amazon DynamoDB global secondary indexes are a performance feature of Amazon DynamoDB; Consolidated Billing is an accounting feature allowing all bills to roll up under a single account. While both are very valuable features, neither is a security feature.

Amazon Glacier is well-suited to data that is which of the following? (Choose 2 answers) 1. Is infrequently or rarely accessed 2. Must be immediately available when needed 3. Is available after a three- to five-hour restore period 4. Is frequently erased within 30 days

1,3 *A,C.* Amazon Glacier is optimized for long-term archival storage and is not suited to data that needs immediate access or short-lived data that is erased within 90 days.

An Auto Scaling group may use: (Choose 2 answers) 1. On-Demand Instances 2. Stopped instances 3. Spot Instances 4. On-premises instances 5. Already running instances if they use the same Amazon Machine Image (AMI) as the Auto Scaling group's launch configuration and are not already part of another Auto Scaling group

1,3 *A,C.* An Auto Scaling group may use On-Demand and Spot Instances. An Auto Scaling group may not use already stopped instances, instances running someplace other than AWS, and already running instances not started by the Auto Scaling group itself.

You have a workload that requires 15,000 consistent IOPS for data that must be durable. What combination of the following steps do you need? (Choose 2 answers) 1. Use an Amazon Elastic Block Store (Amazon EBS)-optimized instance. 2. Use an instance store. 3. Use a Provisioned IOPS SSD volume. 4. Use a magnetic volume.

1,3 *A,C.* B and D are incorrect because an instance store will not be durable and a magnetic volume offers an average of 100 IOPS. Amazon EBS-optimized instances reserve network bandwidth on the instance for I/O, and Provisioned IOPS SSD volumes provide the highest consistent IOPS.

Which of the following are found in an IAM policy? (Choose 2 answers) 1. Service Name 2. Region 3. Action 4. Password

1,3 *A,C.* IAM policies are independent of region, so no region is specified in the policy. IAM policies are about authorization for an already-authenticated principal, so no password is needed.

Which of the following are features of Amazon Elastic Block Store (Amazon EBS)? (Choose 2 answers) 1. Data stored on Amazon EBS is automatically replicated within an Availability Zone. 2. Amazon EBS data is automatically backed up to tape. 3. Amazon EBS volumes can be encrypted transparently to workloads on the attached instance. 4. Data on an Amazon EBS volume is lost when the attached instance is stopped.

1,3 *A,C.* There are no tapes in the AWS infrastructure. Amazon EBS volumes persist when the instance is stopped. The data is automatically replicated within an Availability Zone. Amazon EBS volumes can be encrypted upon creation and used by an instance in the same manner as if they were not encrypted.
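
For example, encryption is requested when the volume is created; a boto3 sketch (the Availability Zone, size, and volume type are illustrative):

```python
import boto3

ec2 = boto3.client("ec2")

# Encrypted=True enables transparent encryption at rest; without a
# KmsKeyId, the account's default EBS KMS key is used.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,          # GiB
    VolumeType="gp3",
    Encrypted=True,
)
print(volume["VolumeId"], volume["Encrypted"])
```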

You have purchased an m3.xlarge Linux Reserved instance in us-east-1a. In which ways can you modify this reservation? (Choose 2 answers) 1. Change it into two m3.large instances. 2. Change it to a Windows instance. 3. Move it to us-east-1b. 4. Change it to an m4.xlarge.

1,3 *A,C.* You can change the instance type only within the same instance type family, or you can change the Availability Zone. You cannot change the operating system or the instance type family.

Elastic Load Balancing health checks may be: (Choose 3 answers) 1. A ping 2. A key pair verification 3. A connection attempt 4. A page request 5. An Amazon Elastic Compute Cloud (Amazon EC2) instance status check

1,3,4 *A,C,D.* An Elastic Load Balancing health check may be a ping, a connection attempt, or a page that is checked.

Which of the following techniques can you use to help you meet Recovery Point Objective (RPO) and Recovery Time Objective (RTO) requirements? (Choose 3 answers) 1. DB snapshots 2. DB option groups 3. Read replica 4. Multi-AZ deployment

1,3,4 *A,C,D.* DB snapshots allow you to back up and recover your data, while read replicas and a Multi-AZ deployment allow you to replicate your data and reduce the time to failover.

Your security team is very concerned about the vulnerability of the IAM administrator user accounts (the accounts used to configure all IAM features and accounts). What steps can be taken to lock down these accounts? (Choose 3 answers) 1. Add multi-factor authentication (MFA) to the accounts. 2. Limit logins to a particular U.S. state. 3. Implement a password policy on the AWS account. 4. Apply a source IP address condition to the policy that only grants permissions when the user is on the corporate network. 5. Add a CAPTCHA test to the accounts.

1,3,4 *A,C,D.* Neither B nor E is a feature supported by IAM.

Which of the following are features of Amazon Simple Notification Service (Amazon SNS)? (Choose 3 answers) 1. Publishers 2. Readers 3. Subscribers 4. Topic

1,3,4 *A,C,D.* Publishers, subscribers, and topics are the correct answers. You have subscribers to an Amazon SNS topic, not readers.

Which of the following are based on temporary security tokens? (Choose 2 answers) 1. Amazon EC2 roles 2. MFA 3. Root user 4. Federation

1,4 *A,D.* Amazon EC2 roles provide a temporary token to applications running on the instance; federation maps policies to identities from other sources via temporary tokens.

Which of the following are required elements of an Auto Scaling group? (Choose 2 answers) 1. Minimum size 2. Health checks 3. Desired capacity 4. Launch configuration

1,4 *A,D.* An Auto Scaling group must have a minimum size and a launch configuration defined in order to be created. Health checks and a desired capacity are optional.

Your e-commerce site was designed to be stateless and currently runs on a fleet of Amazon Elastic Compute Cloud (Amazon EC2) instances. In an effort to control cost and increase availability, you have a requirement to scale the fleet based on CPU and network utilization to match the demand curve for your site. What services do you need to meet this requirement? (Choose 2 answers) 1. Amazon CloudWatch 2. Amazon DynamoDB 3. Elastic Load Balancing 4. Auto Scaling 5. Amazon Simple Storage Service (Amazon S3)

A, D. Auto Scaling enables you to follow the demand curve for your applications closely, reducing the need to provision Amazon EC2 capacity manually in advance. For example, you can set a condition to add new Amazon EC2 instances in increments to the Auto Scaling group when the average CPU and network utilization of your Amazon EC2 fleet monitored in Amazon CloudWatch is high; similarly, you can set a condition to remove instances in the same increments when CPU and network utilization are low.
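
As an illustration of that pattern (not part of the original answer key), here is a minimal boto3 sketch in which a CloudWatch alarm on average CPU triggers a simple Auto Scaling policy; the group name, threshold, and adjustment size are assumptions:

```python
# Sketch: a CloudWatch alarm on average fleet CPU drives a simple
# scale-out policy on an existing Auto Scaling group ("web-asg" is
# a placeholder name).
import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

policy_arn = autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="scale-out-on-cpu",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=2,  # add two instances per scale-out event
)["PolicyARN"]

cloudwatch.put_metric_alarm(
    AlarmName="web-asg-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=70.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy_arn],  # fire the scaling policy when breached
)
```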

What is needed before you can enable cross-region replication on an Amazon Simple Storage Service (Amazon S3) bucket? (Choose 2 answers) A. Enable versioning on the bucket. B. Enable a lifecycle rule to migrate data to the second region. C. Enable static website hosting. D. Create an AWS Identity and Access Management (IAM) policy to allow Amazon S3 to replicate objects on your behalf.

A, D. You must enable versioning before you can enable cross-region replication, and Amazon S3 must have IAM permissions to perform the replication. Lifecycle rules migrate data from one storage class to another, not from one bucket to another. Static website hosting is not a prerequisite for replication.
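
A minimal boto3 sketch of those two prerequisites, assuming placeholder bucket names and a pre-created replication role (the destination bucket must be versioned as well):

```python
# Sketch: enable versioning, then turn on cross-region replication.
# Bucket names and the role ARN are placeholders; the IAM role must
# grant Amazon S3 permission to replicate objects on your behalf.
import boto3

s3 = boto3.client("s3")

# Versioning is required on the source (and destination) bucket first.
s3.put_bucket_versioning(
    Bucket="source-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)

s3.put_bucket_replication(
    Bucket="source-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
        "Rules": [{
            "ID": "replicate-all",
            "Prefix": "",  # empty prefix = replicate every object
            "Status": "Enabled",
            "Destination": {"Bucket": "arn:aws:s3:::destination-bucket"},
        }],
    },
)
```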

Which of the following AWS Cloud services are designed according to the Multi-AZ principle? (Choose 2 answers) A. Amazon DynamoDB B. Amazon ElastiCache C. Elastic Load Balancing D. Amazon Virtual Private Cloud (Amazon VPC) E. Amazon Simple Storage Service (Amazon S3)

A, E. Amazon DynamoDB runs across AWS proven, high-availability data centers. The service replicates data across three facilities in an AWS region to provide fault tolerance in the event of a server failure or Availability Zone outage. Amazon S3 provides durable infrastructure to store important data and is designed for durability of 99.999999999% of objects. Your data is redundantly stored across multiple facilities and multiple devices in each facility. While Elastic Load Balancing and Amazon ElastiCache can be deployed across multiple Availability Zones, you must explicitly take such steps when creating them.

Amazon CloudWatch supports which types of monitoring plans? (Choose 2 answers) A. Basic monitoring, which is free B. Basic monitoring, which has an additional cost C. Ad hoc monitoring, which is free D. Ad hoc monitoring, which has an additional cost E. Detailed monitoring, which is free F. Detailed monitoring, which has an additional cost

A, F. Amazon CloudWatch has two plans: basic, which is free, and detailed, which has an additional cost. There is no ad hoc plan for Amazon CloudWatch.

1. AWS communicates with customers regarding its security and control environment through a variety of different mechanisms. Which of the following are valid mechanisms? (Choose 3 answers) A. Obtaining industry certifications and independent third-party attestations B. Publishing information about security and AWS control practices via the website, whitepapers, and blogs C. Directly providing customers with certificates, reports, and other documentation (under NDA in some cases) D. Allowing customers' auditors direct access to AWS data centers, infrastructure, and senior staff

1. A, B, C. Answers A through C describe valid mechanisms that AWS uses to communicate with customers regarding its security and control environment. AWS does not allow customers' auditors direct access to AWS data centers, infrastructure, or staff.

1. Which of the following objects are good candidates to store in a cache? (Choose 3 answers) A. Session state B. Shopping cart C. Product catalog D. Bank account balance

1. A, B, C. Many types of objects are good candidates to cache because they have the potential to be accessed by numerous users repeatedly. Even the balance of a bank account could be cached for short periods of time if the back-end database query is slow to respond.

1. Which of the following are required elements of an Auto Scaling group? (Choose 2 answers) A. Minimum size B. Health checks C. Desired capacity D. Launch configuration

1. A, D. An Auto Scaling group must have a minimum size and a launch configuration defined in order to be created. Health checks and a desired capacity are optional.

1. Which of the following methods will allow an application using an AWS SDK to be authenticated as a principal to access AWS Cloud services? (Choose 2 answers) A. Create an IAM user and store the user name and password for the user in the application's configuration. B. Create an IAM user and store both parts of the access key for the user in the application's configuration. C. Run the application on an Amazon EC2 instance with an assigned IAM role. D. Make all the API calls over an SSL connection.

1. B, C. Programmatic access is authenticated with an access key, not with user names/passwords. IAM roles provide a temporary security token to an application using an SDK.

1. When designing a loosely coupled system, which AWS services provide an intermediate durable storage layer between components? (Choose 2 answers) A. Amazon CloudFront B. Amazon Kinesis C. Amazon Route 53 D. AWS CloudFormation E. Amazon Simple Queue Service (Amazon SQS)

1. B, E. Amazon Kinesis is a platform for streaming data on AWS, offering powerful services to make it easy to load and analyze streaming data. Amazon SQS is a fast, reliable, scalable, and fully managed message queuing service. Amazon SQS makes it simple and cost-effective to decouple the components of a cloud application.

1. Which is an operational process performed by AWS for data security? A. Advanced Encryption Standard (AES)-256 encryption of data stored on any shared storage device B. Decommissioning of storage devices using industry-standard practices C. Background virus scans of Amazon Elastic Block Store (Amazon EBS) volumes and Amazon EBS snapshots D. Replication of data across multiple AWS regions E. Secure wiping of Amazon EBS data when an Amazon EBS volume is unmounted

1. B. All decommissioned magnetic storage devices are degaussed and physically destroyed in accordance with industry-standard practices.

1. Which AWS database service is best suited for traditional Online Transaction Processing (OLTP)? A. Amazon Redshift B. Amazon Relational Database Service (Amazon RDS) C. Amazon Glacier D. Elastic Database

1. B. Amazon RDS is best suited for traditional OLTP transactions. Amazon Redshift, on the other hand, is designed for OLAP workloads. Amazon Glacier is designed for cold archival storage.

1. Which type of record is commonly used to route traffic to an IPv6 address? A. An A record B. A CNAME C. An AAAA record D. An MX record

1. C. An AAAA record is used to route traffic to an IPv6 address, whereas an A record is used to route traffic to an IPv4 address.

1. Your web application needs four instances to support steady traffic nearly all of the time. On the last day of each month, the traffic triples. What is a cost-effective way to handle this traffic pattern? A. Run 12 Reserved Instances all of the time. B. Run four On-Demand Instances constantly, then add eight more On-Demand Instances on the last day of each month. C. Run four Reserved Instances constantly, then add eight On-Demand Instances on the last day of each month. D. Run four On-Demand Instances constantly, then add eight Reserved Instances on the last day of each month.

1. C. Reserved Instances provide cost savings when you can commit to running instances full time, such as to handle the base traffic. On-Demand Instances provide the flexibility to handle traffic spikes, such as on the last day of the month.

1. What is the minimum size subnet that you can have in an Amazon VPC? A. /24 B. /26 C. /28 D. /30

1. C. The minimum size subnet that you can have in an Amazon VPC is /28, which yields 16 IP addresses (AWS reserves five of them in every subnet).
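
The address arithmetic behind that answer:

```latex
2^{32-28} = 16 \text{ addresses per /28 subnet}, \qquad 16 - 5 \text{ (reserved by AWS)} = 11 \text{ usable}
```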

1. In what ways does Amazon Simple Storage Service (Amazon S3) object storage differ from block and file storage? (Choose 2 answers) A. Amazon S3 stores data in fixed size blocks. B. Objects are identified by a numbered address. C. Objects can be any size. D. Objects contain both data and metadata. E. Objects are stored in buckets.

1. D, E. Objects are stored in buckets, and objects contain both data and metadata.

1. Which of the following describes a physical location around the world where AWS clusters data centers? A. Endpoint B. Collection C. Fleet D. Region

1. D. A region is a named set of AWS resources in the same geographical area. A region comprises at least two Availability Zones. Endpoint, Collection, and Fleet do not describe a physical location around the world where AWS clusters data centers.

1. Which of the following is not a supported Amazon Simple Notification Service (Amazon SNS) protocol? A. HTTPS B. AWS Lambda C. Email-JSON D. Amazon DynamoDB

1. D. Amazon DynamoDB is not a supported Amazon SNS protocol; supported protocols include HTTP/HTTPS, email, email-JSON, Amazon SQS, SMS, and AWS Lambda.

10. Which of the following are features of Amazon Elastic Block Store (Amazon EBS)? (Choose 2 answers) A. Data stored on Amazon EBS is automatically replicated within an Availability Zone. B. Amazon EBS data is automatically backed up to tape. C. Amazon EBS volumes can be encrypted transparently to workloads on the attached instance. D. Data on an Amazon EBS volume is lost when the attached instance is stopped.

10. A, C. There are no tapes in the AWS infrastructure. Amazon EBS volumes persist when the instance is stopped. The data is automatically replicated within an Availability Zone. Amazon EBS volumes can be encrypted upon creation and used by an instance in the same manner as if they were not encrypted.

10. You are working for a small organization without a dedicated database administrator on staff. You need to install Microsoft SQL Server Enterprise edition quickly to support an accounting back office application on Amazon Relational Database Service (Amazon RDS). What should you do? A. Launch an Amazon RDS DB Instance, and select Microsoft SQL Server Enterprise Edition under the Bring Your Own License (BYOL) model. B. Provision SQL Server Enterprise Edition using the License Included option from the Amazon RDS Console. C. SQL Server Enterprise edition is only available via the Command Line Interface (CLI). Install the command-line tools on your laptop, and then provision your new Amazon RDS Instance using the CLI. D. You cannot use SQL Server Enterprise edition on Amazon RDS. You should install this on to a dedicated Amazon Elastic Compute Cloud (Amazon EC2) Instance.

10. A. Amazon RDS supports Microsoft SQL Server Enterprise edition and the license is available only under the BYOL model.

10. What are some reasons to enable cross-region replication on an Amazon Simple Storage Service (Amazon S3) bucket? (Choose 2 answers) A. You want a backup of your data in case of accidental deletion. B. You have a set of users or customers who can access the second bucket with lower latency. C. For compliance reasons, you need to store data in a location at least 300 miles away from the first region. D. Your data needs at least five nines of durability.

10. B, C. Cross-region replication can help lower latency and satisfy compliance requirements on distance. Amazon S3 is designed for eleven nines of durability for objects in a single region, so a second region does not significantly increase durability. Cross-region replication does not protect against accidental deletion.

10. You are trying to decrypt ciphertext with AWS KMS and the decryption operation is failing. Which of the following are possible causes? (Choose 2 answers) A. The private key does not match the public key in the ciphertext. B. The plaintext was encrypted along with an encryption context, and you are not providing the identical encryption context when calling the Decrypt API. C. The ciphertext you are trying to decrypt is not valid. D. You are not providing the correct symmetric key to the Decrypt API.

10. B, C. Encryption context is a set of key/value pairs that you can pass to AWS KMS when you call the Encrypt, Decrypt, ReEncrypt, GenerateDataKey, and GenerateDataKeyWithoutPlaintext APIs. Although the encryption context is not included in the ciphertext, it is cryptographically bound to the ciphertext during encryption and must be passed again when you call the Decrypt (or ReEncrypt) API. Invalid ciphertext for decryption is plaintext that has been encrypted in a different AWS account or ciphertext that has been altered since it was originally encrypted.
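
A short boto3 sketch of the round trip; the key alias and context values are placeholders, and the Decrypt call succeeds only because the context matches exactly:

```python
# Sketch: the EncryptionContext passed to Encrypt must be passed again,
# byte-for-byte, to Decrypt, or the operation fails.
import boto3

kms = boto3.client("kms")

ciphertext = kms.encrypt(
    KeyId="alias/my-app-key",          # placeholder key alias
    Plaintext=b"account=42",
    EncryptionContext={"purpose": "demo"},
)["CiphertextBlob"]

plaintext = kms.decrypt(
    CiphertextBlob=ciphertext,
    EncryptionContext={"purpose": "demo"},  # must match exactly
)["Plaintext"]
```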

10. You are running a suite of microservices on AWS Lambda that provide the business logic and access to data stored in Amazon DynamoDB for your task management system. You need to create well-defined RESTful Application Program Interfaces (APIs) for these microservices that will scale with traffic to support a new mobile application. What AWS Cloud service can you use to create the necessary RESTful APIs? A. Amazon Kinesis B. Amazon API Gateway C. Amazon Cognito D. Amazon Elastic Compute Cloud (Amazon EC2) Container Registry

10. B. Amazon API Gateway is a fully managed service that makes it easy for developers to publish, maintain, monitor, and secure APIs at any scale. You can create an API that acts as a "front door" for applications to access data, business logic, or functionality from your code running on AWS Lambda. Amazon API Gateway handles all of the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, authorization and access control, monitoring, and API version management.

10. Your company provides a mobile voting application for a popular TV show, and 5 to 25 million viewers all vote in a 15-second timespan. What mechanism can you use to decouple the voting application from your back-end services that tally the votes? A. AWS CloudTrail B. Amazon Simple Queue Service (Amazon SQS) C. Amazon Redshift D. Amazon Simple Notification Service (Amazon SNS)

10. B. Amazon SQS is a fast, reliable, scalable, fully managed message queuing service that allows organizations to decouple the components of a cloud application. With Amazon SQS, organizations can transmit any volume of data, at any level of throughput, without losing messages or requiring other services to be always available. AWS CloudTrail records AWS API calls, and Amazon Redshift is a data warehouse, neither of which would be useful as an architecture component for decoupling components. Amazon SNS provides a messaging bus complement to Amazon SQS; however, it doesn't provide the decoupling of components necessary for this scenario.

10. When it comes to risk management, which of the following is true? A. AWS does not develop a strategic business plan; risk management and mitigation is entirely the responsibility of the customer. B. AWS has developed a strategic business plan to identify any risks and implemented controls to mitigate or manage those risks. Customers do not need to develop and maintain their own risk management plans. C. AWS has developed a strategic business plan to identify any risks and has implemented controls to mitigate or manage those risks. Customers should also develop and maintain their own risk management plans to ensure they are compliant with any relevant controls and certifications. D. Neither AWS nor the customer needs to worry about risk management, so no plan is needed from either party.

10. C. AWS has developed a strategic business plan, and customers should also develop and maintain their own risk management plans, therefore answer C is correct.

10. What is the format of an IAM policy? A. XML B. Key/value pairs C. JSON D. Tab-delimited text

10. C. An IAM policy is a JSON document.
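
For example, a minimal policy document can be built as a Python dict and serialized to the JSON that IAM expects (the bucket name is a placeholder):

```python
# Sketch: a minimal IAM policy document in its JSON format.
import json

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::example-bucket/*",
    }],
}
print(json.dumps(policy, indent=2))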

10. As a Solutions Architect, how should you architect systems on AWS? A. You should architect for least cost. B. You should architect your AWS usage to take advantage of Amazon Simple Storage Service's (Amazon S3) durability. C. You should architect your AWS usage to take advantage of multiple regions and Availability Zones. D. You should architect with Amazon Elastic Compute Cloud (Amazon EC2) Auto Scaling to ensure capacity is available when needed.

10. C. Distributing applications across multiple Availability Zones provides the ability to remain resilient in the face of most failure modes, including natural disasters or system failures.

10. You have created a custom Amazon VPC with both private and public subnets. You have created a NAT instance and deployed this instance to a public subnet. You have attached an EIP address and added your NAT to the route table. Unfortunately, instances in your private subnet still cannot access the Internet. What may be the cause of this? A. Your NAT is in a public subnet, but it needs to be in a private subnet. B. Your NAT should be behind an Elastic Load Balancer. C. You should disable source/destination checks on the NAT. D. Your NAT has been deployed on a Windows instance, but your other instances are Linux. You should redeploy the NAT onto a Linux instance.

10. C. You should disable source/destination checks on the NAT. Amazon EC2 instances perform source/destination checks by default, but a NAT instance must be able to send and receive traffic for which it is neither the source nor the destination.

10. In the basic monitoring package for Amazon Elastic Compute Cloud (Amazon EC2), what Amazon CloudWatch metrics are available? A. Web server visible metrics such as number of failed transaction requests B. Operating system visible metrics such as memory utilization C. Database visible metrics such as number of connections D. Hypervisor visible metrics such as CPU utilization

10. D. Amazon CloudWatch metrics provide hypervisor visible metrics.

10. How does Amazon Simple Queue Service (Amazon SQS) deliver messages? A. Last In, First Out (LIFO) B. First In, First Out (FIFO) C. Sequentially D. Amazon SQS doesn't guarantee delivery of your messages in any particular order.

10. D. Amazon SQS does not guarantee the order in which your messages will be delivered.

10. Your company has its primary production site in Western Europe and its DR site in the Asia Pacific. You need to configure DNS so that if your primary site becomes unavailable, you can fail DNS over to the secondary site. Which DNS routing policy would best achieve this? A. Weighted routing B. Geolocation routing C. Simple routing D. Failover routing

10. D. Failover-based routing would best achieve this objective: Amazon Route 53 sends traffic to the secondary site only when health checks against the primary site fail.

11. Which security scheme is used by the AWS Multi-Factor Authentication (AWS MFA) token? A. Time-Based One-Time Password (TOTP) B. Perfect Forward Secrecy (PFC) C. Ephemeral Diffie Hellman (EDH) D. Split-Key Encryption (SKE)

11. A. A virtual MFA device uses a software application that generates six-digit authentication codes that are compatible with the TOTP standard, as described in RFC 6238.
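
For the curious, RFC 6238 is small enough to sketch with the Python standard library; this is an illustrative toy, not AWS code, and the secret is a placeholder:

```python
# Sketch of TOTP per RFC 6238: an HMAC-SHA1 over the current 30-second
# time step, dynamically truncated to six digits.
import hashlib, hmac, struct, time

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    counter = int(time.time()) // step
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation offset
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

print(totp(b"12345678901234567890"))  # six-digit code, changes every 30s
```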

11. Of the following options, what is an efficient way to fanout a single Amazon Simple Notification Service (Amazon SNS) message to multiple Amazon Simple Queue Service (Amazon SQS) queues? A. Create an Amazon SNS topic using Amazon SNS. Then create and subscribe multiple Amazon SQS queues sent to the Amazon SNS topic. B. Create one Amazon SQS queue that subscribes to multiple Amazon SNS topics. C. Amazon SNS allows exactly one subscriber to each topic, so fanout is not possible. D. Create an Amazon SNS topic using Amazon SNS. Create an application that subscribes to that topic and duplicates the message. Send copies to multiple Amazon SQS queues.

11. A. Multiple queues can subscribe to an Amazon SNS topic, which can enable parallel asynchronous processing.
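
A minimal boto3 sketch of the fanout pattern; names are placeholders, and note that each queue also needs a queue policy permitting SNS to deliver to it (omitted for brevity):

```python
# Sketch: one SNS topic fanned out to several SQS queue subscribers.
import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

topic_arn = sns.create_topic(Name="orders")["TopicArn"]

for name in ("billing-queue", "shipping-queue", "analytics-queue"):
    queue_url = sqs.create_queue(QueueName=name)["QueueUrl"]
    queue_arn = sqs.get_queue_attributes(
        QueueUrl=queue_url, AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]
    sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)

# One publish now fans out to all three queues in parallel.
sns.publish(TopicArn=topic_arn, Message="order-12345 created")
```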

11. Which of the following will occur when an Amazon Elastic Block Store (Amazon EBS)-backed Amazon EC2 instance in an Amazon VPC with an associated EIP is stopped and started? (Choose 2 answers) A. The EIP will be dissociated from the instance. B. All data on instance-store devices will be lost. C. All data on Amazon EBS devices will be lost. D. The ENI is detached. E. The underlying host for the instance is changed.

11. B, E. In the EC2-Classic network, the EIP will be disassociated from the instance; in the EC2-VPC network, the EIP remains associated with the instance. Regardless of the underlying network, a stop/start of an Amazon EBS-backed Amazon EC2 instance always changes the host computer.

11. Your company has 30 years of financial records that take up 15TB of on-premises storage. It is regulated that you maintain these records, but in the year you have worked for the company no one has ever requested any of this data. Given that the company data center is already filling the bandwidth of its Internet connection, what is an alternative way to store the data on the most appropriate cloud storage? A. AWS Import/Export to Amazon Simple Storage Service (Amazon S3) B. AWS Import/Export to Amazon Glacier C. Amazon Kinesis D. Amazon Elastic MapReduce (AWS EMR)

11. B. Because the Internet connection is full, the best solution will be based on using AWS Import/Export to ship the data. The most appropriate storage location for data that must be stored, but is very rarely accessed, is Amazon Glacier.

11. You are building the database tier for an enterprise application that gets occasional activity throughout the day. Which storage type should you select as your default option? A. Magnetic storage B. General Purpose Solid State Drive (SSD) C. Provisioned IOPS (SSD) D. Storage Area Network (SAN)-attached

11. B. General Purpose (SSD) volumes are generally the right choice for databases that have bursts of activity.

11. Which type of DNS record should you use to resolve a domain name to another domain name? A. An A record B. A CNAME record C. An SPF record D. A PTR record

11. B. The CNAME record maps a name to another name. It should be used only when there are no other records on that name.

11. The AWS control environment is in place for the secure delivery of AWS Cloud service offerings. Which of the following does the collective control environment NOT explicitly include? A. People B. Energy C. Technology D. Processes

11. B. The collective control environment includes people, processes, and technology necessary to establish and maintain an environment that supports the operating effectiveness of AWS control framework. Energy is not a discretely identified part of the control environment, therefore B is the correct answer.

11. You need to take a snapshot of an Amazon Elastic Block Store (Amazon EBS) volume. How long will the volume be unavailable? A. It depends on the provisioned size of the volume. B. The volume will be available immediately. C. It depends on the amount of data stored on the volume. D. It depends on whether the attached instance is an Amazon EBS-optimized instance.

11. B. There is no delay in processing when commencing a snapshot; snapshots are taken asynchronously, and the volume remains available for use while the snapshot is in progress.

11. Your WordPress website is hosted on a fleet of Amazon Elastic Compute Cloud (Amazon EC2) instances that leverage Auto Scaling to provide high availability. To ensure that the content of the WordPress site is sustained through scale up and scale down events, you need a common file system that is shared between more than one Amazon EC2 instance. Which AWS Cloud service can meet this requirement? A. Amazon CloudFront B. Amazon ElastiCache C. Amazon Elastic File System (Amazon EFS) D. Amazon Elastic Beanstalk

11. C. Amazon EFS is a file storage service for Amazon EC2 instances. Multiple Amazon EC2 instances can access an Amazon EFS file system at the same time, providing a common data source for the content of the WordPress site running on more than one instance.

11. A cell phone company is running dynamic-content television commercials for a contest. They want their website to handle traffic spikes that come after a commercial airs. The website is interactive, offering personalized content to each visitor based on location, purchase history, and the current commercial airing. Which architecture will configure Auto Scaling to scale out to respond to spikes of demand, while minimizing costs during quiet periods? A. Set the minimum size of the Auto Scaling group so that it can handle high traffic volumes without needing to scale out. B. Create an Auto Scaling group large enough to handle peak traffic loads, and then stop some instances. Configure Auto Scaling to scale out when traffic increases using the stopped instances, so new capacity will come online quickly. C. Configure Auto Scaling to scale out as traffic increases. Configure the launch configuration to start new instances from a preconfigured Amazon Machine Image (AMI). D. Use Amazon CloudFront and Amazon Simple Storage Service (Amazon S3) to cache changing content, with the Auto Scaling group set as the origin. Configure Auto Scaling to have sufficient instances necessary to initially populate CloudFront and Amazon ElastiCache, and then scale in after the cache is fully populated.

11. C. Auto Scaling is designed to scale out based on an event like increased traffic while being cost effective when not needed.

11. Your company requires that all data sent to external storage be encrypted before being sent. Which Amazon Simple Storage Service (Amazon S3) encryption solution will meet this requirement? A. Server-Side Encryption (SSE) with AWS-managed keys (SSE-S3) B. SSE with customer-provided keys (SSE-C) C. Client-side encryption with customer-managed keys D. Server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS)

11. C. If data must be encrypted before being sent to Amazon S3, client-side encryption must be used.

12. You are changing your application to move session state information off the individual Amazon Elastic Compute Cloud (Amazon EC2) instances to take advantage of the elasticity and cost benefits provided by Auto Scaling. Which of the following AWS Cloud services is best suited as an alternative for storing session state information? A. Amazon DynamoDB B. Amazon Redshift C. Amazon Storage Gateway D. Amazon Kinesis

12. A. Amazon DynamoDB is a NoSQL database store that is a great choice as an alternative due to its scalability, high-availability, and durability characteristics. Many platforms provide open-source, drop-in replacement libraries that allow you to store native sessions in Amazon DynamoDB. Amazon DynamoDB is a great candidate for a session storage solution in a share-nothing, distributed architecture.

12. DynamoDB tables may contain sensitive data that needs to be protected. Which of the following is a way for you to protect DynamoDB table content? (Choose 2 answers) A. DynamoDB encrypts all data server-side by default so nothing is required. B. DynamoDB can store data encrypted with a client-side encryption library solution before storing the data in DynamoDB. C. DynamoDB obfuscates all data stored so encryption is not required. D. DynamoDB can be used with the AWS Key Management Service to encrypt the data before storing the data in DynamoDB. E. DynamoDB should not be used to store sensitive information requiring protection.

12. B, D. Amazon DynamoDB does not have a server-side feature to encrypt items within a table. You need to use a solution outside of DynamoDB such as a client-side library to encrypt items before storing them, or a key management service like AWS Key Management Service to manage keys that are used to encrypt items before storing them in DynamoDB.

12. You have a popular web application that accesses data stored in an Amazon Simple Storage Service (Amazon S3) bucket. You expect the access to be very read-intensive, with expected request rates of up to 500 GETs per second from many clients. How can you increase the performance and scalability of Amazon S3 in this case? A. Turn on cross-region replication to ensure that data is served from multiple locations. B. Ensure randomness in the namespace by including a hash prefix to key names. C. Turn on server access logging. D. Ensure that key names are sequential to enable pre-fetch.

12. B. Amazon S3 scales automatically, but for request rates over 100 GET requests per second, it helps to make sure there is some randomness in the key space. Replication and logging will not affect performance or scalability. Using sequential key names could have a negative effect on performance or scalability.
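
A tiny sketch of the hash-prefix idea the answer describes; the key layout and prefix width are illustrative:

```python
# Sketch: prepend a short hash of the key so objects spread across the
# S3 key space instead of clustering under a sequential date prefix.
import hashlib

def hashed_key(key: str, width: int = 4) -> str:
    prefix = hashlib.md5(key.encode()).hexdigest()[:width]
    return f"{prefix}/{key}"

# Prints something like '<4-hex-chars>/2016/03/01/photo-0001.jpg'
print(hashed_key("2016/03/01/photo-0001.jpg"))
```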

12. For an application running in the ap-northeast-1 region with three Availability Zones (ap-northeast-1a, ap-northeast-1b, and ap-northeast-1c), which instance deployment provides high availability for the application that normally requires nine running Amazon Elastic Compute Cloud (Amazon EC2) instances but can run on a minimum of 65 percent capacity while Auto Scaling launches replacement instances in the remaining Availability Zones? A. Deploy the application on four servers in ap-northeast-1a and five servers in ap-northeast-1b, and keep five stopped instances in ap-northeast-1a as reserve. B. Deploy the application on three servers in ap-northeast-1a, three servers in ap-northeast-1b, and three servers in ap-northeast-1c. C. Deploy the application on six servers in ap-northeast-1b and three servers in ap-northeast-1c. D. Deploy the application on nine servers in ap-northeast-1b, and keep nine stopped instances in ap-northeast-1a as reserve.

12. B. Auto Scaling will provide high availability across three Availability Zones with three Amazon EC2 instances in each and keep capacity above the required minimum capacity, even in the event of an entire Availability Zone becoming unavailable.

12. You are designing an e-commerce web application that will scale to potentially hundreds of thousands of concurrent users. Which database technology is best suited to hold the session state for large numbers of concurrent users? A. Relational database using Amazon Relational Database Service (Amazon RDS) B. NoSQL database table using Amazon DynamoDB C. Data warehouse using Amazon Redshift D. Amazon Simple Storage Service (Amazon S3)

12. B. NoSQL databases like Amazon DynamoDB excel at scaling to hundreds of thousands of requests with key/value access to user profile and session.

12. You are restoring an Amazon Elastic Block Store (Amazon EBS) volume from a snapshot. How long will it be before the data is available? A. It depends on the provisioned size of the volume. B. The data will be available immediately. C. It depends on the amount of data stored on the volume. D. It depends on whether the attached instance is an Amazon EBS-optimized instance.

12. B. The volume is created immediately but the data is loaded lazily. This means that the volume can be accessed upon creation, and if the data being requested has not yet been restored, it will be restored upon first request.

12. Which is a function that Amazon Route 53 does not perform? A. Domain registration B. DNS service C. Load balancing D. Health checks

12. C. Amazon Route 53 performs three main functions: domain registration, DNS service, and health checking.

12. Your company collects information from the point of sale registers at all of its franchise locations. Each month these processes collect 200TB of information stored in Amazon Simple Storage Service (Amazon S3). Analytics jobs taking 24 hours are performed to gather knowledge from this data. Which of the following will allow you to perform these analytics in a cost-effective way? A. Copy the data to a persistent Amazon Elastic MapReduce (Amazon EMR) cluster, and run the MapReduce jobs. B. Create an application that reads the information of the Amazon S3 bucket and runs it through an Amazon Kinesis stream. C. Run a transient Amazon EMR cluster, and run the MapReduce jobs against the data directly in Amazon S3. D. Launch a d2.8xlarge (32 vCPU, 244GB RAM) Amazon Elastic Compute Cloud (Amazon EC2) instance, and run an application to read and process each object sequentially.

12. C. Because the job is run monthly, a persistent cluster will incur unnecessary compute costs during the rest of the month. Amazon Kinesis is not appropriate because the company is running analytics as a batch job and not on a stream. A single large instance does not scale out to accommodate the large compute needs.

12. Who is responsible for the configuration of security groups in an AWS environment? A. The customer and AWS are both jointly responsible for ensuring that security groups are correctly and securely configured. B. AWS is responsible for ensuring that all security groups are correctly and securely configured. Customers do not need to worry about security group configuration. C. Neither AWS nor the customer is responsible for the configuration of security groups; security groups are intelligently and automatically configured using traffic heuristics. D. AWS provides the security group functionality as a service, but the customer is responsible for correctly and securely configuring their own security groups.

12. D. Customers are responsible for ensuring all of their security group configurations are appropriate for their own applications, therefore answer D is correct.

12. Your application polls an Amazon Simple Queue Service (Amazon SQS) queue frequently and returns immediately, often with empty ReceiveMessageResponses. What is one thing that can be done to reduce Amazon SQS costs? A. Pricing on Amazon SQS does not include a cost for service requests; therefore, there is no concern. B. Increase the timeout value for short polling to wait for messages longer before returning a response. C. Change the message visibility value to a higher number. D. Use long polling by supplying a WaitTimeSeconds of greater than 0 seconds when calling ReceiveMessage.

12. D. Long polling allows your application to poll the queue, and, if nothing is there, Amazon SQS waits for an amount of time you specify (between 1 and 20 seconds). If a message arrives in that time, it is delivered to your application as soon as possible. If a message does not arrive in that time, you need to execute the ReceiveMessage function again.
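
A minimal boto3 sketch of long polling; the queue URL is a placeholder:

```python
# Sketch: long polling with the maximum 20-second wait, versus the
# default short poll that returns immediately (often empty).
import boto3

sqs = boto3.client("sqs")

response = sqs.receive_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/my-queue",
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20,  # > 0 enables long polling; 20s is the maximum
)
for message in response.get("Messages", []):
    print(message["Body"])
```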

12. How many VPC Peering connections are required for four VPCs located within the same AWS region to be able to send traffic to each of the others? A. 3 B. 4 C. 5 D. 6

12. D. Six VPC peering connections are needed so that each of the four VPCs can send traffic to the others; peering connections are not transitive, so every pair of VPCs needs its own connection.
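
That count is just the full-mesh formula:

```latex
\text{peerings}(n) = \binom{n}{2} = \frac{n(n-1)}{2}, \qquad \text{peerings}(4) = \frac{4 \cdot 3}{2} = 6
```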

13. Which of the following techniques can you use to help you meet Recovery Point Objective (RPO) and Recovery Time Objective (RTO) requirements? (Choose 3 answers) A. DB snapshots B. DB option groups C. Read replica D. Multi-AZ deployment

13. A, C, D. DB snapshots allow you to back up and recover your data, while read replicas and a Multi-AZ deployment allow you to replicate your data and reduce the time to failover.

13. You have a workload that requires 15,000 consistent IOPS for data that must be durable. What combination of the following steps do you need? (Choose 2 answers) A. Use an Amazon Elastic Block Store (Amazon EBS)-optimized instance. B. Use an instance store. C. Use a Provisioned IOPS SSD volume. D. Use a magnetic volume.

13. A, C. B and D are incorrect because an instance store will not be durable and a magnetic volume offers an average of 100 IOPS. Amazon EBS-optimized instances reserve network bandwidth on the instance for IO, and Provisioned IOPS SSD volumes provide the highest consistent IOPS.

13. What is needed before you can enable cross-region replication on an Amazon Simple Storage Service (Amazon S3) bucket? (Choose 2 answers) A. Enable versioning on the bucket. B. Enable a lifecycle rule to migrate data to the second region. C. Enable static website hosting. D. Create an AWS Identity and Access Management (IAM) policy to allow Amazon S3 to replicate objects on your behalf.

13. A, D. You must enable versioning before you can enable cross-region replication, and Amazon S3 must have IAM permissions to perform the replication. Lifecycle rules migrate data from one storage class to another, not from one bucket to another. Static website hosting is not a prerequisite for replication.

13. Which DNS record can be used to store human-readable information about a server, network, and other accounting data with a host? A. A TXT record B. An MX record C. An SPF record D. A PTR record

13. A. A TXT record is used to store arbitrary and unformatted text with a host.

13. Which of the following are characteristics of the Auto Scaling service on AWS? (Choose 3 answers) A. Sends traffic to healthy instances B. Responds to changing conditions by adding or terminating Amazon Elastic Compute Cloud (Amazon EC2) instances C. Collects and tracks metrics and sets alarms D. Delivers push notifications E. Launches instances from a specified Amazon Machine Image (AMI) F. Enforces a minimum number of running Amazon EC2 instances

13. B, E, F. Auto Scaling responds to changing conditions by adding or terminating instances, launches instances from an AMI specified in the launch configuration associated with the Auto Scaling group, and enforces a minimum number of instances in the min-size parameter of the Auto Scaling group.

13. Which of the following AWS resources would you use in order for an EC2-VPC instance to resolve DNS names outside of AWS? A. A VPC peering connection B. A DHCP option set C. A routing rule D. An IGW

13. B. A DHCP option set allows customers to define DNS servers for DNS name resolution, establish domain names for instances within an Amazon VPC, define NTP servers, and define the NetBIOS name servers.

13. A media sharing application is producing a very high volume of data in a very short period of time. Your back-end services are unable to manage the large volume of transactions. What option provides a way to manage the flow of transactions to your back-end services? A. Store the inbound transactions in an Amazon Relational Database Service (Amazon RDS) instance so that your back-end services can retrieve them as time permits. B. Use an Amazon Simple Queue Service (Amazon SQS) queue to buffer the inbound transactions. C. Use an Amazon Simple Notification Service (Amazon SNS) topic to buffer the inbound transactions. D. Store the inbound transactions in an Amazon Elastic MapReduce (Amazon EMR) cluster so that your back-end services can retrieve them as time permits.

13. B. Amazon SQS is a fast, reliable, scalable, and fully managed message queuing service. Amazon SQS should be used to decouple the large volume of inbound transactions, allowing the back-end services to manage the level of throughput without losing messages.

13. You have launched an Amazon Linux Elastic Compute Cloud (Amazon EC2) instance into EC2-Classic, and the instance has successfully passed the System Status Check and Instance Status Check. You attempt to securely connect to the instance via Secure Shell (SSH) and receive the response, "WARNING: UNPROTECTED PRIVATE KEY FILE," after which the login fails. Which of the following is the cause of the failed login? A. You are using the wrong private key. B. The permissions for the private key are too insecure for the key to be trusted. C. A security group rule is blocking the connection. D. A security group rule has not been associated with the private key.

13. B. If your private key can be read or written to by anyone but you, then SSH ignores your key. Restricting the file's permissions so that only you can read it (for example, chmod 400) resolves the warning.

13. What is the longest time available for an Amazon Simple Queue Service (Amazon SQS) long polling timeout? A. 10 seconds B. 20 seconds C. 30 seconds D. 1 hour

13. B. The maximum time for an Amazon SQS long polling timeout is 20 seconds.

13. Which of the following is NOT a recommended approach for customers trying to achieve strong compliance and governance over an entire IT control environment? A. Take a holistic approach: review information available from AWS together with all other information, and document all compliance requirements. B. Verify that all control objectives are met and all key controls are designed and operating effectively. C. Implement generic control objectives that are not specifically designed to meet their organization's compliance requirements. D. Identify and document controls owned by all third parties.

13. C. Customers should ensure that they implement control objectives that are designed to meet their organization's own unique compliance requirements, therefore answer C is correct.

13. Which service allows you to process nearly limitless streams of data in flight? A. Amazon Kinesis Firehose B. Amazon Elastic MapReduce (Amazon EMR) C. Amazon Redshift D. Amazon Kinesis Streams

13. D. The Amazon Kinesis services enable you to work with large data streams. Within the Amazon Kinesis family of services, Amazon Kinesis Firehose saves streams to AWS storage services, while Amazon Kinesis Streams provide the ability to process the data in the stream.

14. Which of the following are best practices for managing AWS Identity and Access Management (IAM) user access keys? (Choose 3 answers) A. Embed access keys directly into application code. B. Use different access keys for different applications. C. Rotate access keys periodically. D. Keep unused access keys for an indefinite period of time. E. Configure Multi-Factor Authentication (MFA) for your most sensitive operations.

14. B, C, E. You should protect AWS user access keys like you would your credit card numbers or any other sensitive secret. Use different access keys for different applications so that you can isolate the permissions and revoke the access keys for individual applications if an access key is exposed. Remember to change access keys on a regular basis. For increased security, it is recommended to configure MFA for any sensitive operations. Remember to remove any IAM users that are no longer needed so that the user's access to your resources is removed. Always avoid having to embed access keys in an application.

14. Your company has 100TB of financial records that need to be stored for seven years by law. Experience has shown that any record more than one-year old is unlikely to be accessed. Which of the following storage plans meets these needs in the most cost efficient manner? A. Store the data on Amazon Elastic Block Store (Amazon EBS) volumes attached to t2.micro instances. B. Store the data on Amazon Simple Storage Service (Amazon S3) with lifecycle policies that change the storage class to Amazon Glacier after one year and delete the object after seven years. C. Store the data in Amazon DynamoDB and run daily script to delete data older than seven years. D. Store the data in Amazon Elastic MapReduce (Amazon EMR).

14. B. Amazon S3 is the most cost effective storage on AWS, and lifecycle policies are a simple and effective feature to address the business requirements.

14. When using Amazon Relational Database Service (Amazon RDS) Multi-AZ, how can you offload read requests from the primary? (Choose 2 answers) A. Configure the connection string of the clients to connect to the secondary node and perform reads while the primary is used for writes. B. Amazon RDS automatically sends writes to the primary and sends reads to the secondary. C. Add a read replica DB instance, and configure the client's application logic to use a read-replica. D. Create a caching environment using ElastiCache to cache frequently used data. Update the application logic to read/write from the cache.

14. C, D. Amazon RDS allows for the creation of one or more read replicas for many engines that can be used to handle reads. Another common pattern is to create a cache using Memcached and Amazon ElastiCache to store frequently used queries. The Multi-AZ standby DB Instance is not accessible and cannot be used to offload queries.

14. What combination of services enables you to copy 50TB of data daily to Amazon storage, process the data in Hadoop, and store the results in a large data warehouse? A. Amazon Kinesis, AWS Data Pipeline, Amazon Elastic MapReduce (Amazon EMR), and Amazon Elastic Compute Cloud (Amazon EC2) B. Amazon Elastic Block Store (Amazon EBS), AWS Data Pipeline, Amazon EMR, and Amazon Redshift C. Amazon Simple Storage Service (Amazon S3), AWS Data Pipeline, Amazon EMR, and Amazon Redshift D. Amazon S3, Amazon Simple Workflow, Amazon EMR, and Amazon DynamoDB

14. C. AWS Data Pipeline allows you to run regular Extract, Transform, Load (ETL) jobs on AWS and on-premises data sources. The best storage for large data is Amazon S3, and Amazon Redshift is a large-scale data warehouse service.

14. Which resource record set would not be allowed for the hosted zone example.com? A. www.example.com B. www.aws.example.com C. www.example.ca D. www.beta.example.com

14. C. The resource record sets contained in a hosted zone must share the same suffix.

14. Which of the following is the Amazon side of an Amazon VPN connection? A. An EIP B. A CGW C. An IGW D. A VPG

14. D. A CGW is the customer side of a VPN connection, and an IGW connects a network to the Internet. A VPG is the Amazon side of a VPN connection.

14. Why is the launch configuration referenced by the Auto Scaling group instead of being part of the Auto Scaling group? A. It allows you to change the Amazon Elastic Compute Cloud (Amazon EC2) instance type and Amazon Machine Image (AMI) without disrupting the Auto Scaling group. B. It facilitates rolling out a patch to an existing set of instances managed by an Auto Scaling group. C. It allows you to change security groups associated with the instances launched without having to make changes to the Auto Scaling group. D. All of the above E. None of the above

14. D. A, B, and C are all true statements about launch configurations being loosely coupled and referenced by the Auto Scaling group instead of being part of the Auto Scaling group.

14. Which of the following public identity providers are supported by Amazon Cognito Identity? A. Amazon B. Google C. Facebook D. All of the above

14. D. Amazon Cognito Identity supports public identity providers—Amazon, Facebook, and Google—as well as unauthenticated identities.

14. What is the longest configurable message retention period for Amazon Simple Queue Service (Amazon SQS)? A. 30 minutes B. 4 days C. 30 seconds D. 14 days

14. D. The longest configurable message retention period for Amazon SQS is 14 days.
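
For reference, a one-call boto3 sketch that raises a queue's retention to that maximum (the value is in seconds; the queue URL is a placeholder):

```python
# Sketch: set retention to the 14-day maximum (14 days = 1,209,600 s).
import boto3

sqs = boto3.client("sqs")
sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/my-queue",
    Attributes={"MessageRetentionPeriod": "1209600"},
)
```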

*You are running a mission-critical three-tier application on AWS and have enabled Amazon CloudWatch metrics for a one-minute data point. How far back you can go and see the metrics?* * One Week * 24 hours * One month * 15 days

*15 days*
When CloudWatch is enabled for a one-minute data point, the retention period is 15 days.

15. You are building a large order processing system and are responsible for securing the database. Which actions will you take to protect the data? (Choose 3 answers) A. Adjust AWS Identity and Access Management (IAM) permissions for administrators. B. Configure security groups and network Access Control Lists (ACLs) to limit network access. C. Configure database users, and grant permissions to database objects. D. Install anti-virus software on the Amazon RDS DB Instance.

15. A, B, C. Protecting your database requires a multilayered approach that secures the infrastructure, the network, and the database itself. Amazon RDS is a managed service and direct access to the OS is not available.

15. You need to implement a service to scan Application Program Interface (API) calls and related events' history to your AWS account. This service will detect things like unused permissions, overuse of privileged accounts, and anomalous logins. Which of the following AWS Cloud services can be leveraged to implement this service? (Choose 3 answers) A. AWS CloudTrail B. Amazon Simple Storage Service (Amazon S3) C. Amazon Route 53 D. Auto Scaling E. AWS Lambda

15. A, B, E. You can enable AWS CloudTrail in your AWS account to get logs of API calls and related events' history in your account. AWS CloudTrail records all of the API access events as objects in an Amazon S3 bucket that you specify at the time you enable AWS CloudTrail. You can take advantage of Amazon S3's bucket notification feature by directing Amazon S3 to publish object-created events to AWS Lambda. Whenever AWS CloudTrail writes logs to your Amazon S3 bucket, Amazon S3 can then invoke your AWS Lambda function by passing the Amazon S3 object-created event as a parameter. The AWS Lambda function code can read the log object and process the access records logged by AWS CloudTrail.

15. An Auto Scaling group may use: (Choose 2 answers) A. On-Demand Instances B. Stopped instances C. Spot Instances D. On-premises instances E. Already running instances if they use the same Amazon Machine Image (AMI) as the Auto Scaling group's launch configuration and are not already part of another Auto Scaling group

15. A, C. An Auto Scaling group may use On-Demand and Spot Instances. An Auto Scaling group may not use already stopped instances, instances running someplace other than AWS, and already running instances not started by the Auto Scaling group itself.

15. Which feature of AWS is designed to permit calls to the platform from an Amazon Elastic Compute Cloud (Amazon EC2) instance without needing access keys placed on the instance? A. AWS Identity and Access Management (IAM) instance profile B. IAM groups C. IAM roles D. Amazon EC2 key pairs

15. A. An instance profile is a container for an IAM role that you can use to pass role information to an Amazon EC2 instance when the instance starts.

15. What is the default limit for the number of Amazon VPCs that a customer may have in a region? A. 5 B. 6 C. 7 D. There is no default maximum number of VPCs within a region.

15. A. The default limit for the number of Amazon VPCs that a customer may have in a region is 5.

15. Amazon Simple Storage Service (S3) bucket policies can restrict access to an Amazon S3 bucket and objects by which of the following? (Choose 3 answers) A. Company name B. IP address range C. AWS account D. Country of origin E. Objects with a specific prefix

15. B, C, E. Amazon S3 bucket policies cannot specify a company name or a country of origin, but they can specify a request IP range, an AWS account, and a prefix for objects that can be accessed.
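
A sketch of a bucket policy combining all three supported restrictions; every identifier here is a placeholder:

```python
# Sketch: allow GETs on a key prefix, only to one AWS account, and only
# from a specific source IP range.
import boto3, json

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::123456789012:root"},
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-bucket/reports/*",  # prefix
        "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}},
    }],
}
boto3.client("s3").put_bucket_policy(
    Bucket="example-bucket", Policy=json.dumps(policy)
)
```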

15. Your company has 50,000 weather stations around the country that send updates every 2 seconds. What service will enable you to ingest this stream of data and store it to Amazon Simple Storage Service (Amazon S3) for future processing? A. Amazon Simple Queue Service (Amazon SQS) B. Amazon Kinesis Firehose C. Amazon Elastic Compute Cloud (Amazon EC2) D. Amazon Data Pipeline

15. B. Amazon Kinesis Firehose allows you to ingest massive streams of data and store the data on Amazon S3 (as well as Amazon Redshift and Amazon Elasticsearch).

15. Which port number is used to serve requests by DNS? A. 22 B. 53 C. 161 D. 389

15. B. DNS uses port number 53 to serve requests.

15. What is the default message retention period for Amazon Simple Queue Service (Amazon SQS)? A. 30 minutes B. 4 days C. 30 seconds D. 14 days

15. B. The default message retention period in Amazon SQS is four days.

15. How can you connect to a new Linux instance using SSH? A. Decrypt the root password. B. Using a certificate C. Using the private half of the instance's key pair D. Using Multi-Factor Authentication (MFA)

15. C. The public half of the key pair is stored on the instance, and the private half can then be used to connect via SSH.

16. Your team manages a popular website running Amazon Relational Database Service (Amazon RDS) MySQL back end. The Marketing department has just informed you about an upcoming television commercial that will drive thousands of new visitors to the website. How can you prepare your database to handle the load? (Choose 3 answers) A. Vertically scale the DB Instance by selecting a more powerful instance class. B. Create read replicas to offload read requests and update your application. C. Upgrade the storage from Magnetic volumes to General Purpose Solid State Drive (SSD) volumes. D. Upgrade to Amazon Redshift for faster columnar storage.

16. A, B, C. Vertically scaling up is one of the simpler options that can give you additional processing power without making any architectural changes. Read replicas require some application changes but let you scale processing power horizontally. Finally, busy databases are often I/O-bound, so upgrading storage to General Purpose (SSD) or Provisioned IOPS (SSD) can often allow for additional request processing.

16. Amazon CloudWatch supports which types of monitoring plans? (Choose 2 answers) A. Basic monitoring, which is free B. Basic monitoring, which has an additional cost C. Ad hoc monitoring, which is free D. Ad hoc monitoring, which has an additional cost E. Detailed monitoring, which is free F. Detailed monitoring, which has an additional cost

16. A, F. Amazon CloudWatch has two plans: basic, which is free, and detailed, which has an additional cost. There is no ad hoc plan for Amazon CloudWatch.

16. Amazon Simple Storage Service (Amazon S3) is an eventually consistent storage system. For what kinds of operations is it possible to get stale data as a result of eventual consistency? (Choose 2 answers) A. GET after PUT of a new object B. GET or LIST after a DELETE C. GET after overwrite PUT (PUT to an existing key) D. DELETE after PUT of new object

16. B, C. Amazon S3 provides read-after-write consistency for PUTs to new objects (new key), but eventual consistency for GETs and DELETEs of existing objects (existing key).

16. VM Import/Export can import existing virtual machines as: (Choose 2 answers) A. Amazon Elastic Block Store (Amazon EBS) volumes B. Amazon Elastic Compute Cloud (Amazon EC2) instances C. Amazon Machine Images (AMIs) D. Security groups

16. B, C. These are the possible outputs of VM Import/Export.

16. Which of the following Amazon Virtual Private Cloud (Amazon VPC) elements acts as a stateless firewall? A. Security group B. Network Access Control List (ACL) C. Network Address Translation (NAT) instance D. An Amazon VPC endpoint

16. B. A network ACL is an optional layer of security for your Amazon VPC that acts as a firewall for controlling traffic in and out of one or more subnets. You might set up network ACLs with rules similar to your security groups in order to add an additional layer of security to your Amazon VPC.

16. Government regulations require that your company maintain all correspondence for a period of seven years for compliance reasons. What is the best storage mechanism to keep this data secure in a cost-effective manner? A. Amazon S3 B. Amazon Glacier C. Amazon EBS D. Amazon EFS

16. B. Amazon Glacier enables businesses and organizations to retain data for months, years, or decades, easily and cost effectively. With Amazon Glacier, customers can retain more of their data for future analysis or reference, and they can focus on their business instead of operating and maintaining their storage infrastructure. Customers can also use Amazon Glacier Vault Lock to meet regulatory and compliance archiving requirements.

16. You are responsible for your company's AWS resources, and you notice a significant amount of traffic from an IP address in a foreign country in which your company does not have customers. Further investigation of the traffic indicates the source of the traffic is scanning for open ports on your EC2-VPC instances. Which one of the following resources can deny the traffic from reaching the instances? A. Security group B. Network ACL C. NAT instance D. An Amazon VPC endpoint

16. B. Network ACL rules can deny traffic; security group rules can only allow it.

16. Your organization uses Chef heavily for its deployment automation. What AWS cloud service provides integration with Chef recipes to start new application server instances, configure application server software, and deploy applications? A. AWS Elastic Beanstalk B. Amazon Kinesis C. AWS OpsWorks D. AWS CloudFormation

16. C. AWS OpsWorks uses Chef recipes to start new app server instances, configure application server software, and deploy applications. Organizations can leverage Chef recipes to automate operations like software configurations, package installations, database setups, server scaling, and code deployment.

16. Which protocol is primarily used by DNS to serve requests? A. Transmission Control Protocol (TCP) B. Hyper Text Transfer Protocol (HTTP) C. File Transfer Protocol (FTP) D. User Datagram Protocol (UDP)

16. D. DNS primarily uses UDP to serve requests.

16. Amazon Simple Notification Service (Amazon SNS) is a push notification service that lets you send individual or multiple messages to large numbers of recipients. What types of clients are supported? A. Java and JavaScript clients that support publisher and subscriber types B. Producers and consumers supported by C and C++ clients C. Mobile and AMQP support for publisher and subscriber client types D. Publisher and subscriber client types

16. D. With Amazon SNS, you send individual or multiple messages to large numbers of recipients using publisher and subscriber client types.

*If you are using Amazon RDS Provisioned IOPS storage with a Microsoft SQL Server database engine, what is the maximum size RDS volume you can have by default?* * 1TB * 32TB * 16TB * 6TB * 500GB

*16TB*

17. What must be done to host a static website in an Amazon Simple Storage Service (Amazon S3) bucket? (Choose 3 answers) A. Configure the bucket for static hosting and specify an index and error document. B. Create a bucket with the same name as the website. C. Enable File Transfer Protocol (FTP) on the bucket. D. Make the objects in the bucket world-readable. E. Enable HTTP on the bucket.

17. A, B, D. A, B, and D are required, and normally you also set a friendly CNAME to the bucket URL. Amazon S3 does not support FTP transfers, and HTTP does not need to be enabled.
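To make the required configuration concrete, here is a minimal boto3 sketch (not from the study guide) that enables static website hosting on an existing bucket; the bucket and document names are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Enable static website hosting with an index and error document.
s3.put_bucket_website(
    Bucket="example-website-bucket",  # hypothetical bucket name
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)
```

The objects themselves still need world-read access, typically granted with a bucket policy that allows s3:GetObject to everyone.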

17. Elastic Load Balancing health checks may be: (Choose 3 answers) A. A ping B. A key pair verification C. A connection attempt D. A page request E. An Amazon Elastic Compute Cloud (Amazon EC2) instance status check

17. A, C, D. An Elastic Load Balancing health check may be a ping, a connection attempt, or a request for a specific page.

17. Your company provides media content via the Internet to customers through a paid subscription model. You leverage Amazon CloudFront to distribute content to your customers with low latency. What approach can you use to serve this private content securely to your paid subscribers? A. Provide signed Amazon CloudFront URLs to authenticated users to access the paid content. B. Use HTTPS requests to ensure that your objects are encrypted when Amazon CloudFront serves them to viewers. C. Configure Amazon CloudFront to compress the media files automatically for paid subscribers. D. Use the Amazon CloudFront geo restriction feature to restrict access to all of the paid subscription media at the country level.

17. A. Many companies that distribute content via the Internet want to restrict access to documents, business data, media streams, or content that is intended for selected users, such as users who have paid a fee. To serve this private content securely using Amazon CloudFront, you can require that users access your private content by using special Amazon CloudFront-signed URLs or signed cookies.

17. Which protocol is used by DNS when response data size exceeds 512 bytes? A. Transmission Control Protocol (TCP) B. Hyper Text Transfer Protocol (HTTP) C. File Transfer Protocol (FTP) D. User Datagram Protocol (UDP)

17. A. DNS uses TCP when the response data size exceeds 512 bytes and for tasks such as zone transfers.

17. A firm is moving its testing platform to AWS to provide developers with instant access to clean test and development environments. The primary requirement for the firm is to make environments easily reproducible and fungible. What service will help the firm meet their requirements? A. AWS CloudFormation B. AWS Config C. Amazon Redshift D. AWS Trusted Advisor

17. A. With AWS CloudFormation, you can reuse your template to set up your resources consistently and repeatedly. Just describe your resources once and then provision the same resources over and over in multiple stacks.

17. Which of the following can be used to address an Amazon Elastic Compute Cloud (Amazon EC2) instance over the web? (Choose 2 answers) A. Windows machine name B. Public DNS name C. Amazon EC2 instance ID D. Elastic IP address

17. B, D. A public DNS name or an Elastic IP address can be resolved to reach the instance over the web; neither the Windows machine name nor the Amazon EC2 instance ID can be resolved into an IP address.

17. In Amazon Simple Workflow Service (Amazon SWF), a decider is responsible for what? A. Executing each step of the work B. Defining work coordination logic by specifying work sequencing, timing, and failure conditions C. Executing your workflow D. Registering activities and workflow with Amazon SWF

17. B. The decider schedules the activity tasks and provides input data to the activity workers. The decider also processes events that arrive while the workflow is in progress and closes the workflow when the objective has been completed.

17. You are building a photo management application that maintains metadata on millions of images in an Amazon DynamoDB table. When a photo is retrieved, you want to display the metadata next to the image. Which Amazon DynamoDB operation will you use to retrieve the metadata attributes from the table? A. Scan operation B. Search operation C. Query operation D. Find operation

17. C. Query is the most efficient operation to find a single item in a large table.
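As a rough illustration of why Query fits here, a boto3 sketch (the table and key names are hypothetical) that fetches the metadata for one photo by its key:

```python
import boto3
from boto3.dynamodb.conditions import Key

# Hypothetical table keyed on PhotoId.
table = boto3.resource("dynamodb").Table("PhotoMetadata")

# Query reads only the matching item(s) by key; Scan would read the whole table.
response = table.query(KeyConditionExpression=Key("PhotoId").eq("img-12345"))
metadata = response["Items"]
```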

17. Which of the following is the security protocol supported by Amazon VPC? A. SSH B. Advanced Encryption Standard (AES) C. Point-to-Point Tunneling Protocol (PPTP) D. IPsec

17. D. IPsec is the security protocol supported by Amazon VPC.

17. Which of the following is the most recent version of the AWS digital signature calculation process? A. Signature Version 1 B. Signature Version 2 C. Signature Version 3 D. Signature Version 4

17. D. The Signature Version 4 signing process describes how to add authentication information to AWS requests. For security, most requests to AWS must be signed with an access key (Access Key ID [AKI] and Secret Access Key [SAK]). If you use the AWS Command Line Interface (AWS CLI) or one of the AWS Software Development Kits (SDKs), those tools automatically sign requests for you based on credentials that you specify when you configure the tools. However, if you make direct HTTP or HTTPS calls to AWS, you must sign the requests yourself.

18. Using the correctly decrypted Administrator password and RDP, you cannot log in to a Windows instance you just launched. Which of the following is a possible reason? A. There is no security group rule that allows RDP access over port 3389 from your IP address. B. The instance is a Reserved Instance. C. The instance is not using enhanced networking. D. The instance is not an Amazon EBS-optimized instance.

18. A. None of the other options will have any effect on the ability to connect.

18. You are creating an Amazon DynamoDB table that will contain messages for a social chat application. This table will have the following attributes: Username (String), Timestamp (Number), Message (String). Which attribute should you use as the partition key? The sort key? A. Username, Timestamp B. Username, Message C. Timestamp, Message D. Message, Timestamp

18. A. Using the Username as a partition key will evenly spread your users across the partitions. Messages are often filtered down by time range, so Timestamp makes sense as a sort key.
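A minimal boto3 sketch of such a table definition (the table name and throughput values are arbitrary):

```python
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="ChatMessages",  # hypothetical
    AttributeDefinitions=[
        {"AttributeName": "Username", "AttributeType": "S"},
        {"AttributeName": "Timestamp", "AttributeType": "N"},
    ],
    KeySchema=[
        {"AttributeName": "Username", "KeyType": "HASH"},    # partition key
        {"AttributeName": "Timestamp", "KeyType": "RANGE"},  # sort key
    ],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)
```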

18. When an Amazon Elastic Compute Cloud (Amazon EC2) instance registered with an Elastic Load Balancing load balancer using connection draining is deregistered or unhealthy, which of the following will happen? (Choose 2 answers) A. Immediately close all existing connections to that instance. B. Keep the connections open to that instance, and attempt to complete in-flight requests. C. Redirect the requests to a user-defined error page like "Oops this is embarrassing" or "Under Construction." D. Forcibly close all connections to that instance after a timeout period. E. Leave the connections open as long as the load balancer is running.

18. B, D. When connection draining is enabled, the load balancer stops sending new requests to a deregistered or unhealthy instance, keeps existing connections open while it attempts to complete in-flight requests, and then forcibly closes any remaining connections when the connection draining timeout is reached, which is 300 seconds by default.
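For reference, a hedged boto3 sketch that enables connection draining with the 300-second default on a Classic Load Balancer (the load balancer name is hypothetical):

```python
import boto3

elb = boto3.client("elb")  # Classic Elastic Load Balancing

elb.modify_load_balancer_attributes(
    LoadBalancerName="my-load-balancer",  # hypothetical
    LoadBalancerAttributes={
        "ConnectionDraining": {"Enabled": True, "Timeout": 300}  # seconds
    },
)
```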

18. Your company's IT management team is looking for an online tool to provide recommendations to save money, improve system availability and performance, and to help close security gaps. What can help the management team? A. Cloud-init B. AWS Trusted Advisor C. AWS Config D. Configuration Recorder

18. B. AWS Trusted Advisor inspects your AWS environment and makes recommendations when opportunities exist to save money, improve system availability and performance, or help close security gaps. AWS Trusted Advisor draws upon best practices learned from the aggregated operational history of serving hundreds of thousands of AWS customers.

18. Your company provides transcoding services for amateur producers to format their short films to a variety of video formats. Which service provides the best option for storing the videos? A. Amazon Glacier B. Amazon Simple Storage Service (Amazon S3) C. Amazon Relational Database Service (Amazon RDS) D. AWS Storage Gateway

18. B. Amazon S3 provides highly durable and available storage for a variety of content. Amazon S3 can be used as a big data object store for all of the videos. Amazon S3's low cost combined with its design for durability of 99.999999999% and for up to 99.99% availability make it a great storage choice for transcoding services.

18. Which of the following is the name of the feature within Amazon Virtual Private Cloud (Amazon VPC) that allows you to launch Amazon Elastic Compute Cloud (Amazon EC2) instances on hardware dedicated to a single customer? A. Amazon VPC-based tenancy B. Dedicated tenancy C. Default tenancy D. Host-based tenancy

18. B. Dedicated instances are physically isolated at the host hardware level from your instances that aren't dedicated instances and from instances that belong to other AWS accounts.

18. You have valuable media files hosted on AWS and want them to be served only to authenticated users of your web application. You are concerned that your content could be stolen and distributed for free. How can you protect your content? A. Use static web hosting. B. Generate pre-signed URLs for content in the web application. C. Use AWS Identity and Access Management (IAM) policies to restrict access. D. Use logging to track your content.

18. B. Pre-signed URLs allow you to grant time-limited permission to download objects from an Amazon Simple Storage Service (Amazon S3) bucket. Static web hosting generally requires world-read access to all content. AWS IAM policies do not know who the authenticated users of the web app are. Logging can help track content loss, but not prevent it.
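A short boto3 sketch of generating such a time-limited URL (the bucket and key are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# URL is valid for one hour; after that, requests are rejected.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "media-bucket", "Key": "videos/clip.mp4"},  # hypothetical
    ExpiresIn=3600,
)
```

The web application would generate a URL like this only after authenticating the user, then hand it to the client.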

18. What are the different hosted zones that can be created in Amazon Route 53? 1. Public hosted zone 2. Global hosted zone 3. Private hosted zone A. 1 and 2 B. 1 and 3 C. 2 and 3 D. 1, 2, and 3

18. B. Using Amazon Route 53, you can create two types of hosted zones: public hosted zones and private hosted zones.

18. Can an Amazon Simple Notification Service (Amazon SNS) topic be recreated with a previously used topic name? A. Yes. The topic name should typically be available after 24 hours after the previous topic with the same name has been deleted. B. Yes. The topic name should typically be available after 1-3 hours after the previous topic with the same name has been deleted. C. Yes. The topic name should typically be available after 30-60 seconds after the previous topic with the same name has been deleted. D. At this time, this feature is not supported.

18. C. Topic names should typically be available for reuse approximately 30-60 seconds after the previous topic with the same name has been deleted. The exact time will depend on the number of subscriptions active on the topic; topics with a few subscribers will be available instantly for reuse, while topics with larger subscriber lists may take longer.

18. Which of the following Amazon VPC resources would you use in order for EC2-VPC instances to send traffic directly to Amazon S3? A. Amazon S3 gateway B. IGW C. CGW D. VPC endpoint

18. D. An Amazon VPC endpoint enables you to create a private connection between your Amazon VPC and another AWS service without requiring access over the Internet or through a NAT device, VPN connection, or AWS Direct Connect.
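A minimal boto3 sketch of creating such a gateway endpoint for Amazon S3 (the VPC, route table, and region identifiers are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0abc1234",                      # hypothetical
    ServiceName="com.amazonaws.us-east-1.s3",  # S3 in the VPC's region
    RouteTableIds=["rtb-0def5678"],            # hypothetical
)
```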

19. Amazon Glacier is well-suited to data that is which of the following? (Choose 2 answers) A. Is infrequently or rarely accessed B. Must be immediately available when needed C. Is available after a three- to five-hour restore period D. Is frequently erased within 30 days

19. A, C. Amazon Glacier is optimized for long-term archival storage and is not suited to data that needs immediate access or short-lived data that is erased within 90 days.

19. What properties of an Amazon VPC must be specified at the time of creation? (Choose 2 answers) A. The CIDR block representing the IP address range B. One or more subnets for the Amazon VPC C. The region for the Amazon VPC D. Amazon VPC Peering relationships

19. A, C. The CIDR block is specified upon creation and cannot be changed. An Amazon VPC is associated with exactly one region which must be specified upon creation. You can add a subnet to an Amazon VPC any time after it has been created, provided its address range falls within the Amazon VPC CIDR block and does not overlap with the address range of any existing CIDR block. You can set up peering relationships between Amazon VPCs after they have been created.

19. Your company works with data that requires frequent audits of your AWS environment to ensure compliance with internal policies and best practices. In order to perform these audits, you need access to historical configurations of your resources to evaluate relevant configuration changes. Which service will provide the necessary information for your audits? A. AWS Config B. AWS Key Management Service (AWS KMS) C. AWS CloudTrail D. AWS OpsWorks

19. A. AWS Config is a fully managed service that provides you with an AWS resource inventory, configuration history, and configuration change notifications to enable security and governance. With AWS Config, you can discover existing and deleted AWS resources, determine your overall compliance against rules, and dive into configuration details of a resource at any point in time. These capabilities enable compliance auditing.

19. A week before Cyber Monday last year, your corporate data center experienced a failed air conditioning unit that caused flooding into the server racks. The resulting outage cost your company significant revenue. Your CIO mandated a move to the cloud, but he is still concerned about catastrophic failures in a data center. What can you do to alleviate his concerns? A. Distribute the architecture across multiple Availability Zones. B. Use an Amazon Virtual Private Cloud (Amazon VPC) with subnets. C. Launch the compute for the processing services in a placement group. D. Purchase Reserved Instances for the processing services instances.

19. A. An Availability Zone consists of one or more physical data centers. Availability zones within a region provide inexpensive, low-latency network connectivity to other zones in the same region. This allows you to distribute your application across data centers. In the event of a catastrophic failure in a data center, the application will continue to handle requests.

19. Which of the following statements about Amazon DynamoDB tables are true? (Choose 2 answers) A. Global secondary indexes can only be created when the table is being created. B. Local secondary indexes can only be created when the table is being created. C. You can only have one global secondary index. D. You can only have one local secondary index.

19. B, D. You can only have a single local secondary index, and it must be created at the same time the table is created. You can create many global secondary indexes after the table has been created.

19. Elastic Load Balancing supports which of the following types of load balancers? (Choose 3 answers) A. Cross-region B. Internet-facing C. Interim D. Itinerant E. Internal F. Hypertext Transfer Protocol Secure (HTTPS) using Secure Sockets Layer (SSL)

19. B, E, F. Elastic Load Balancing supports Internet-facing, internal, and HTTPS load balancers.

19. You have a workload that requires 1 TB of durable block storage at 1,500 IOPS during normal use. Every night there is an Extract, Transform, Load (ETL) task that requires 3,000 IOPS for 15 minutes. What is the most appropriate volume type for this workload? A. Use a Provisioned IOPS SSD volume at 3,000 IOPS. B. Use an instance store. C. Use a general-purpose SSD volume. D. Use a magnetic volume.

19. C. A short period of heavy traffic is exactly the use case for the bursting nature of general-purpose SSD volumes—the rest of the day is more than enough time to build up enough IOPS credits to handle the nightly task. Instance stores are not durable, magnetic volumes cannot provide enough IOPS, and to set up a Provisioned IOPS SSD volume to handle the peak would mean spending money for more IOPS than you need.

19. Which of the following describes how Amazon Elastic MapReduce (Amazon EMR) protects access to the cluster? A. The master node and the slave nodes are launched into an Amazon Virtual Private Cloud (Amazon VPC). B. The master node supports a Virtual Private Network (VPN) connection from the key specified at cluster launch. C. The master node is launched into a security group that allows Secure Shell (SSH) and service access, while the slave nodes are launched into a separate security group that only permits communication with the master node. D. The master node and slave nodes are launched into a security group that allows SSH and service access.

19. C. Amazon EMR starts your instances in two Amazon Elastic Compute Cloud (Amazon EC2) security groups, one for the master and another for the slaves. The master security group has a port open for communication with the service. It also has the SSH port open to allow you to securely connect to the instances via SSH using the key specified at startup. The slaves start in a separate security group, which only allows interaction with the master instance. By default, both security groups are set up to prevent access from external sources, including Amazon EC2 instances belonging to other customers. Because these are security groups in your account, you can reconfigure them using the standard Amazon EC2 tools or dashboard.

19. What should you do in order to grant a different AWS account permission to your Amazon Simple Queue Service (Amazon SQS) queue? A. Share credentials to your AWS account and have the other account's applications use your account's credentials to access the Amazon SQS queue. B. Create a user for that account in AWS Identity and Access Management (IAM) and establish an IAM policy that grants access to the queue. C. Create an Amazon SQS policy that grants the other account access. D. Amazon Virtual Private Cloud (Amazon VPC) peering must be used to achieve this.

19. C. The main difference between Amazon SQS policies and IAM policies is that an Amazon SQS policy enables you to grant a different AWS account permission to your Amazon SQS queues, but an IAM policy does not.
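As an illustrative sketch, boto3's add_permission call generates exactly this kind of Amazon SQS policy (the queue URL and account IDs are hypothetical):

```python
import boto3

sqs = boto3.client("sqs")

# Grant another AWS account permission to send messages to this queue.
sqs.add_permission(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/111111111111/orders",  # hypothetical
    Label="CrossAccountSend",
    AWSAccountIds=["222222222222"],  # the other account
    Actions=["SendMessage"],
)
```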

19. Amazon Route 53 cannot route queries to which AWS resource? A. Amazon CloudFront distribution B. Elastic Load Balancing load balancer C. Amazon EC2 D. AWS OpsWorks

19. D. Amazon Route 53 can route queries to a variety of AWS resources, such as an Amazon CloudFront distribution, an Elastic Load Balancing load balancer, an Amazon EC2 instance, a website hosted in an Amazon S3 bucket, and an Amazon RDS instance.

<p class="Question"><span lang="EN-US">Your company&rsquo;s IT management team is looking for an online tool to provide recommendations to save money, improve system availability and performance, and to help close security gaps. What can help the management team?</span> 1. <p class="Option"><span lang="EN-US">Cloud-init</span> 2. <p class="Option"><span lang="EN-US">AWS Trusted Advisor</span> 3. AWS Config 4. Configuration Recorder

2 <p class="Answer"><strong><span lang="EN-US">B.</span></strong><br><p class="Explanation"><span lang="EN-US">AWS Trusted Advisor inspects your AWS environment and makes recommendations when opportunities exist to save money, improve system availability and performance, or help close security gaps. AWS Trusted Advisor draws upon best practices learned from the aggregated operational history of serving hundreds of thousands of AWS customers.</span>

Which port number is used to serve requests by DNS? A. 22 B. 53 C. 161 D. 389

B. DNS uses port number 53 to serve requests.

Where do you register a domain name? A. With your local government authority B. With a domain registrar C. With InterNIC directly D. With the Internet Assigned Numbers Authority (IANA)

B. Domain names are registered with a domain registrar, which then registers the name to InterNIC.

Your company wants to extend their existing Microsoft Active Directory capability into an Amazon Virtual Private Cloud (Amazon VPC) without establishing a trust relationship with the existing on-premises Active Directory. Which of the following is the best approach to achieve this goal? A. Create and connect an AWS Directory Service AD Connector. B. Create and connect an AWS Directory Service Simple AD. C. Create and connect an AWS Directory Service for Microsoft Active Directory (Enterprise Edition). D. None of the above.

B. Simple AD is a Microsoft Active Directory-compatible directory that is powered by Samba 4. Simple AD supports commonly used Active Directory features such as user accounts, group memberships, domain-joining Amazon Elastic Compute Cloud (Amazon EC2) instances running Linux and Microsoft Windows, Kerberos-based Single Sign-On (SSO), and group policies.

<p class="Question"><span lang="EN-US">You host a web application across multiple AWS regions in the world, and you need to configure your DNS so that your end users will get the fastest network performance possible. Which routing policy should you apply?</span> 1. <p class="Option"><span lang="EN-US">Geolocation routing</span> 2. Latency-based routing 3. <p class="Option"><span lang="EN-US">Simple routing</span> 4. <p class="Option"><span lang="EN-US">Weighted routing</span>

2 <p class="Answer"><strong><span lang="EN-US">B.</span></strong><br><p class="Explanation"><span lang="EN-US">You want your users to have the fastest network access possible. To do this, you would use latency-based routing. Geolocation routing would not achieve this as well as latency-based routing, which is specifically geared toward measuring the latency and thus would direct you to the AWS region in which you would have the lowest latency.</span>

Which of the following AWS resources would you use in order for an EC2-VPC instance to resolve DNS names outside of AWS? A. A VPC peering connection B. A DHCP option set C. A routing rule D. An IGW

B. A DHCP option set allows customers to define DNS servers for DNS name resolution, establish domain names for instances within an Amazon VPC, define NTP servers, and define the NetBIOS name servers.

What is the deployment term for an environment that extends an existing on-premises infrastructure into the cloud to connect cloud resources to internal systems? A. All-in deployment B. Hybrid deployment C. On-premises deployment D. Scatter deployment

B. A hybrid deployment is a way to connect infrastructure and applications between cloud-based resources and existing resources that are not located in the cloud. An all-in deployment refers to an environment that exclusively runs in the cloud. An on-premises deployment refers to an environment that runs exclusively in an organization's data center.

Which of the following best describes the risk and compliance communication responsibilities of customers to AWS? A. AWS and customers both communicate their security and control environment information to each other at all times. B. AWS publishes information about the AWS security and control practices online, and directly to customers under NDA. Customers do not need to communicate their use and configurations to AWS. C. Customers communicate their use and configurations to AWS at all times. AWS does not communicate AWS security and control practices to customers for security reasons. D. Both customers and AWS keep their security and control practices entirely confidential and do not share them in order to ensure the greatest security for all parties.

B. AWS publishes information publicly online and directly to customers under NDA, but customers are not required to share their use and configuration information with AWS, therefore answer B is correct.

Which is an operational process performed by AWS for data security? A. Advanced Encryption Standard (AES)-256 encryption of data stored on any shared storage device B. Decommissioning of storage devices using industry-standard practices C. Background virus scans of Amazon Elastic Block Store (Amazon EBS) volumes and Amazon EBS snapshots D. Replication of data across multiple AWS regions E. Secure wiping of Amazon EBS data when an Amazon EBS volume is unmounted

B. All decommissioned magnetic storage devices are degaussed and physically destroyed in accordance with industry-standard practices.

You are running a suite of microservices on AWS Lambda that provide the business logic and access to data stored in Amazon DynamoDB for your task management system. You need to create well-defined RESTful Application Program Interfaces (APIs) for these microservices that will scale with traffic to support a new mobile application. What AWS Cloud service can you use to create the necessary RESTful APIs? A. Amazon Kinesis B. Amazon API Gateway C. Amazon Cognito D. Amazon Elastic Compute Cloud (Amazon EC2) Container Registry

B. Amazon API Gateway is a fully managed service that makes it easy for developers to publish, maintain, monitor, and secure APIs at any scale. You can create an API that acts as a "front door" for applications to access data, business logic, or functionality from your code running on AWS Lambda. Amazon API Gateway handles all of the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, authorization and access control, monitoring, and API version management.

You are building a media-sharing web application that serves video files to end users on both PCs and mobile devices. The media files are stored as objects in an Amazon Simple Storage Service (Amazon S3) bucket, but are to be delivered through Amazon CloudFront. What is the simplest way to ensure that only Amazon CloudFront has access to the objects in the Amazon S3 bucket? A. Create Signed URLs for each Amazon S3 object. B. Use an Amazon CloudFront Origin Access Identifier (OAI). C. Use public and private keys with signed cookies. D. Use an AWS Identity and Access Management (IAM) bucket policy.

B. Amazon CloudFront OAI is a special identity that can be used to restrict access to an Amazon S3 bucket only to an Amazon CloudFront distribution. Signed URLs, signed cookies, and IAM bucket policies can help to protect content served through Amazon CloudFront, but OAIs are the simplest way to ensure that only Amazon CloudFront has access to a bucket.

Which of the following AWS Cloud services is a fully managed NoSQL database service? A. Amazon Simple Queue Service (Amazon SQS) B. Amazon DynamoDB C. Amazon ElastiCache D. Amazon Relational Database Service (Amazon RDS)

B. Amazon DynamoDB is a fully managed, fast, and flexible NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale. Amazon SQS, Amazon ElastiCache, and Amazon RDS do not provide a NoSQL database service. Amazon SQS is a managed message queuing service. Amazon ElastiCache is a service that provides in-memory cache in the cloud. Finally, Amazon RDS provides managed relational databases.

Your company has 50,000 weather stations around the country that send updates every 2 seconds. What service will enable you to ingest this stream of data and store it to Amazon Simple Storage Service (Amazon S3) for future processing? A. Amazon Simple Queue Service (Amazon SQS) B. Amazon Kinesis Firehose C. Amazon Elastic Compute Cloud (Amazon EC2) D. AWS Data Pipeline

B. Amazon Kinesis Firehose allows you to ingest massive streams of data and store the data on Amazon S3 (as well as Amazon Redshift and Amazon Elasticsearch).

Which AWS database service is best suited for traditional Online Transaction Processing (OLTP)? A. Amazon Redshift B. Amazon Relational Database Service (Amazon RDS) C. Amazon Glacier D. Elastic Database

B. Amazon RDS is best suited for traditional OLTP transactions. Amazon Redshift, on the other hand, is designed for OLAP workloads. Amazon Glacier is designed for cold archival storage.

Your company has 100TB of financial records that need to be stored for seven years by law. Experience has shown that any record more than one-year old is unlikely to be accessed. Which of the following storage plans meets these needs in the most cost efficient manner? A. Store the data on Amazon Elastic Block Store (Amazon EBS) volumes attached to t2.micro instances. B. Store the data on Amazon Simple Storage Service (Amazon S3) with lifecycle policies that change the storage class to Amazon Glacier after one year and delete the object after seven years. C. Store the data in Amazon DynamoDB and run daily script to delete data older than seven years. D. Store the data in Amazon Elastic MapReduce (Amazon EMR).

B. Amazon S3 is the most cost effective storage on AWS, and lifecycle policies are a simple and effective feature to address the business requirements.
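A hedged boto3 sketch of such a lifecycle configuration (the bucket name is hypothetical; 2,555 days approximates seven years):

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="financial-records",  # hypothetical
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-then-expire",
            "Filter": {"Prefix": ""},  # apply to every object
            "Status": "Enabled",
            "Transitions": [{"Days": 365, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 2555},  # roughly seven years
        }]
    },
)
```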

You have a popular web application that accesses data stored in an Amazon Simple Storage Service (Amazon S3) bucket. You expect the access to be very read-intensive, with expected request rates of up to 500 GETs per second from many clients. How can you increase the performance and scalability of Amazon S3 in this case? A. Turn on cross-region replication to ensure that data is served from multiple locations. B. Ensure randomness in the namespace by including a hash prefix to key names. C. Turn on server access logging. D. Ensure that key names are sequential to enable pre-fetch.

B. Amazon S3 scales automatically, but for request rates over 100 GET requests per second, it helps to make sure there is some randomness in the key space. Replication and logging will not affect performance or scalability. Using sequential key names could have a negative effect on performance or scalability.
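One simple way to add that randomness, sketched in plain Python (the helper is illustrative, not an AWS API):

```python
import hashlib

def randomized_key(original_key: str) -> str:
    """Prepend a short hash so keys spread across the S3 key namespace."""
    prefix = hashlib.md5(original_key.encode()).hexdigest()[:4]
    return f"{prefix}/{original_key}"

# Sequential names now start with effectively random prefixes,
# e.g. '<4 hex chars>/2017/01/15/photo-0001.jpg'.
print(randomized_key("2017/01/15/photo-0001.jpg"))
```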

A media sharing application is producing a very high volume of data in a very short period of time. Your back-end services are unable to manage the large volume of transactions. What option provides a way to manage the flow of transactions to your back-end services? A. Store the inbound transactions in an Amazon Relational Database Service (Amazon RDS) instance so that your back-end services can retrieve them as time permits. B. Use an Amazon Simple Queue Service (Amazon SQS) queue to buffer the inbound transactions. C. Use an Amazon Simple Notification Service (Amazon SNS) topic to buffer the inbound transactions. D. Store the inbound transactions in an Amazon Elastic MapReduce (Amazon EMR) cluster so that your back-end services can retrieve them as time permits.

B. Amazon SQS is a fast, reliable, scalable, and fully managed message queuing service. Amazon SQS should be used to decouple the large volume of inbound transactions, allowing the back-end services to manage the level of throughput without losing messages.

Your company provides a mobile voting application for a popular TV show, and 5 to 25 million viewers all vote in a 15-second timespan. What mechanism can you use to decouple the voting application from your back-end services that tally the votes? A. AWS CloudTrail B. Amazon Simple Queue Service (Amazon SQS) C. Amazon Redshift D. Amazon Simple Notification Service (Amazon SNS)

B. Amazon SQS is a fast, reliable, scalable, fully managed message queuing service that allows organizations to decouple the components of a cloud application. With Amazon SQS, organizations can transmit any volume of data, at any level of throughput, without losing messages or requiring other services to be always available. AWS CloudTrail records AWS API calls, and Amazon Redshift is a data warehouse, neither of which would be useful as an architecture component for decoupling components. Amazon SNS provides a messaging bus complement to Amazon SQS; however, it doesn't provide the decoupling of components necessary for this scenario.
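A minimal boto3 sketch of the decoupling (the queue URL is hypothetical): the voting front end enqueues as fast as votes arrive, while the tallying back end drains the queue at its own pace:

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/111111111111/votes"  # hypothetical

# Front end: enqueue each vote immediately.
sqs.send_message(QueueUrl=queue_url, MessageBody='{"episode": 42, "choice": "A"}')

# Back end: pull batches of votes whenever capacity allows.
response = sqs.receive_message(
    QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20
)
for message in response.get("Messages", []):
    # ...tally the vote here, then remove it from the queue...
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```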

You are designing a new application, and you need to ensure that the components of your application are not tightly coupled. You are trying to decide between the different AWS cloud services to use to achieve this goal. Your requirements are that messages between your application components may not be delivered more than once, tasks must be completed in either a synchronous or asynchronous fashion, and there must be some form of application logic that decides what to do when tasks have been completed. What application service should you use? A. Amazon Simple Queue Service (Amazon SQS) B. Amazon Simple Workflow Service (Amazon SWF) C. Amazon Simple Storage Service (Amazon S3) D. Amazon Simple Email Service (Amazon SES)

B. Amazon SWF would best serve your purpose in this scenario because it helps developers build, run, and scale background jobs that have parallel or sequential steps. You can think of Amazon SWF as a fully-managed state tracker and task coordinator in the Cloud.

Which of the following statements best describes an availability zone? A. Each availability zone consists of a single discrete data center with redundant power and networking/connectivity. B. Each availability zone consists of multiple discrete data centers with redundant power and networking/connectivity. C. Each availability zone consists of multiple discrete regions, each with a single data center with redundant power and networking/connectivity. D. Each availability zone consists of multiple discrete data centers with shared power and redundant networking/connectivity.

B. An availability zone consists of multiple discrete data centers, each with their own redundant power and networking/connectivity, therefore answer B is correct.

Which Amazon VPC feature allows you to create a dual-homed instance? A. EIP address B. ENI C. Security groups D. CGW

B. Attaching an ENI associated with a different subnet to an instance can make the instance dual-homed.

For an application running in the ap-northeast-1 region with three Availability Zones (ap-northeast-1a, ap-northeast-1b, and ap-northeast-1c), which instance deployment provides high availability for the application that normally requires nine running Amazon Elastic Compute Cloud (Amazon EC2) instances but can run on a minimum of 65 percent capacity while Auto Scaling launches replacement instances in the remaining Availability Zones? A. Deploy the application on four servers in ap-northeast-1a and five servers in ap-northeast-1b, and keep five stopped instances in ap-northeast-1a as reserve. B. Deploy the application on three servers in ap-northeast-1a, three servers in ap-northeast-1b, and three servers in ap-northeast-1c. C. Deploy the application on six servers in ap-northeast-1b and three servers in ap-northeast-1c. D. Deploy the application on nine servers in ap-northeast-1b, and keep nine stopped instances in ap-northeast-1a as reserve.

B. Auto Scaling will provide high availability across three Availability Zones with three Amazon EC2 instances in each and keep capacity above the required minimum, even in the event of an entire Availability Zone becoming unavailable: six of the normal nine instances is roughly 67 percent, above the 65 percent floor.

Your company has 30 years of financial records that take up 15TB of on-premises storage. It is regulated that you maintain these records, but in the year you have worked for the company no one has ever requested any of this data. Given that the company data center is already filling the bandwidth of its Internet connection, what is an alternative way to store the data on the most appropriate cloud storage? A. AWS Import/Export to Amazon Simple Storage Service (Amazon S3) B. AWS Import/Export to Amazon Glacier C. Amazon Kinesis D. Amazon Elastic MapReduce (Amazon EMR)

B. Because the Internet connection is full, the best solution will be based on using AWS Import/Export to ship the data. The most appropriate storage location for data that must be stored, but is very rarely accessed, is Amazon Glacier.

You have been using Amazon Relational Database Service (Amazon RDS) for the last year to run an important application with automated backups enabled. One of your team members is performing routine maintenance and accidentally drops an important table, causing an outage. How can you recover the missing data while minimizing the duration of the outage? A. Perform an undo operation and recover the table. B. Restore the database from a recent automated DB snapshot. C. Restore only the dropped table from the DB snapshot. D. The data cannot be recovered.

B. DB snapshots can be used to restore a complete copy of the database at a specific point in time. Individual tables cannot be extracted from a snapshot.

You are building the database tier for an enterprise application that gets occasional activity throughout the day. Which storage type should you select as your default option? A. Magnetic storage B. General Purpose Solid State Drive (SSD) C. Provisioned IOPS (SSD) D. Storage Area Network (SAN)-attached

B. General Purpose (SSD) volumes are generally the right choice for databases that have bursts of activity.

You have launched an Amazon Linux Elastic Compute Cloud (Amazon EC2) instance into EC2-Classic, and the instance has successfully passed the System Status Check and Instance Status Check. You attempt to securely connect to the instance via Secure Shell (SSH) and receive the response, "WARNING: UNPROTECTED PRIVATE KEY FILE," after which the login fails. Which of the following is the cause of the failed login? A. You are using the wrong private key. B. The permissions for the private key are too insecure for the key to be trusted. C. A security group rule is blocking the connection. D. A security group rule has not been associated with the private key.

B. If your private key can be read or written to by anyone but you, then SSH ignores your key.

You are designing an e-commerce web application that will scale to potentially hundreds of thousands of concurrent users. Which database technology is best suited to hold the session state for large numbers of concurrent users? A. Relational database using Amazon Relational Database Service (Amazon RDS) B. NoSQL database table using Amazon DynamoDB C. Data warehouse using Amazon Redshift D. Amazon Simple Storage Service (Amazon S3)

B. NoSQL databases like Amazon DynamoDB excel at scaling to hundreds of thousands of requests with key/value access to user profile and session data.

Which Amazon Relational Database Service (Amazon RDS) database engines support read replicas? A. Microsoft SQL Server and Oracle B. MySQL, MariaDB, PostgreSQL, and Aurora C. Aurora, Microsoft SQL Server, and Oracle D. MySQL and PostgreSQL

B. Read replicas are supported by MySQL, MariaDB, PostgreSQL, and Aurora.

Which DNS records are commonly used to stop email spoofing and spam? A. MX records B. SPF records C. A records D. CNAME records

B. SPF records are used to verify authorized senders of mail from your domain.

What aspect of an Amazon VPC is stateful? A. Network ACLs B. Security groups C. Amazon DynamoDB D. Amazon S3

B. Security groups are stateful, whereas network ACLs are stateless.

Your order-processing application processes orders extracted from a queue with two Reserved Instances processing 10 orders/minute. If an order fails during processing, then it is returned to the queue without penalty. Due to a weekend sale, the queues have several hundred orders backed up. While the backup is not catastrophic, you would like to drain it so that customers get their confirmation emails faster. What is a cost-effective way to drain the queue for orders? A. Create more queues. B. Deploy additional Spot Instances to assist in processing the orders. C. Deploy additional Reserved Instances to assist in processing the orders. D. Deploy additional On-Demand Instances to assist in processing the orders.

B. Spot Instances are a very cost-effective way to address temporary compute needs that are not urgent and are tolerant of interruption. That's exactly the workload described here. Reserved Instances are inappropriate for temporary workloads. On-Demand Instances are good for temporary workloads, but don't offer the cost savings of Spot Instances. Adding more queues would not address the problem.

Which type of DNS record should you use to resolve a domain name to another domain name? A. An A record B. A CNAME record C. An SPF record D. A PTR record

B. The CNAME record maps a name to another name. It should be used only when there are no other records on that name.

The AWS control environment is in place for the secure delivery of AWS Cloud service offerings. Which of the following does the collective control environment NOT explicitly include? A. People B. Energy C. Technology D. Processes

B. The collective control environment includes the people, processes, and technology necessary to establish and maintain an environment that supports the operating effectiveness of the AWS control framework. Energy is not a discretely identified part of the control environment, therefore B is the correct answer.

What is the default message retention period for Amazon Simple Queue Service (Amazon SQS)? A. 30 minutes B. 4 days C. 30 seconds D. 14 days

B. The default message retention period for Amazon SQS is four days.

You have created an Elastic Load Balancing load balancer listening on port 80, and you registered it with a single Amazon Elastic Compute Cloud (Amazon EC2) instance also listening on port 80. A client makes a request to the load balancer with the correct protocol and port for the load balancer. In this scenario, how many connections does the balancer maintain? A. 1 B. 2 C. 3 D. 4

B. The load balancer maintains two separate connections: one connection with the client and one connection with the Amazon EC2 instance.

What is the longest time available for an Amazon Simple Queue Service (Amazon SQS) long polling timeout? A. 10 seconds B. 20 seconds C. 30 seconds D. 1 hour

B. The maximum time for an Amazon SQS long polling timeout is 20 seconds.

Which of the following is the name of the security model employed by AWS with its customers? A. The shared secret model B. The shared responsibility model C. The shared secret key model D. The secret key responsibility model

B. The shared responsibility model is the name of the model employed by AWS with its customers.

You are restoring an Amazon Elastic Block Store (Amazon EBS) volume from a snapshot. How long will it be before the data is available? A. It depends on the provisioned size of the volume. B. The data will be available immediately. C. It depends on the amount of data stored on the volume. D. It depends on whether the attached instance is an Amazon EBS-optimized instance.

B. The volume is created immediately, but the data is loaded lazily. This means that the volume can be accessed upon creation, and if the data being requested has not yet been restored, it will be restored upon first request.

How are you billed for elastic IP addresses? A. Hourly when they are associated with an instance B. Hourly when they are not associated with an instance C. Based on the data that flows through them D. Based on the instance type to which they are attached

B. There is a very small hourly charge for allocated elastic IP addresses that are not associated with an instance.

You need to take a snapshot of an Amazon Elastic Block Store (Amazon EBS) volume. How long will the volume be unavailable? A. It depends on the provisioned size of the volume. B. The volume will be available immediately. C. It depends on the amount of data stored on the volume. D. It depends on whether the attached instance is an Amazon EBS-optimized instance.

B. Snapshots are taken asynchronously, so the volume remains available throughout; there is no period of unavailability when commencing a snapshot.

You are a solutions architect who is working for a mobile application company that wants to use Amazon Simple Workflow Service (Amazon SWF) for their new takeout ordering application. They will have multiple workflows that will need to interact. What should you advise them to do in structuring the design of their Amazon SWF environment? A. Use multiple domains, each containing a single workflow, and design the workflows to interact across the different domains. B. Use a single domain containing multiple workflows. In this manner, the workflows will be able to interact. C. Use a single domain with a single workflow and collapse all activities to within this single workflow. D. Workflows cannot interact with each other; they would be better off using Amazon Simple Queue Service (Amazon SQS) and Amazon Simple Notification Service (Amazon SNS) for their application.

B. Use a single domain with multiple workflows. Workflows within separate domains cannot interact.

You are rolling out A and B test versions of a web application to see which version results in the most sales. You need 10 percent of your traffic to go to version A, 10 percent to go to version B, and the rest to go to your current production version. Which routing policy should you choose to achieve this? A. Simple routing B. Weighted routing C. Geolocation routing D. Failover routing

B. Weighted routing would best achieve this objective because it allows you to specify which percentage of traffic is directed to each endpoint.
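As a rough sketch, the three weighted records could be created with boto3 as follows (the hosted zone ID, record name, and IP addresses are placeholders):

```python
import boto3

route53 = boto3.client("route53")

# 10% to version A, 10% to version B, 80% to production.
variants = [("version-a", 10, "203.0.113.10"),
            ("version-b", 10, "203.0.113.11"),
            ("production", 80, "203.0.113.12")]

for set_id, weight, ip in variants:
    route53.change_resource_record_sets(
        HostedZoneId="Z1EXAMPLE",  # hypothetical
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "A",
                "SetIdentifier": set_id,  # distinguishes the weighted records
                "Weight": weight,
                "TTL": 60,
                "ResourceRecords": [{"Value": ip}],
            },
        }]},
    )
```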

You are responsible for the application logging solution for your company's existing applications running on multiple Amazon EC2 instances. Which of the following is the best approach for aggregating the application logs within AWS? A. Amazon CloudWatch custom metrics B. Amazon CloudWatch Logs Agent C. An Elastic Load Balancing listener D. An internal Elastic Load Balancing load balancer

B. You can use the Amazon CloudWatch Logs Agent installer on existing Amazon EC2 instances to install and configure the CloudWatch Logs Agent.

<p class="Question"><span lang="EN-US">You are trying to decrypt ciphertext with AWS KMS and the decryption operation is failing. Which of the following are possible causes? (Choose 2 answers)<o:p></o:p></span> 1. <p class="Option"><span lang="EN-US">The private key does not match the public key in the ciphertext.<o:p></o:p></span> 2. <p class="Option"><span lang="EN-US">The plaintext was encrypted along with an encryption context, and you are not providing the identical encryption context when calling the Decrypt API.<o:p></o:p></span> 3. <p class="Option"><span lang="EN-US">The ciphertext you are trying to decrypt is not valid.<o:p></o:p></span> 4. <p class="Option"><span lang="EN-US">You are not providing the correct symmetric key to the Decrypt API.<o:p></o:p></span>

2,3 <strong>B, C.</strong><br>Encryption context is a set of key/value pairs that you can pass to AWS KMS when you call the Encrypt, Decrypt, ReEncrypt, GenerateDataKey, and GenerateDataKeyWithoutPlaintext APIs. Although the encryption context is not included in the ciphertext, it is cryptographically bound to the ciphertext during encryption and must be passed again when you call the Decrypt (or ReEncrypt) API. Invalid ciphertext for decryption is plaintext that has been encrypted in a different AWS account or ciphertext that has been altered since it was originally encrypted.
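A small boto3 sketch of how an encryption context binds to the ciphertext (the key alias and context are hypothetical):

```python
import boto3

kms = boto3.client("kms")
context = {"department": "finance"}  # hypothetical encryption context

ciphertext = kms.encrypt(
    KeyId="alias/app-key",  # hypothetical CMK alias
    Plaintext=b"secret data",
    EncryptionContext=context,
)["CiphertextBlob"]

# Decrypt succeeds only when the identical context is supplied;
# omitting it or changing any key/value pair makes the call fail.
plaintext = kms.decrypt(
    CiphertextBlob=ciphertext,
    EncryptionContext=context,
)["Plaintext"]
```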

You want to grant the individuals on your network team the ability to fully manipulate Amazon EC2 instances. Which of the following accomplish this goal? (Choose 2 answers) A. Create a new policy allowing EC2:* actions, and name the policy NetworkTeam. B. Assign the managed policy, EC2FullAccess, to a group named NetworkTeam, and assign all the team members' IAM user accounts to that group. C. Create a new policy that grants EC2:* actions on all resources, and assign that policy to each individual's IAM user account on the network team. D. Create a NetworkTeam IAM group, and have each team member log in to the AWS Management Console using the user name/password for the group.

B, C. Access requires an appropriate policy associated with a principal. Response A is merely a policy with no principal, and response D is not a principal as IAM groups do not have user names and passwords. Response B is the best solution; response C will also work but it is much harder to manage.
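A sketch of response B with boto3, using the AWS managed AmazonEC2FullAccess policy (the user name is hypothetical):

```python
import boto3

iam = boto3.client("iam")

iam.create_group(GroupName="NetworkTeam")
iam.attach_group_policy(
    GroupName="NetworkTeam",
    PolicyArn="arn:aws:iam::aws:policy/AmazonEC2FullAccess",  # AWS managed policy
)
iam.add_user_to_group(GroupName="NetworkTeam", UserName="alice")  # hypothetical user
```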

Which of the following are benefits of using Amazon EC2 roles? (Choose 2 answers) A. No policies are required. B. Credentials do not need to be stored on the Amazon EC2 instance. C. Key rotation is not necessary. D. Integration with Active Directory is automatic.

B, C. Amazon EC2 roles must still be assigned a policy. Integration with Active Directory involves integration between Active Directory and IAM via SAML.

An application currently uses Memcached to cache frequently used database queries. Which steps are required to migrate the application to use Amazon ElastiCache with minimal changes? (Choose 2 answers) A. Recompile the application to use the Amazon ElastiCache libraries. B. Update the configuration file with the endpoint for the Amazon ElastiCache cluster. C. Configure a security group to allow access from the application servers. D. Connect to the Amazon ElastiCache nodes using Secure Shell (SSH) and install the latest version of Memcached.

B, C. Amazon ElastiCache is Application Programming Interface (API)-compatible with existing Memcached clients and does not require the application to be recompiled or linked against the libraries. Amazon ElastiCache manages the deployment of the Amazon ElastiCache binaries.

How can you back up data stored in Amazon ElastiCache running Redis? (Choose 2 answers) A. Create an image of the Amazon Elastic Compute Cloud (Amazon EC2) instance. B. Configure automatic snapshots to back up the cache environment every night. C. Create a snapshot manually. D. Redis clusters cannot be backed up.

B, C. Amazon ElastiCache with the Redis engine allows for both manual and automatic snapshots. Memcached does not have a backup function.
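For the manual case, a one-call boto3 sketch (the cluster and snapshot names are hypothetical):

```python
import boto3

elasticache = boto3.client("elasticache")

# Manual snapshot of a Redis cache cluster.
elasticache.create_snapshot(
    CacheClusterId="my-redis-cluster",   # hypothetical
    SnapshotName="redis-manual-backup",  # hypothetical
)
```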

Which of the following workloads are a good fit for running on Amazon Redshift? (Choose 2 answers) 1. Transactional database supporting a busy e-commerce order processing website 2. Reporting database supporting back-office analytics 3. Data warehouse used to aggregate multiple disparate data sources 4. Manage session state and user profile data for thousands of concurrent users

2,3 B, C. Amazon Redshift is an Online Analytical Processing (OLAP) data warehouse designed for analytics, Extract, Transform, Load (ETL), and high-speed querying. It is not well suited for running transactional applications that require high volumes of small inserts or updates.

Amazon Simple Storage Service (Amazon S3) is an eventually consistent storage system. For what kinds of operations is it possible to get stale data as a result of eventual consistency? (Choose 2 answers) 1. GET after PUT of a new object 2. GET or LIST after a DELETE 3. GET after overwrite PUT (PUT to an existing key) 4. DELETE after PUT of new object

2,3 B, C. Amazon S3 provides read-after-write consistency for PUTs to new objects (new key), but eventual consistency for GETs and DELETEs of existing objects (existing key).

What are some reasons to enable cross-region replication on an Amazon Simple Storage Service (Amazon S3) bucket? (Choose 2 answers) 1. You want a backup of your data in case of accidental deletion. 2. You have a set of users or customers who can access the second bucket with lower latency. 3. For compliance reasons, you need to store data in a location at least 300 miles away from the first region. 4. Your data needs at least five nines of durability.

2,3 B, C. Cross-region replication can help lower latency and satisfy compliance requirements on distance. Amazon S3 is designed for eleven nines of durability for objects in a single region, so a second region does not significantly increase durability. Cross-region replication does not protect against accidental deletion.

Which of the following are true of instance stores? (Choose 2 answers) 1. Automatic backups 2. Data is lost when the instance stops. 3. Very high IOPS 4. Charge is based on the total amount of storage provisioned.

2,3 B, C. Instance stores are low-durability, high-IOPS storage that is included for free with the hourly cost of an instance.

Which of the following options will help increase the availability of a web server farm? (Choose 2 answers) 1. Use Amazon CloudFront to deliver content to the end users with low latency and high data transfer speeds. 2. Launch the web server instances across multiple Availability Zones. 3. Leverage Auto Scaling to recover from failed instances. 4. Deploy the instances in an Amazon Virtual Private Cloud (Amazon VPC). 5. Add more CPU and RAM to each instance.

2,3 B, C. Launching instances across multiple Availability Zones helps ensure the application is isolated from failures in a single Availability Zone, allowing the application to achieve higher availability. Whether you are running one Amazon EC2 instance or thousands, you can use Auto Scaling to detect impaired Amazon EC2 instances and unhealthy applications and replace the instances without your intervention. This ensures that your application is getting the compute capacity that you expect, thereby maintaining your availability.

Which of the following methods will allow an application using an AWS SDK to be authenticated as a principal to access AWS Cloud services? (Choose 2 answers) 1. Create an IAM user and store the user name and password for the user in the application's configuration. 2. Create an IAM user and store both parts of the access key for the user in the application's configuration. 3. Run the application on an Amazon EC2 instance with an assigned IAM role. 4. Make all the API calls over an SSL connection.

2,3 B, C. Programmatic access is authenticated with an access key, not with user names/passwords. IAM roles provide a temporary security token to an application using an SDK.

VM Import/Export can import existing virtual machines as: (Choose 2 answers) 1. Amazon Elastic Block Store (Amazon EBS) volumes 2. Amazon Elastic Compute Cloud (Amazon EC2) instances 3. Amazon Machine Images (AMIs) 4. Security groups

2,3 B, C. These are the possible outputs of VM Import/Export.

How can you secure an Amazon ElastiCache cluster? (Choose 3 answers) 1. Change the Memcached root password. 2. Restrict Application Programming Interface (API) actions using AWS Identity and Access Management (IAM) policies. 3. Restrict network access using security groups. 4. Restrict network access using a network Access Control List (ACL).

2,3,4 B, C, D. Limit access at the network level using security groups or network ACLs, and limit infrastructure changes using IAM.

What origin servers are supported by Amazon CloudFront? (Choose 3 answers) 1. An Amazon Route 53 Hosted Zone 2. An Amazon Simple Storage Service (Amazon S3) bucket 3. An HTTP server running on Amazon Elastic Compute Cloud (Amazon EC2) 4. An Amazon EC2 Auto Scaling Group 5. An HTTP server running on-premises

2,3,5 B, C, E. Amazon CloudFront can use an Amazon S3 bucket or any HTTP server, whether or not it is running in Amazon EC2. A Route 53 Hosted Zone is a set of DNS resource records, while an Auto Scaling Group launches or terminates Amazon EC2 instances automatically. Neither can be specified as an origin server for a distribution.

Amazon Simple Storage Service (S3) bucket policies can restrict access to an Amazon S3 bucket and objects by which of the following? (Choose 3 answers) 1. Company name 2. IP address range 3. AWS account 4. Country of origin 5. Objects with a specific prefix

2,3,5 B, C, E. Amazon S3 bucket policies cannot specify a company name or a country of origin, but they can specify request IP range, AWS account, and a prefix for objects that can be accessed.

Which features can be used to restrict access to Amazon Simple Storage Service (Amazon S3) data? (Choose 3 answers) 1. Enable static website hosting on the bucket. 2. Create a pre-signed URL for an object. 3. Use an Amazon S3 Access Control List (ACL) on a bucket or object. 4. Use a lifecycle policy. 5. Use an Amazon S3 bucket policy.

2,3,5 B, C, E. Static website hosting does not restrict data access, and neither does an Amazon S3 lifecycle policy.

Which of the following are best practices for managing AWS Identity and Access Management (IAM) user access keys? (Choose 3 answers) 1. Embed access keys directly into application code. 2. Use different access keys for different applications. 3. Rotate access keys periodically. 4. Keep unused access keys for an indefinite period of time. 5. Configure Multi-Factor Authentication (MFA) for your most sensitive operations.

2,3,5 B, C, E. You should protect AWS user access keys like you would your credit card numbers or any other sensitive secret. Use different access keys for different applications so that you can isolate the permissions and revoke the access keys for individual applications if an access key is exposed. Remember to change access keys on a regular basis. For increased security, it is recommended to configure MFA for any sensitive operations. Remember to remove any IAM users that are no longer needed so that the user's access to your resources is removed. Always avoid having to embed access keys in an application.
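
A hedged boto3 sketch of one rotation pattern; the user name and old key ID are hypothetical:

```python
import boto3

iam = boto3.client("iam")
user = "app-reporting"   # hypothetical IAM user dedicated to one application

# Create the replacement key first (a user may hold two active keys),
# deploy it to the application, then retire the old key.
new_key = iam.create_access_key(UserName=user)["AccessKey"]
print(new_key["AccessKeyId"])    # store the secret in your config, not code

old_key_id = "AKIAEXAMPLEOLDKEY"  # hypothetical key being rotated out
iam.update_access_key(UserName=user, AccessKeyId=old_key_id, Status="Inactive")
iam.delete_access_key(UserName=user, AccessKeyId=old_key_id)
```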

When an Amazon Elastic Compute Cloud (Amazon EC2) instance registered with an Elastic Load Balancing load balancer using connection draining is deregistered or unhealthy, which of the following will happen? (Choose 2 answers) 1. Immediately close all existing connections to that instance. 2. Keep the connections open to that instance, and attempt to complete in-flight requests. 3. Redirect the requests to a user-defined error page like "Oops this is embarrassing" or "Under Construction." 4. Forcibly close all connections to that instance after a timeout period. 5. Leave the connections open as long as the load balancer is running.

2,4 B, D. When connection draining is enabled, the load balancer will stop sending requests to a deregistered or unhealthy instance and attempt to complete in-flight requests until a connection draining timeout period is reached, which is 300 seconds by default. After the timeout, any remaining connections to the instance are forcibly closed.

DynamoDB tables may contain sensitive data that needs to be protected. Which of the following are ways for you to protect DynamoDB table content? (Choose 2 answers) 1. DynamoDB encrypts all data server-side by default, so nothing is required. 2. DynamoDB can store data encrypted with a client-side encryption library solution before storing the data in DynamoDB. 3. DynamoDB obfuscates all data stored, so encryption is not required. 4. DynamoDB can be used with the AWS Key Management Service to encrypt the data before storing the data in DynamoDB. 5. DynamoDB should not be used to store sensitive information requiring protection.

2,4 B, D. Amazon DynamoDB does not have a server-side feature to encrypt items within a table. You need to use a solution outside of DynamoDB such as a client-side library to encrypt items before storing them, or a key management service like AWS Key Management Service to manage keys that are used to encrypt items before storing them in DynamoDB.

Which of the following are not appropriate use cases for Amazon Simple Storage Service (Amazon S3)? (Choose 2 answers) 1. Storing web content 2. Storing a file system mounted to an Amazon Elastic Compute Cloud (Amazon EC2) instance 3. Storing backups for a relational database 4. Primary storage for a database 5. Storing logs for analytics

2,4 B, D. Amazon S3 cannot be mounted to an Amazon EC2 instance like a file system and should not serve as primary database storage.

Which of the following actions can be authorized by IAM? (Choose 2 answers) 1. Installing ASP.NET on a Windows Server 2. Launching an Amazon Linux EC2 instance 3. Querying an Oracle database 4. Adding a message to an Amazon Simple Queue Service (Amazon SQS) queue

2,4 B, D. IAM controls access to AWS resources only. Installing ASP.NET will require Windows operating system authorization, and querying an Oracle database will require Oracle authorization.

Which of the following can be used to address an Amazon Elastic Compute Cloud (Amazon EC2) instance over the web? (Choose 2 answers) 1. Windows machine name 2. Public DNS name 3. Amazon EC2 instance ID 4. Elastic IP address

2,4 B, D. Neither the Windows machine name nor the Amazon EC2 instance ID can be resolved into an IP address to access the instance.

Which of the following options are valid properties of an Amazon Simple Queue Service (Amazon SQS) message? (Choose 2 answers) 1. Destination 2. Message ID 3. Type 4. Body

2,4 B, D. The valid properties of an SQS message are Message ID and Body. Each message receives a system-assigned Message ID that Amazon SQS returns to you in the SendMessage response. The Message Body is composed of name/value pairs and the unstructured, uninterpreted content.
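
Both properties are visible in a short boto3 sketch; the queue name and message body are hypothetical:

```python
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="orders")["QueueUrl"]

# SendMessage returns the system-assigned Message ID for the new message.
sent = sqs.send_message(QueueUrl=queue_url, MessageBody="order-12345")
print(sent["MessageId"])

# ReceiveMessage returns both properties: the Message ID and the Body.
for msg in sqs.receive_message(QueueUrl=queue_url).get("Messages", []):
    print(msg["MessageId"], msg["Body"])
```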

Which of the following statements about Amazon DynamoDB tables are true? (Choose 2 answers) 1. Global secondary indexes can only be created when the table is being created. 2. Local secondary indexes can only be created when the table is being created. 3. You can only have one global secondary index. 4. You can only have one local secondary index.

2,4 B, D. You can only have a single local secondary index, and it must be created at the same time the table is created. You can create many global secondary indexes after the table has been created.
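
A sketch of adding a global secondary index to an existing table via UpdateTable; the table, index, and attribute names are hypothetical. A local secondary index could not be added this way, only at CreateTable time.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Add a GSI to an existing table (all names below are hypothetical).
dynamodb.update_table(
    TableName="GameScores",
    AttributeDefinitions=[{"AttributeName": "GameTitle", "AttributeType": "S"}],
    GlobalSecondaryIndexUpdates=[{
        "Create": {
            "IndexName": "GameTitleIndex",
            "KeySchema": [{"AttributeName": "GameTitle", "KeyType": "HASH"}],
            "Projection": {"ProjectionType": "ALL"},
            "ProvisionedThroughput": {
                "ReadCapacityUnits": 5,
                "WriteCapacityUnits": 5,
            },
        }
    }],
)
```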

Auto Scaling supports which of the following plans for Auto Scaling groups? (Choose 3 answers) 1. Predictive 2. Manual 3. Preemptive 4. Scheduled 5. Dynamic 6. End-user request driven

2,4,5 B, D, E. Auto Scaling supports maintaining the current size of an Auto Scaling group using four plans: maintain current levels, manual scaling, scheduled scaling, and dynamic scaling.

Your compliance department has mandated a new requirement that all data on Amazon Elastic Block Storage (Amazon EBS) volumes must be encrypted. Which of the following steps would you follow for your existing Amazon EBS volumes to comply with the new requirement? (Choose 3 answers) 1. Move the existing Amazon EBS volume into an Amazon Virtual Private Cloud (Amazon VPC). 2. Create a new Amazon EBS volume with encryption enabled. 3. Modify the existing Amazon EBS volume properties to enable encryption. 4. Attach an Amazon EBS volume with encryption enabled to the instance that hosts the data, then migrate the data to the encryption-enabled Amazon EBS volume. 5. Copy the data from the unencrypted Amazon EBS volume to the Amazon EBS volume with encryption enabled.

2,4,5 B, D, E. There is no direct way to encrypt an existing unencrypted volume. However, you can migrate data between encrypted and unencrypted volumes.
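
Besides the attach-and-copy approach in the options, one alternative migration route is to snapshot the volume and copy the snapshot with encryption enabled. A boto3 sketch with hypothetical IDs:

```python
import boto3

ec2 = boto3.client("ec2")

# Snapshot the unencrypted volume, copy the snapshot with encryption
# enabled, then create a new volume from the encrypted copy.
snap = ec2.create_snapshot(VolumeId="vol-0123456789abcdef0")
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

encrypted_copy = ec2.copy_snapshot(
    SourceRegion="us-east-1",
    SourceSnapshotId=snap["SnapshotId"],
    Encrypted=True,
)
ec2.get_waiter("snapshot_completed").wait(
    SnapshotIds=[encrypted_copy["SnapshotId"]]
)

# Volumes created from an encrypted snapshot are encrypted automatically.
ec2.create_volume(
    AvailabilityZone="us-east-1a",
    SnapshotId=encrypted_copy["SnapshotId"],
)
```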

When designing a loosely coupled system, which AWS services provide an intermediate durable storage layer between components? (Choose 2 answers) 1. Amazon CloudFront 2. Amazon Kinesis 3. Amazon Route 53 4. AWS CloudFormation 5. Amazon Simple Queue Service (Amazon SQS)

2,5 B, E. Amazon Kinesis is a platform for streaming data on AWS, offering powerful services to make it easy to load and analyze streaming data. Amazon SQS is a fast, reliable, scalable, and fully managed message queuing service. Amazon SQS makes it simple and cost-effective to decouple the components of a cloud application.

Which of the following will occur when an Amazon Elastic Block Store (Amazon EBS)-backed Amazon EC2 instance in an Amazon VPC with an associated EIP is stopped and started? (Choose two) 1. The EIP will be dissociated from the instance. 2. All data on instance-store devices will be lost. 3. All data on Amazon EBS devices will be lost. 4. The ENI is detached. 5. The underlying host for the instance is changed.

2,5 B, E. In the EC2-Classic network, the EIP is disassociated from the instance on stop; in the EC2-VPC network, the EIP remains associated with the instance. Regardless of the underlying network, a stop/start of an Amazon EBS-backed Amazon EC2 instance always moves the instance to a different host computer.

Which of the following are characteristics of the Auto Scaling service on AWS? (Choose 3 answers) 1. Sends traffic to healthy instances 2. Responds to changing conditions by adding or terminating Amazon Elastic Compute Cloud (Amazon EC2) instances 3. Collects and tracks metrics and sets alarms 4. Delivers push notifications 5. Launches instances from a specified Amazon Machine Image (AMI) 6. Enforces a minimum number of running Amazon EC2 instances

2,5,6 B, E, F. Auto Scaling responds to changing conditions by adding or terminating instances, launches instances from an AMI specified in the launch configuration associated with the Auto Scaling group, and enforces a minimum number of instances in the min-size parameter of the Auto Scaling group.

Elastic Load Balancing supports which of the following types of load balancers? (Choose 3 answers) 1. Cross-region 2. Internet-facing 3. Interim 4. Itinerant 5. Internal 6. Hypertext Transfer Protocol Secure (HTTPS) using Secure Sockets Layer (SSL)

2,5,6 B, E, F. Elastic Load Balancing supports Internet-facing, internal, and HTTPS load balancers.

2. Which of the following are found in an IAM policy? (Choose 2 answers) A. Service Name B. Region C. Action D. Password

2. A, C. IAM policies are independent of region, so no region is specified in the policy. IAM policies are about authorization for an already-authenticated principal, so no password is needed.

2. Which of the following are good use cases for Amazon CloudFront? (Choose 2 answers) A. A popular software download site that supports users around the world, with dynamic content that changes rapidly B. A corporate website that serves training videos to employees. Most employees are located in two corporate campuses in the same city. C. A heavily used video and music streaming service that requires content to be delivered only to paid subscribers D. A corporate HR website that supports a global workforce. Because the site contains sensitive data, all users must connect through a corporate Virtual Private Network (VPN).

2. A, C. The site in A is "popular" and supports "users around the world," key indicators that CloudFront is appropriate. Similarly, the site in C is "heavily used," and requires private content, which is supported by Amazon CloudFront. Both B and D are corporate use cases where the requests come from a single geographic location or appear to come from one (because of the VPN). These use cases will generally not see benefit from Amazon CloudFront.

2. Each AWS region is composed of two or more locations that offer organizations the ability to operate production systems that are more highly available, fault tolerant, and scalable than would be possible using a single data center. What are these locations called? A. Availability Zones B. Replication areas C. Geographic districts D. Compute centers

2. A. An Availability Zone is a distinct location within a region that is insulated from failures in other Availability Zones and provides inexpensive, low-latency network connectivity to other Availability Zones in the same region. Replication areas, geographic districts, and compute centers are not terms used to describe AWS data center locations.

2. When you create a new Amazon Simple Notification Service (Amazon SNS) topic, which of the following is created automatically? A. An Amazon Resource Name (ARN) B. A subscriber C. An Amazon Simple Queue Service (Amazon SQS) queue to deliver your Amazon SNS topic D. A message

2. A. When you create a new Amazon SNS topic, an Amazon Resource Name (ARN) is created automatically.

2. Which of the following cache engines are supported by Amazon ElastiCache? (Choose 2 answers) A. MySQL B. Memcached C. Redis D. Couchbase

2. B, C. Amazon ElastiCache supports Memcached and Redis cache engines. MySQL is not a cache engine, and Couchbase is not supported.

2. Which of the following options will help increase the availability of a web server farm? (Choose 2 answers) A. Use Amazon CloudFront to deliver content to the end users with low latency and high data transfer speeds. B. Launch the web server instances across multiple Availability Zones. C. Leverage Auto Scaling to recover from failed instances. D. Deploy the instances in an Amazon Virtual Private Cloud (Amazon VPC). E. Add more CPU and RAM to each instance.

2. B, C. Launching instances across multiple Availability Zones helps ensure the application is isolated from failures in a single Availability Zone, allowing the application to achieve higher availability. Whether you are running one Amazon EC2 instance or thousands, you can use Auto Scaling to detect impaired Amazon EC2 instances and unhealthy applications and replace the instances without your intervention. This ensures that your application is getting the compute capacity that you expect, thereby maintaining your availability.

2. Which of the following are not appropriate use cases for Amazon Simple Storage Service (Amazon S3)? (Choose 2 answers) A. Storing web content B. Storing a file system mounted to an Amazon Elastic Compute Cloud (Amazon EC2) instance C. Storing backups for a relational database D. Primary storage for a database E. Storing logs for analytics

2. B, D. Amazon S3 cannot be mounted to an Amazon EC2 instance like a file system and should not serve as primary database storage.

2. Where do you register a domain name? A. With your local government authority B. With a domain registrar C. With InterNIC directly D. With the Internet Assigned Numbers Authority (IANA)

2. B. Domain names are registered with a domain registrar, which then registers the name to InterNIC.

2. Your order-processing application processes orders extracted from a queue with two Reserved Instances processing 10 orders/minute. If an order fails during processing, then it is returned to the queue without penalty. Due to a weekend sale, the queues have several hundred orders backed up. While the backup is not catastrophic, you would like to drain it so that customers get their confirmation emails faster. What is a cost-effective way to drain the queue for orders? A. Create more queues. B. Deploy additional Spot Instances to assist in processing the orders. C. Deploy additional Reserved Instances to assist in processing the orders. D. Deploy additional On-Demand Instances to assist in processing the orders.

2. B. Spot Instances are a very cost-effective way to address temporary compute needs that are not urgent and are tolerant of interruption. That's exactly the workload described here. Reserved Instances are inappropriate for temporary workloads. On-Demand Instances are good for temporary workloads, but don't offer the cost savings of Spot Instances. Adding more queues is a non-responsive answer as it would not address the problem.

2. You have created an Elastic Load Balancing load balancer listening on port 80, and you registered it with a single Amazon Elastic Compute Cloud (Amazon EC2) instance also listening on port 80. A client makes a request to the load balancer with the correct protocol and port for the load balancer. In this scenario, how many connections does the balancer maintain? A. 1 B. 2 C. 3 D. 4

2. B. The load balancer maintains two separate connections: one connection with the client and one connection with the Amazon EC2 instance.

2. You have launched a Windows Amazon Elastic Compute Cloud (Amazon EC2) instance and specified an Amazon EC2 key pair for the instance at launch. Which of the following accurately describes how to log in to the instance? A. Use the Amazon EC2 key pair to securely connect to the instance via Secure Shell (SSH). B. Use your AWS Identity and Access Management (IAM) user X.509 certificate to log in to the instance. C. Use the Amazon EC2 key pair to decrypt the administrator password and then securely connect to the instance via Remote Desktop Protocol (RDP) as the administrator. D. A key pair is not needed. Securely connect to the instance via RDP.

2. C. The administrator password is encrypted with the public key of the key pair, and you provide the private key to decrypt the password. Then log in to the instance as the administrator with the decrypted password.

2. Which of the following statements is true when it comes to the AWS shared responsibility model? A. The shared responsibility model is limited to security considerations only; it does not extend to IT controls. B. The shared responsibility model is only applicable for customers who want to be compliant with SOC 1 Type II. C. The shared responsibility model is not just limited to security considerations; it also extends to IT controls. D. The shared responsibility model is only applicable for customers who want to be compliant with ISO 27001.

2. C. The shared responsibility model can include IT controls, and it is not just limited to security considerations. Therefore, answer C is correct.

2. You are a solutions architect working for a large travel company that is migrating its existing server estate to AWS. You have recommended that they use a custom Amazon VPC, and they have agreed to proceed. They will need a public subnet for their web servers and a private subnet in which to place their databases. They also require that the web servers and database servers be highly available and that there be a minimum of two web servers and two database servers each. How many subnets should you have to maintain high availability? A. 2 B. 3 C. 4 D. 1

2. C. You need two public subnets (one for each Availability Zone) and two private subnets (one for each Availability Zone). Therefore, you need four subnets.

20. To help prevent data loss due to the failure of any single hardware component, Amazon Elastic Block Storage (Amazon EBS) automatically replicates EBS volume data to which of the following? A. Amazon EBS replicates EBS volume data within the same Availability Zone in a region. B. Amazon EBS replicates EBS volume data across other Availability Zones within the same region. C. Amazon EBS replicates EBS volume data across Availability Zones in the same region and in Availability Zones in one other region. D. Amazon EBS replicates EBS volume data across Availability Zones in the same region and in Availability Zones in every other region.

20. A. When you create an Amazon EBS volume in an Availability Zone, it is automatically replicated within that Availability Zone to prevent data loss due to failure of any single hardware component. An EBS Snapshot creates a copy of an EBS volume to Amazon S3 so that copies of the volume can reside in different Availability Zones within a region.

20. Which of the following workloads are a good fit for running on Amazon Redshift? (Choose 2 answers) A. Transactional database supporting a busy e-commerce order processing website B. Reporting database supporting back-office analytics C. Data warehouse used to aggregate multiple disparate data sources D. Manage session state and user profile data for thousands of concurrent users

20. B, C. Amazon Redshift is an Online Analytical Processing (OLAP) data warehouse designed for analytics, Extract, Transform, Load (ETL), and high-speed querying. It is not well suited for running transactional applications that require high volumes of small inserts or updates.

20. Auto Scaling supports which of the following plans for Auto Scaling groups? (Choose 3 answers) A. Predictive B. Manual C. Preemptive D. Scheduled E. Dynamic F. End-user request driven G. Optimistic

20. B, D, E. Auto Scaling supports maintaining the current size of an Auto Scaling group using four plans: maintain current levels, manual scaling, scheduled scaling, and dynamic scaling.

20. Which Amazon VPC feature allows you to create a dual-homed instance? A. EIP address B. ENI C. Security groups D. CGW

20. B. Attaching an ENI associated with a different subnet to an instance can make the instance dual-homed.

20. How are you billed for elastic IP addresses? A. Hourly when they are associated with an instance B. Hourly when they are not associated with an instance C. Based on the data that flows through them D. Based on the instance type to which they are attached

20. B. There is a very small hourly charge for allocated elastic IP addresses that are not associated with an instance.

20. Which statements about Amazon Glacier are true? (Choose 3 answers) A. Amazon Glacier stores data in objects that live in archives. B. Amazon Glacier archives are identified by user-specified key names. C. Amazon Glacier archives take three to five hours to restore. D. Amazon Glacier vaults can be locked. E. Amazon Glacier can be used as a standalone service and as an Amazon S3 storage class.

20. C, D, E. Amazon Glacier stores data in archives, which are contained in vaults. Archives are identified by system-created archive IDs, not key names.

20. Can an Amazon Simple Notification Service (Amazon SNS) message be deleted after being published to a topic? A. Only if a subscriber(s) has/have not read the message yet B. Only if the Amazon SNS recall message parameter has been set C. No. After a message has been successfully published to a topic, it cannot be recalled. D. Yes. However it can be deleted only if the subscribers are Amazon SQS queues.

20. C. No. After a message has been successfully published to a topic, it cannot be recalled.

20. Your Amazon Virtual Private Cloud (Amazon VPC) includes multiple private subnets. The instances in these private subnets must access third-party payment Application Program Interfaces (APIs) over the Internet. Which option will provide highly available Internet access to the instances in the private subnets? A. Create an AWS Storage Gateway in each Availability Zone and configure your routing to ensure that resources use the AWS Storage Gateway in the same Availability Zone. B. Create a customer gateway in each Availability Zone and configure your routing to ensure that resources use the customer gateway in the same Availability Zone. C. Create a Network Address Translation (NAT) gateway in each Availability Zone and configure your routing to ensure that resources use the NAT gateway in the same Availability Zone. D. Create a NAT gateway in one Availability Zone and configure your routing to ensure that resources use that NAT gateway in all the Availability Zones.

20. C. You can use a NAT gateway to enable instances in a private subnet to connect to the Internet or other AWS services, but prevent the Internet from initiating a connection with those instances. If you have resources in multiple Availability Zones and they share one NAT gateway, resources in the other Availability Zones lose Internet access in the event that the NAT gateway's Availability Zone is down. To create an Availability Zone independent architecture, create a NAT gateway in each Availability Zone and configure your routing to ensure that resources use the NAT gateway in the same Availability Zone.
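Sketched in boto3, the AZ-independent layout amounts to a NAT gateway plus route per zone; all subnet, route table, and allocation IDs below are hypothetical:

```python
import boto3

ec2 = boto3.client("ec2")

# One NAT gateway per Availability Zone; each private subnet's route table
# points at the NAT gateway in its own AZ, so an AZ outage cannot strand
# the other zones.
for public_subnet, private_rt in [("subnet-aaa111", "rtb-aaa111"),
                                  ("subnet-bbb222", "rtb-bbb222")]:
    eip = ec2.allocate_address(Domain="vpc")
    natgw = ec2.create_nat_gateway(
        SubnetId=public_subnet,
        AllocationId=eip["AllocationId"],
    )["NatGateway"]
    ec2.get_waiter("nat_gateway_available").wait(
        NatGatewayIds=[natgw["NatGatewayId"]]
    )
    ec2.create_route(
        RouteTableId=private_rt,
        DestinationCidrBlock="0.0.0.0/0",
        NatGatewayId=natgw["NatGatewayId"],
    )
```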

20. All of the website deployments are currently done by your company's development team. With a surge in website popularity, the company is looking for ways to be more agile with deployments. What AWS cloud service can help the developers focus more on writing code instead of spending time managing and configuring servers, databases, load balancers, firewalls, and networks? A. AWS Config B. AWS Trusted Advisor C. Amazon Kinesis D. AWS Elastic Beanstalk

20. D. AWS Elastic Beanstalk is the fastest and simplest way to get an application up and running on AWS. Developers can simply upload their application code, and the service automatically handles all the details such as resource provisioning, load balancing, Auto Scaling, and monitoring.

20. When configuring Amazon Route 53 as your DNS service for an existing domain, which is the first step that needs to be performed? A. Create hosted zones. B. Create resource record sets. C. Register a domain with Amazon Route 53. D. Transfer domain registration from current registrar to Amazon Route 53.

20. A. To use Amazon Route 53 as the DNS service for an existing domain, the first step is to create a hosted zone for the domain. You then create resource record sets and update the registrar's name server records to point to the Route 53 name servers. Transferring the domain registration to Amazon Route 53 is optional and is not required to use Route 53 as your DNS service.

Which of the following are AWS Key Management Service (AWS KMS) keys that will never exit AWS unencrypted? 1. AWS KMS data keys 2. Envelope encryption keys 3. AWS KMS Customer Master Keys (CMKs) 4. A and C

3 C. AWS KMS CMKs are the fundamental resources that AWS KMS manages. CMKs can never leave AWS KMS unencrypted, but data keys can.

Your organization uses Chef heavily for its deployment automation. What AWS Cloud service provides integration with Chef recipes to start new application server instances, configure application server software, and deploy applications? 1. AWS Elastic Beanstalk 2. Amazon Kinesis 3. AWS OpsWorks 4. AWS CloudFormation

3 C. AWS OpsWorks uses Chef recipes to start new app server instances, configure application server software, and deploy applications. Organizations can leverage Chef recipes to automate operations like software configurations, package installations, database setups, server scaling, and code deployment.

Which type of record is commonly used to route traffic to an IPv6 address? 1. An A record 2. A CNAME 3. An AAAA record 4. An MX record

3 C. An AAAA record is used to route traffic to an IPv6 address, whereas an A record is used to route traffic to an IPv4 address.

Which resource record set would not be allowed for the hosted zone example.com? 1. www.example.com 2. www.aws.example.com 3. www.example.ca 4. www.beta.example.com

3 C. The resource record sets contained in a hosted zone must share the same suffix.

A cell phone company is running dynamic-content television commercials for a contest. They want their website to handle traffic spikes that come after a commercial airs. The website is interactive, offering personalized content to each visitor based on location, purchase history, and the current commercial airing. Which architecture will configure Auto Scaling to scale out to respond to spikes of demand, while minimizing costs during quiet periods? 1. Set the minimum size of the Auto Scaling group so that it can handle high traffic volumes without needing to scale out. 2. Create an Auto Scaling group large enough to handle peak traffic loads, and then stop some instances. Configure Auto Scaling to scale out when traffic increases using the stopped instances, so new capacity will come online quickly. 3. Configure Auto Scaling to scale out as traffic increases. Configure the launch configuration to start new instances from a preconfigured Amazon Machine Image (AMI). 4. Use Amazon CloudFront and Amazon Simple Storage Service (Amazon S3) to cache changing content, with the Auto Scaling group set as the origin. Configure Auto Scaling to have sufficient instances necessary to initially populate CloudFront and Amazon ElastiCache, and then scale in after the cache is fully populated.

3 C. Auto Scaling is designed to scale out based on an event like increased traffic while being cost effective when not needed.
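
A minimal boto3 sketch of the winning pattern: a launch configuration that references a preconfigured AMI, tied to a small Auto Scaling group. All names, IDs, and sizes are hypothetical.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Launch configuration referencing a preconfigured AMI, so new instances
# come online ready to serve without lengthy bootstrapping.
autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-lc-v1",
    ImageId="ami-0123456789abcdef0",
    InstanceType="m4.large",
)

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchConfigurationName="web-lc-v1",
    MinSize=2,    # small, cheap floor for quiet periods
    MaxSize=20,   # headroom for post-commercial spikes
    VPCZoneIdentifier="subnet-aaa111,subnet-bbb222",
)
```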

You have a workload that requires 1 TB of durable block storage at 1,500 IOPS during normal use. Every night there is an Extract, Transform, Load (ETL) task that requires 3,000 IOPS for 15 minutes. What is the most appropriate volume type for this workload? 1. Use a Provisioned IOPS SSD volume at 3,000 IOPS. 2. Use an instance store. 3. Use a general-purpose SSD volume. 4. Use a magnetic volume.

3 C. A general-purpose SSD volume delivers a baseline of 3 IOPS per GB, so a 1 TB volume sustains about 3,000 IOPS, covering both the steady 1,500 IOPS load and the short nightly 3,000 IOPS peak. Instance stores are not durable, magnetic volumes cannot provide enough IOPS, and setting up a Provisioned IOPS SSD volume to handle the peak would mean spending money for more IOPS than you need.

Your company data center is completely full, but the sales group has determined a need to store 200TB of product video. The videos were created over the last several years, with the most recent being accessed by sales the most often. The data must be accessed locally, but there is no space in the data center to install local storage devices to store this data. What AWS Cloud service will meet sales' requirements? 1. AWS Storage Gateway Gateway-Stored volumes 2. Amazon Elastic Compute Cloud (Amazon EC2) instances with attached Amazon EBS Volumes 3. AWS Storage Gateway Gateway-Cached volumes 4. AWS Import/Export Disk

3 C. AWS Storage Gateway allows you to access data in Amazon S3 locally, with the Gateway-Cached volume configuration allowing you to expand a relatively small amount of local storage into Amazon S3.

When it comes to risk management, which of the following is true? 1. AWS does not develop a strategic business plan; risk management and mitigation is entirely the responsibility of the customer. 2. AWS has developed a strategic business plan to identify any risks and implemented controls to mitigate or manage those risks. Customers do not need to develop and maintain their own risk management plans. 3. AWS has developed a strategic business plan to identify any risks and has implemented controls to mitigate or manage those risks. Customers should also develop and maintain their own risk management plans to ensure they are compliant with any relevant controls and certifications. 4. Neither AWS nor the customer needs to worry about risk management, so no plan is needed from either party.

3 C. AWS has developed a strategic business plan, and customers should also develop and maintain their own risk management plans; therefore, answer C is correct.

Which AWS Cloud service allows organizations to gain system-wide visibility into resource utilization, application performance, and operational health? 1. AWS Identity and Access Management (IAM) 2. Amazon Simple Notification Service (Amazon SNS) 3. Amazon CloudWatch 4. AWS CloudFormation

3 C. Amazon CloudWatch is a monitoring service for AWS Cloud resources and the applications organizations run on AWS. It allows organizations to collect and track metrics, collect and monitor log files, and set alarms. AWS IAM, Amazon SNS, and AWS CloudFormation do not provide visibility into resource utilization, application performance, and the operational health of your AWS resources.

What combination of services enables you to copy 50TB of data to Amazon storage daily, process the data in Hadoop, and store the results in a large data warehouse? 1. Amazon Kinesis, Amazon Data Pipeline, Amazon Elastic MapReduce (Amazon EMR), and Amazon Elastic Compute Cloud (Amazon EC2) 2. Amazon Elastic Block Store (Amazon EBS), Amazon Data Pipeline, Amazon EMR, and Amazon Redshift 3. Amazon Simple Storage Service (Amazon S3), Amazon Data Pipeline, Amazon EMR, and Amazon Redshift 4. Amazon S3, Amazon Simple Workflow, Amazon EMR, and Amazon DynamoDB

3 C. Amazon Data Pipeline allows you to run regular Extract, Transform, Load (ETL) jobs on AWS and on-premises data sources. The best storage for large data is Amazon S3, and Amazon Redshift is a large-scale data warehouse service.

Your WordPress website is hosted on a fleet of Amazon Elastic Compute Cloud (Amazon EC2) instances that leverage Auto Scaling to provide high availability. To ensure that the content of the WordPress site is sustained through scale up and scale down events, you need a common file system that is shared between more than one Amazon EC2 instance. Which AWS Cloud service can meet this requirement? 1. Amazon CloudFront 2. Amazon ElastiCache 3. Amazon Elastic File System (Amazon EFS) 4. Amazon Elastic Beanstalk

3 C. Amazon EFS is a file storage service for Amazon EC2 instances. Multiple Amazon EC2 instances can access an Amazon EFS file system at the same time, providing a common data source for the content of the WordPress site running on more than one instance.

Which of the following describes how Amazon Elastic MapReduce (Amazon EMR) protects access to the cluster? 1. The master node and the slave nodes are launched into an Amazon Virtual Private Cloud (Amazon VPC). 2. The master node supports a Virtual Private Network (VPN) connection from the key specified at cluster launch. 3. The master node is launched into a security group that allows Secure Shell (SSH) and service access, while the slave nodes are launched into a separate security group that only permits communication with the master node. 4. The master node and slave nodes are launched into a security group that allows SSH and service access.

3 C. Amazon EMR starts your instances in two Amazon Elastic Compute Cloud (Amazon EC2) security groups, one for the master and another for the slaves. The master security group has a port open for communication with the service. It also has the SSH port open to allow you to securely connect to the instances via SSH using the key specified at startup. The slaves start in a separate security group, which only allows interaction with the master instance. By default, both security groups are set up to prevent access from external sources, including Amazon EC2 instances belonging to other customers. Because these are security groups in your account, you can reconfigure them using the standard Amazon EC2 tools or dashboard.

You are working on a mobile gaming application and are building the leaderboard feature to track the top scores across millions of users. Which AWS services are best suited for this use case? 1. Amazon Redshift 2. Amazon ElastiCache using Memcached 3. Amazon ElastiCache using Redis 4. Amazon Simple Storage Service (S3)

3 C. Amazon ElastiCache with Redis provides native functions that simplify the development of leaderboards. With Memcached, it is more difficult to sort and rank large datasets. Amazon Redshift and Amazon S3 are not designed for the high volumes of small reads and writes typical of a mobile game.

Your e-commerce application provides daily and ad hoc reporting to various business units on customer purchases. This is resulting in an extremely high level of read traffic to your MySQL Amazon Relational Database Service (Amazon RDS) instance. What can you do to scale up read traffic without impacting your database's performance? 1. Increase the allocated storage for the Amazon RDS instance. 2. Modify the Amazon RDS instance to be a Multi-AZ deployment. 3. Create a read replica for an Amazon RDS instance. 4. Change the Amazon RDS instance DB engine version.

3 C. Amazon RDS read replicas provide enhanced performance and durability for Amazon RDS instances. This replication feature makes it easy to scale out elastically beyond the capacity constraints of a single Amazon RDS instance for read-heavy database workloads. You can create one or more replicas of a given source Amazon RDS instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput.
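
Creating a replica is a single API call; a boto3 sketch with hypothetical instance identifiers:

```python
import boto3

rds = boto3.client("rds")

# Create a read replica of the production instance; point reporting jobs
# at the replica's endpoint so ad hoc reads no longer compete with
# order-processing writes on the source.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="ecommerce-replica-1",
    SourceDBInstanceIdentifier="ecommerce-primary",
)
```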

Which is a function that Amazon Route 53 does not perform? 1. Domain registration 2. DNS service 3. Load balancing 4. Health checks

3 C. Amazon Route 53 performs three main functions: domain registration, DNS service, and health checking.

To have a record of who accessed your Amazon Simple Storage Service (Amazon S3) data and from where, what should you do? 1. Enable versioning on the bucket. 2. Enable website hosting on the bucket. 3. Enable server access logs on the bucket. 4. Create an AWS Identity and Access Management (IAM) bucket policy. 5. Enable Amazon CloudWatch logs.

3 C. Amazon S3 server access logs store a record of which requester accessed the objects in your bucket, including the requesting IP address.
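
Enabling server access logging is one PutBucketLogging call; a boto3 sketch with hypothetical bucket names (the target bucket must grant the S3 log delivery group write permission):

```python
import boto3

s3 = boto3.client("s3")

# Enable server access logging on a data bucket, delivering log objects
# to a separate log bucket under a per-bucket prefix.
s3.put_bucket_logging(
    Bucket="my-data-bucket",
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "my-log-bucket",
            "TargetPrefix": "access-logs/my-data-bucket/",
        }
    },
)
```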

What AWS Cloud service provides a logically isolated section of the AWS Cloud where organizations can launch AWS resources in a virtual network that they define? 1. Amazon Simple Workflow Service (Amazon SWF) 2. Amazon Route 53 3. Amazon Virtual Private Cloud (Amazon VPC) 4. AWS CloudFormation

3 C. Amazon VPC lets organizations provision a logically isolated section of the AWS Cloud where they can launch AWS resources in a virtual network that they define. Amazon SWF, Amazon Route 53, and AWS CloudFormation do not provide a virtual network. Amazon SWF helps developers build, run, and scale background jobs that have parallel or sequential steps. Amazon Route 53 provides a highly available and scalable cloud Domain Name System (DNS) web service. AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources.

Which technology does Amazon WorkSpaces use to provide data security? 1. Secure Sockets Layer (SSL)/Transport Layer Security (TLS) 2. Advanced Encryption Standard (AES)-256 3. PC-over-IP (PCoIP) 4. AES-128

3 C. Amazon WorkSpaces uses PCoIP, which provides an interactive video stream without transmitting actual data.

Your company collects information from the point of sale registers at all of its franchise locations. Each month these processes collect 200TB of information stored in Amazon Simple Storage Service (Amazon S3). Analytics jobs taking 24 hours are performed to gather knowledge from this data. Which of the following will allow you to perform these analytics in a cost-effective way? 1. Copy the data to a persistent Amazon Elastic MapReduce (Amazon EMR) cluster, and run the MapReduce jobs. 2. Create an application that reads the information of the Amazon S3 bucket and runs it through an Amazon Kinesis stream. 3. Run a transient Amazon EMR cluster, and run the MapReduce jobs against the data directly in Amazon S3. 4. Launch a d2.8xlarge (32 vCPU, 244GB RAM) Amazon Elastic Compute Cloud (Amazon EC2) instance, and run an application to read and process each object sequentially.

3 C. Because the job is run monthly, a persistent cluster will incur unnecessary compute costs during the rest of the month. Amazon Kinesis is not appropriate because the company is running analytics as a batch job and not on a stream. A single large instance does not scale out to accommodate the large compute needs.

A database security group controls network access to a database instance that is inside a Virtual Private Cloud (VPC). By default, what access does it allow? 1. Access from any IP address for the standard ports that the database uses is provided by default. 2. Access from any IP address for any port is provided by default in the DB security group. 3. No access is provided by default, and any access must be explicitly added with a rule to the DB security group. 4. Access for the database connection string is provided by default in the DB security group.

3 C. By default, network access is turned off to a DB Instance. You can specify rules in a security group that allow access from an IP address range, port, or Amazon Elastic Compute Cloud (Amazon EC2) security group.

Which of the following is NOT a recommended approach for customers trying to achieve strong compliance and governance over an entire IT control environment? 1. Take a holistic approach: Review information available from AWS together with all other information, and document all compliance requirements. 2. Verify that all control objectives are met and all key controls are designed and operating effectively. 3. Implement generic control objectives that are not specifically designed to meet their organization's compliance requirements. 4. Identify and document controls owned by all third parties.

3 C. Customers should ensure that they implement control objectives that are designed to meet their organization's own unique compliance requirements; therefore, answer C is correct.

Which Amazon Elastic Compute Cloud (Amazon EC2) feature ensures that your instances will not share a physical host with instances from any other AWS customer? 1. Amazon Virtual Private Cloud (VPC) 2. Placement groups 3. Dedicated Instances 4. Reserved Instances

3 C. Dedicated Instances will not share hosts with other accounts.

As a Solutions Architect, how should you architect systems on AWS? 1. You should architect for least cost. 2. You should architect your AWS usage to take advantage of Amazon Simple Storage Service's (Amazon S3) durability. 3. You should architect your AWS usage to take advantage of multiple regions and Availability Zones. 4. You should architect with Amazon Elastic Compute Cloud (Amazon EC2) Auto Scaling to ensure capacity is available when needed.

3 C. Distributing applications across multiple Availability Zones provides the ability to remain resilient in the face of most failure modes, including natural disasters or system failures.

How many access keys may an AWS Identity and Access Management (IAM) user have active at one time? 1. 0 2. 1 3. 2 4. 3

3 C. IAM permits users to have no more than two active access keys at one time.

Your company requires that all data sent to external storage be encrypted before being sent. Which Amazon Simple Storage Service (Amazon S3) encryption solution will meet this requirement? 1. Server-Side Encryption (SSE) with AWS-managed keys (SSE-S3) 2. SSE with customer-provided keys (SSE-C) 3. Client-side encryption with customer-managed keys 4. Server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS)

3 C. If data must be encrypted before being sent to Amazon S3, client-side encryption must be used.

Based on the following Amazon Simple Storage Service (Amazon S3) URL, which one of the following statements is correct? https://bucket1.abc.com.s3.amazonaws.com/folderx/myfile.doc (NOTE: This link is only an example URL for this question, and is not intended to be a real or live link.) 1. The object "myfile.doc" is stored in the folder "folderx" in the bucket "bucket1.abc.com." 2. The object "myfile.doc" is stored in the bucket "bucket1.abc.com." 3. The object "folderx/myfile.doc" is stored in the bucket "bucket1.abc.com." 4. The object "myfile.doc" is stored in the bucket "bucket1."

3 C. In a URL, the bucket name precedes the string "s3.amazonaws.com/," and the object key is everything after that. There is no folder structure in Amazon S3.

You are a solutions architect working for a media company that hosts its website on AWS. Currently, there is a single Amazon Elastic Compute Cloud (Amazon EC2) Instance on AWS with MySQL installed locally to that Amazon EC2 Instance. You have been asked to make the company's production environment more resilient and to increase performance. You suggest that the company split out the MySQL database onto an Amazon RDS Instance with Multi-AZ enabled. This addresses the company's increased resiliency requirements. Now you need to suggest how you can increase performance. Ninety-nine percent of the company's end users are magazine subscribers who will be reading additional articles on the website, so only one percent of end users will need to write data to the site. What should you suggest to increase performance? 1. Alter the connection string so that if a user is going to write data, it is written to the secondary copy of the Multi-AZ database. 2. Alter the connection string so that if a user is going to write data, it is written to the primary copy of the Multi-AZ database. 3. Recommend that the company use read replicas, and distribute the traffic across multiple read replicas. 4. Migrate the MySQL database to Amazon Redshift to take advantage of columnar storage and maximize performance.

3 C. In this scenario, the best idea is to use read replicas to scale out the database and thus maximize read performance. When using Multi-AZ, the secondary database is not accessible and all reads and writes must go to the primary or any read replicas.

Your company stores documents in Amazon Simple Storage Service (Amazon S3), but it wants to minimize cost. Most documents are used actively for only about a month, then much less frequently. However, all data needs to be available within minutes when requested. How can you meet these requirements? 1. Migrate the data to Amazon S3 Reduced Redundancy Storage (RRS) after 30 days. 2. Migrate the data to Amazon Glacier after 30 days. 3. Migrate the data to Amazon S3 Standard-Infrequent Access (IA) after 30 days. 4. Turn on versioning, then migrate the older version to Amazon Glacier.

3 C. Migrating the data to Amazon S3 Standard-IA after 30 days using a lifecycle policy is correct. Amazon S3 RRS should only be used for easily replicated data, not critical data. Migration to Amazon Glacier might minimize storage costs if retrievals are infrequent, but documents would not be available in minutes when needed.

Can an Amazon Simple Notification Service (Amazon SNS) message be deleted after being published to a topic? 1. Only if a subscriber(s) has/have not read the message yet 2. Only if the Amazon SNS recall message parameter has been set 3. No. After a message has been successfully published to a topic, it cannot be recalled. 4. Yes. However it can be deleted only if the subscribers are Amazon SQS queues.

3 C. No. After a message has been successfully published to a topic, it cannot be recalled.

You are building a photo management application that maintains metadata on millions of images in an Amazon DynamoDB table. When a photo is retrieved, you want to display the metadata next to the image. Which Amazon DynamoDB operation will you use to retrieve the metadata attributes from the table? 1. Scan operation 2. Search operation 3. Query operation 4. Find operation

3 C. Query is the most efficient operation to find a single item in a large table.
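
A boto3 sketch of the Query operation; the table, key, and attribute names are hypothetical:

```python
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("PhotoMetadata")  # hypothetical table

# Query retrieves the item whose partition key matches, rather than
# scanning millions of rows.
resp = table.query(KeyConditionExpression=Key("PhotoId").eq("img-000123"))
for item in resp["Items"]:
    print(item.get("Width"), item.get("Height"), item.get("TakenAt"))
```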

Your web application needs four instances to support steady traffic nearly all of the time. On the last day of each month, the traffic triples. What is a cost-effective way to handle this traffic pattern? 1. Run 12 Reserved Instances all of the time. 2. Run four On-Demand Instances constantly, then add eight more On-Demand Instances on the last day of each month. 3. Run four Reserved Instances constantly, then add eight On-Demand Instances on the last day of each month. 4. Run four On-Demand Instances constantly, then add eight Reserved Instances on the last day of each month.

3 C. Reserved Instances provide cost savings when you can commit to running instances full time, such as to handle the base traffic. On-Demand Instances provide the flexibility to handle traffic spikes, such as on the last day of the month.

You have launched a Windows Amazon Elastic Compute Cloud (Amazon EC2) instance and specified an Amazon EC2 key pair for the instance at launch. Which of the following accurately describes how to log in to the instance? 1. Use the Amazon EC2 key pair to securely connect to the instance via Secure Shell (SSH). 2. Use your AWS Identity and Access Management (IAM) user X.509 certificate to log in to the instance. 3. Use the Amazon EC2 key pair to decrypt the administrator password and then securely connect to the instance via Remote Desktop Protocol (RDP) as the administrator. 4. A key pair is not needed. Securely connect to the instance via RDP.

3 C. The administrator password is encrypted with the public key of the key pair, and you provide the private key to decrypt the password. Then log in to the instance as the administrator with the decrypted password.

How many nodes can you add to an Amazon ElastiCache cluster running Memcached? 1. 1 2. 5 3. 20 4. 100

3 C. The default limit is 20 nodes per cluster.

What should you do in order to grant a different AWS account permission to your Amazon Simple Queue Service (Amazon SQS) queue? 1. Share credentials to your AWS account and have the other account's applications use your account's credentials to access the Amazon SQS queue. 2. Create a user for that account in AWS Identity and Access Management (IAM) and establish an IAM policy that grants access to the queue. 3. Create an Amazon SQS policy that grants the other account access. 4. Amazon Virtual Private Cloud (Amazon VPC) peering must be used to achieve this.

3 C. The main difference between Amazon SQS policies and IAM policies is that an Amazon SQS policy enables you to grant a different AWS account permission to your Amazon SQS queues, but an IAM policy does not.
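
A hedged boto3 sketch of attaching such a queue policy; the queue URL, ARNs, and account IDs are all hypothetical examples:

```python
import json

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/111122223333/orders"

# An Amazon SQS policy can name a principal from another AWS account,
# which an IAM policy in your own account cannot do.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::444455556666:root"},
        "Action": "sqs:SendMessage",
        "Resource": "arn:aws:sqs:us-east-1:111122223333:orders",
    }],
}
sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={"Policy": json.dumps(policy)},
)
```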

What is the minimum size subnet that you can have in an Amazon VPC? 1. /24 2. /26 3. /28 4. /30

C. The minimum size subnet that you can have in an Amazon VPC is /28.

How can you connect to a new Linux instance using SSH? 1. Decrypt the root password. 2. Using a certificate 3. Using the private half of the instance's key pair 4. Using Multi-Factor Authentication (MFA)

C. The public half of the key pair is stored on the instance, and the private half can then be used to connect via SSH.

Which of the following statements is true when it comes to the AWS shared responsibility model? 1. The shared responsibility model is limited to security considerations only; it does not extend to IT controls. 2. The shared responsibility model is only applicable for customers who want to be compliant with SOC 1 Type II. 3. The shared responsibility model is not just limited to security considerations; it also extends to IT controls. 4. The shared responsibility model is only applicable for customers who want to be compliant with ISO 27001.

C. The shared responsibility model can include IT controls, and it is not just limited to security considerations; therefore, answer C is correct.

Can an Amazon Simple Notification Service (Amazon SNS) topic be recreated with a previously used topic name? 1. Yes. The topic name should typically be available 24 hours after the previous topic with the same name has been deleted. 2. Yes. The topic name should typically be available 1–3 hours after the previous topic with the same name has been deleted. 3. Yes. The topic name should typically be available 30–60 seconds after the previous topic with the same name has been deleted. 4. At this time, this feature is not supported.

C. Topic names should typically be available for reuse approximately 30–60 seconds after the previous topic with the same name has been deleted. The exact time depends on the number of active subscriptions: topics with a few subscribers are available instantly for reuse, while topics with larger subscriber lists may take longer.

Your web application front end consists of multiple Amazon Compute Cloud (Amazon EC2) instances behind an Elastic Load Balancing load balancer. You have configured the load balancer to perform health checks on these Amazon EC2 instances. If an instance fails to pass health checks, which statement will be true? 1. The instance is replaced automatically by the load balancer. 2. The instance is terminated automatically by the load balancer. 3. The load balancer stops sending traffic to the instance that failed its health check. 4. The instance is quarantined by the load balancer for root cause analysis.

C. When Amazon EC2 instances fail the requisite number of consecutive health checks, the load balancer stops sending traffic to the Amazon EC2 instance.

You create a new VPC in US-East-1 and provision three subnets inside this Amazon VPC. Which of the following statements is true? 1. By default, these subnets will not be able to communicate with each other; you will need to create routes. 2. All subnets are public by default. 3. All subnets will be able to communicate with each other by default. 4. Each subnet will have identical CIDR blocks.

C. When you provision an Amazon VPC, all subnets can communicate with each other by default.

Your Amazon Virtual Private Cloud (Amazon VPC) includes multiple private subnets. The instances in these private subnets must access third-party payment Application Program Interfaces (APIs) over the Internet. Which option will provide highly available Internet access to the instances in the private subnets? 1. Create an AWS Storage Gateway in each Availability Zone and configure your routing to ensure that resources use the AWS Storage Gateway in the same Availability Zone. 2. Create a customer gateway in each Availability Zone and configure your routing to ensure that resources use the customer gateway in the same Availability Zone. 3. Create a Network Address Translation (NAT) gateway in each Availability Zone and configure your routing to ensure that resources use the NAT gateway in the same Availability Zone. 4. Create a NAT gateway in one Availability Zone and configure your routing to ensure that resources use that NAT gateway in all the Availability Zones.

C. You can use a NAT gateway to enable instances in a private subnet to connect to the Internet or other AWS services, but prevent the Internet from initiating a connection with those instances. If you have resources in multiple Availability Zones and they share one NAT gateway, resources in the other Availability Zones lose Internet access in the event that the NAT gateway's Availability Zone is down. To create an Availability Zone-independent architecture, create a NAT gateway in each Availability Zone and configure your routing to ensure that resources use the NAT gateway in the same Availability Zone.
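
A minimal boto3 sketch of that per-AZ layout, with placeholder subnet and route table IDs:

```python
import boto3

ec2 = boto3.client("ec2")

# One NAT gateway per Availability Zone, each routed to by the private
# subnet in the same AZ. All IDs are placeholders.
for public_subnet, private_route_table in [
    ("subnet-aaaa1111", "rtb-aaaa1111"),  # AZ a
    ("subnet-bbbb2222", "rtb-bbbb2222"),  # AZ b
]:
    eip = ec2.allocate_address(Domain="vpc")
    nat = ec2.create_nat_gateway(SubnetId=public_subnet,
                                 AllocationId=eip["AllocationId"])
    nat_id = nat["NatGateway"]["NatGatewayId"]
    ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])
    # Send the private subnet's Internet-bound traffic to its local NAT gateway.
    ec2.create_route(RouteTableId=private_route_table,
                     DestinationCidrBlock="0.0.0.0/0",
                     NatGatewayId=nat_id)
```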

Which of the following must be configured on an Elastic Load Balancing load balancer to accept incoming traffic? 1. A port 2. A network interface 3. A listener 4. An instance

C. You configure your load balancer to accept incoming traffic by specifying one or more listeners.

You are a solutions architect working for a large travel company that is migrating its existing server estate to AWS. You have recommended that they use a custom Amazon VPC, and they have agreed to proceed. They will need a public subnet for their web servers and a private subnet in which to place their databases. They also require that the web servers and database servers be highly available and that there be a minimum of two web servers and two database servers each. How many subnets should you have to maintain high availability? 1. 2 2. 3 3. 4 4. 1

C. You need two public subnets (one for each Availability Zone) and two private subnets (one for each Availability Zone). Therefore, you need four subnets.

You have created a custom Amazon VPC with both private and public subnets. You have created a NAT instance and deployed this instance to a public subnet. You have attached an EIP address and added your NAT to the route table. Unfortunately, instances in your private subnet still cannot access the Internet. What may be the cause of this? 1. Your NAT is in a public subnet, but it needs to be in a private subnet. 2. Your NAT should be behind an Elastic Load Balancer. 3. You should disable source/destination checks on the NAT. 4. Your NAT has been deployed on a Windows instance, but your other instances are Linux. You should redeploy the NAT onto a Linux instance.

C. You should disable source/destination checks on the NAT instance. By default, an EC2 instance drops traffic for which it is neither the source nor the destination; a NAT instance forwards traffic on behalf of other instances, so this check must be disabled.

You have an application that for legal reasons must be hosted in the United States when U.S. citizens access it. The application must be hosted in the European Union when citizens of the EU access it. For all other citizens of the world, the application must be hosted in Sydney. Which routing policy should you choose in order to achieve this? 1. Latency-based routing 2. Simple routing 3. Geolocation routing 4. Failover routing

C. You should route your traffic based on where your end users are located. The best routing policy to achieve this is geolocation routing.
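
A sketch of geolocation records in Route 53 via boto3; the zone ID, domain, and IP addresses are placeholders, and the "*" country code is the catch-all default location:

```python
import boto3

route53 = boto3.client("route53")

def geo_record(set_id, geo, value):
    return {"Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",       # placeholder domain
                "Type": "A",
                "SetIdentifier": set_id,
                "GeoLocation": geo,
                "TTL": 60,
                "ResourceRecords": [{"Value": value}]}}

route53.change_resource_record_sets(
    HostedZoneId="Z0123456EXAMPLE",              # placeholder zone ID
    ChangeBatch={"Changes": [
        geo_record("us", {"CountryCode": "US"}, "203.0.113.10"),      # US stack
        geo_record("eu", {"ContinentCode": "EU"}, "203.0.113.20"),    # EU stack
        geo_record("default", {"CountryCode": "*"}, "203.0.113.30"),  # Sydney stack
    ]},
)
```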

Which DNS record should you use to configure the transmission of email to your intended mail server? 1. SPF records 2. A records 3. MX records 4. SOA records

C. You would use Mail eXchange (MX) records to define which inbound destination mail server should be used.

When using Amazon Relational Database Service (Amazon RDS) Multi-AZ, how can you offload read requests from the primary? (Choose 2 answers) 1. Configure the connection string of the clients to connect to the secondary node and perform reads while the primary is used for writes. 2. Amazon RDS automatically sends writes to the primary and sends reads to the secondary. 3. Add a read replica DB instance, and configure the client's application logic to use a read replica. 4. Create a caching environment using ElastiCache to cache frequently used data. Update the application logic to read/write from the cache.

C, D. Amazon RDS allows for the creation of one or more read replicas for many engines that can be used to handle reads. Another common pattern is to create a cache using Memcached and Amazon ElastiCache to store frequently used queries. The Multi-AZ secondary DB instance is not accessible and cannot be used to offload queries.

Which of the following must be specified when launching a new Amazon Elastic Compute Cloud (Amazon EC2) Windows instance? (Choose 2 answers) 1. The Amazon EC2 instance ID 2. Password for the administrator account 3. Amazon EC2 instance type 4. Amazon Machine Image (AMI)

C, D. The Amazon EC2 instance ID will be assigned by AWS as part of the launch process. The administrator password is assigned by AWS and encrypted via the public key. The instance type defines the virtual hardware and the AMI defines the initial software state. You must specify both upon launch.

Which statements about Amazon Glacier are true? (Choose 3 answers) 1. Amazon Glacier stores data in objects that live in archives. 2. Amazon Glacier archives are identified by user-specified key names. 3. Amazon Glacier archives take three to five hours to restore. 4. Amazon Glacier vaults can be locked. 5. Amazon Glacier can be used as a standalone service and as an Amazon S3 storage class.

C, D, E. Amazon Glacier stores data in archives, which are contained in vaults. Archives are identified by system-created archive IDs, not key names.

You have a web application that contains both static content in an Amazon Simple Storage Service (Amazon S3) bucket (primarily images and CSS files) and also dynamic content currently served by a PHP web app running on Amazon Elastic Compute Cloud (Amazon EC2). What features of Amazon CloudFront can be used to support this application with a single Amazon CloudFront distribution? (Choose 2 answers) 1. Multiple Origin Access Identifiers 2. Multiple signed URLs 3. Multiple origins 4. Multiple edge locations 5. Multiple cache behaviors

C, E. Using multiple origins and setting multiple cache behaviors allow you to serve static and dynamic content from the same distribution. Origin Access Identifiers and signed URLs support serving private content from Amazon CloudFront, while multiple edge locations are simply how Amazon CloudFront serves any content.

Your application stores critical data in Amazon Simple Storage Service (Amazon S3), which must be protected against inadvertent or intentional deletion. How can this data be protected? (Choose 2 answers) 1. Use cross-region replication to copy data to another bucket automatically. 2. Set a vault lock. 3. Enable versioning on the bucket. 4. Use a lifecycle policy to migrate data to Amazon Glacier. 5. Enable MFA Delete on the bucket.

C, E. Versioning protects data against inadvertent or intentional deletion by storing all versions of the object, and MFA Delete requires a one-time code from a Multi-Factor Authentication (MFA) device to delete objects. Cross-region replication and migration to the Amazon Glacier storage class do not protect against deletion. Vault locks are a feature of Amazon Glacier, not a feature of Amazon S3.
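
A short boto3 sketch with a placeholder bucket name and MFA device; note that MFA Delete can only be enabled by the root user, passing the device serial and a current code:

```python
import boto3

s3 = boto3.client("s3")
bucket = "critical-data-bucket"  # placeholder bucket name

# Versioning keeps every version of every object, so a DELETE merely
# adds a delete marker instead of destroying data.
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# MFA Delete additionally requires a one-time code to delete versions.
s3.put_bucket_versioning(
    Bucket=bucket,
    MFA="arn:aws:iam::111111111111:mfa/root-account-mfa-device 123456",  # placeholder
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
)
```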

3. Your AWS account administrator left your company today. The administrator had access to the root user and a personal IAM administrator account. With these accounts, he generated other IAM accounts and keys. Which of the following should you do today to protect your AWS infrastructure? (Choose 4 answers) A. Change the password and add MFA to the root user. B. Put an IP restriction on the root user. C. Rotate keys and change passwords for IAM accounts. D. Delete all IAM accounts. E. Delete the administrator's personal IAM account. F. Relaunch all Amazon EC2 instances with new roles.

3. A, B, C, E. Locking down your root user and all accounts to which the administrator had access is the key here. Deleting all IAM accounts is not necessary, and it would cause great disruption to your operations. Amazon EC2 roles use temporary security tokens, so relaunching Amazon EC2 instances is not necessary.

3. What are some of the key characteristics of Amazon Simple Storage Service (Amazon S3)? (Choose 3 answers) A. All objects have a URL. B. Amazon S3 can store unlimited amounts of data. C. Objects are world-readable by default. D. Amazon S3 uses a REST (Representational State Transfer) Application Program Interface (API). E. You must pre-allocate the storage in a bucket.

3. A, B, D. C and E are incorrect—objects are private by default, and storage in a bucket does not need to be pre-allocated.

3. Which of the following are features of Amazon Simple Notification Service (Amazon SNS)? (Choose 3 answers) A. Publishers B. Readers C. Subscribers D. Topic

3. A, C, D. Publishers, subscribers, and topics are the correct answers. You have subscribers to an Amazon SNS topic, not readers.

3. Which of the following AWS Cloud services are designed according to the Multi-AZ principle? (Choose 2 answers) A. Amazon DynamoDB B. Amazon ElastiCache C. Elastic Load Balancing D. Amazon Virtual Private Cloud (Amazon VPC) E. Amazon Simple Storage Service (Amazon S3)

3. A, E. Amazon DynamoDB runs across AWS's proven, high-availability data centers. The service replicates data across three facilities in an AWS region to provide fault tolerance in the event of a server failure or Availability Zone outage. Amazon S3 provides durable infrastructure to store important data and is designed for durability of 99.999999999% of objects. Your data is redundantly stored across multiple facilities and multiple devices in each facility. While Elastic Load Balancing and Amazon ElastiCache can be deployed across multiple Availability Zones, you must explicitly take such steps when creating them.

3. AWS provides IT control information to customers in which of the following ways? A. By using specific control definitions or through general control standard compliance B. By using specific control definitions or through SAS 70 C. By using general control standard compliance and by complying with ISO 27001 D. By complying with ISO 27001 and SOC 1 Type II

3. A. AWS provides IT control information to customers through either specific control definitions or general control standard compliance.

3. Which of the following is an optional security control that can be applied at the subnet layer of a VPC? A. Network ACL B. Security Group C. Firewall D. Web application firewall

3. A. Network ACLs are associated to a VPC subnet to control traffic flow.

3. What is the deployment term for an environment that extends an existing on-premises infrastructure into the cloud to connect cloud resources to internal systems? A. All-in deployment B. Hybrid deployment C. On-premises deployment D. Scatter deployment

3. B. A hybrid deployment is a way to connect infrastructure and applications between cloud-based resources and existing resources that are not located in the cloud. An all-in deployment refers to an environment that exclusively runs in the cloud. An on-premises deployment refers to an environment that runs exclusively in an organization's data center.

3. A database (DB) security group controls network access to a DB instance inside a Virtual Private Cloud (VPC). What access does it allow by default? A. Access from any IP address for the standard ports that the database uses is provided by default. B. Access from any IP address for any port is provided by default in the DB security group. C. No access is provided by default, and any access must be explicitly added with a rule to the DB security group. D. Access for the database connection string is provided by default in the DB security group.

3. C. By default, network access is turned off to a DB Instance. You can specify rules in a security group that allows access from an IP address range, port, or Amazon Elastic Compute Cloud (Amazon EC2) security group.

3. You are a solutions architect working for a media company that hosts its website on AWS. Currently, there is a single Amazon Elastic Compute Cloud (Amazon EC2) Instance on AWS with MySQL installed locally to that Amazon EC2 Instance. You have been asked to make the company's production environment more resilient and to increase performance. You suggest that the company split out the MySQL database onto an Amazon RDS Instance with Multi-AZ enabled. This addresses the company's increased resiliency requirements. Now you need to suggest how you can increase performance. Ninety-nine percent of the company's end users are magazine subscribers who will be reading additional articles on the website, so only one percent of end users will need to write data to the site. What should you suggest to increase performance? A. Alter the connection string so that if a user is going to write data, it is written to the secondary copy of the Multi-AZ database. B. Alter the connection string so that if a user is going to write data, it is written to the primary copy of the Multi-AZ database. C. Recommend that the company use read replicas, and distribute the traffic across multiple read replicas. D. Migrate the MySQL database to Amazon Redshift to take advantage of columnar storage and maximize performance.

3. C. In this scenario, the best idea is to use read replicas to scale out the database and thus maximize read performance. When using Multi-AZ, the secondary database is not accessible and all reads and writes must go to the primary or any read replicas.

<p class="Question"><span lang="EN-US">Which cryptographic method is used by AWS Key Management Service (AWS KMS) to encrypt data?<o:p></o:p></span> 1. <p class="Option"><span lang="EN-US">Password-based encryption<o:p></o:p></span> 2. Asymmetric 3. Shared secret 4. Envelope encryption

4 <p class="Answer"><strong><span lang="EN-US">D.</span></strong><br><p class="Explanation"><span lang="EN-US">AWS KMS uses envelope encryption to protect data. AWS KMS creates a data key, encrypts it under a Customer Master Key (CMK), and returns plaintext and encrypted versions of the data key to you. You use the plaintext key to encrypt data and store the encrypted key alongside the encrypted data. You can retrieve a plaintext data key only if you have the encrypted data key and you have permission to use the corresponding master key.</span>

<p class="Question"><span lang="EN-US">Amazon Route 53 cannot route queries to which AWS resource?</span> 1. <p class="Option"><span lang="EN-US">Amazon CloudFront distribution</span> 2. Elastic Load Balancing load balancer 3. Amazon EC2 4. AWS OpsWorks

4 <p class="Answer"><strong><span lang="EN-US">D.</span></strong><br><p class="Explanation"><span lang="EN-US">Amazon Route 53 can route queries to a variety of AWS resources such as an Amazon CloudFront distribution, an Elastic Load Balancing load balancer, an Amazon EC2 instance, a website hosted in an Amazon S3 bucket, and an Amazon Relational Database (Amazon RDS).</span>

How does Amazon Simple Queue Service (Amazon SQS) deliver messages? 1. Last In, First Out (LIFO) 2. First In, First Out (FIFO) 3. Sequentially 4. Amazon SQS doesn't guarantee delivery of your messages in any particular order.

4 <p class="Answer"><strong><span lang="EN-US">D.</span></strong><br><p class="Explanation"><span lang="EN-US">Amazon SQS does not guarantee in what order your messages will be delivered.</span>

Which protocol is primarily used by DNS to serve requests? 1. Transmission Control Protocol (TCP) 2. Hyper Text Transfer Protocol (HTTP) 3. File Transfer Protocol (FTP) 4. User Datagram Protocol (UDP)

4 <p class="Answer"><strong><span lang="EN-US">D.</span></strong><br><p class="Explanation"><span lang="EN-US">DNS primarily uses UDP to serve requests.</span>

Your company has its primary production site in Western Europe and its DR site in the Asia Pacific. You need to configure DNS so that if your primary site becomes unavailable, you can fail DNS over to the secondary site. Which DNS routing policy would best achieve this? 1. Weighted routing 2. Geolocation routing 3. Simple routing 4. Failover routing

4 <p class="Answer"><strong><span lang="EN-US">D.</span></strong><br><p class="Explanation"><span lang="EN-US">Failover-based routing would best achieve this objective.</span>

What is the longest time available for an Amazon Simple Queue Service (Amazon SQS) visibility timeout? 1. 30 seconds 2. 60 seconds 3. 1 hour 4. 12 hours

4 <p class="Answer"><strong><span lang="EN-US">D.</span></strong><br><p class="Explanation"><span lang="EN-US">The maximum time for an Amazon SQS visibility timeout is 12 hours.</span>

<p class="Question"><span lang="EN-US">When configuring Amazon Route 53 as your DNS service for an existing domain, which is the first step that needs to be performed?</span> 1. Create hosted zones. 2. <p class="Option"><span lang="EN-US">Create resource record sets.</span> 3. <p class="Option"><span lang="EN-US">Register a domain with Amazon Route 53.</span> 4. Transfer domain registration from current registrar to Amazon Route 53.

4 <p class="Answer"><strong><span lang="EN-US">D.</span></strong><br><p class="Explanation"><span lang="EN-US">You must first transfer the existing domain registration from another registrar to Amazon Route 53 to configure it as your DNS service.</span>

Which of the following is the Amazon side of an Amazon VPN connection? 1. An EIP 2. A CGW 3. An IGW 4. A VPG

D. A CGW is the customer side of a VPN connection, and an IGW connects a network to the Internet. A VPG is the Amazon side of a VPN connection.

Which type of DNS record should you use to resolve an IP address to a domain name? 1. An A record 2. A CNAME record 3. An SPF record 4. A PTR record

D. A PTR record is used to resolve an IP address to a domain name, and it is commonly referred to as "reverse DNS."

Which of the following describes a physical location around the world where AWS clusters data centers? 1. Endpoint 2. Collection 3. Fleet 4. Region

D. A region is a named set of AWS resources in the same geographical area. A region comprises at least two Availability Zones. Endpoint, Collection, and Fleet do not describe a physical location around the world where AWS clusters data centers.

Why is the launch configuration referenced by the Auto Scaling group instead of being part of the Auto Scaling group? 1. It allows you to change the Amazon Elastic Compute Cloud (Amazon EC2) instance type and Amazon Machine Image (AMI) without disrupting the Auto Scaling group. 2. It facilitates rolling out a patch to an existing set of instances managed by an Auto Scaling group. 3. It allows you to change security groups associated with the instances launched without having to make changes to the Auto Scaling group. 4. All of the above 5. None of the above

D. A, B, and C are all true statements about launch configurations being loosely coupled and referenced by the Auto Scaling group instead of being part of the Auto Scaling group.

All of the website deployments are currently done by your company's development team. With a surge in website popularity, the company is looking for ways to be more agile with deployments. What AWS Cloud service can help the developers focus more on writing code instead of spending time managing and configuring servers, databases, load balancers, firewalls, and networks? 1. AWS Config 2. AWS Trusted Advisor 3. Amazon Kinesis 4. AWS Elastic Beanstalk

D. AWS Elastic Beanstalk is the fastest and simplest way to get an application up and running on AWS. Developers can simply upload their application code, and the service automatically handles all the details such as resource provisioning, load balancing, Auto Scaling, and monitoring.

Your company provides an online photo sharing service. The development team is looking for ways to deliver image files with the lowest latency to end users so the website content is delivered with the best possible performance. What service can help speed up distribution of these image files to end users around the world? 1. Amazon Elastic Compute Cloud (Amazon EC2) 2. Amazon Route 53 3. AWS Storage Gateway 4. Amazon CloudFront

D. Amazon CloudFront is a web service that provides a CDN to speed up distribution of your static and dynamic web content (for example, .html, .css, .php, image, and media files) to end users. Amazon CloudFront delivers content through a worldwide network of edge locations. Amazon EC2, Amazon Route 53, and AWS Storage Gateway do not provide the CDN services required to meet the needs of the photo sharing service.

How long does Amazon CloudWatch keep metric data? 1. 1 day 2. 2 days 3. 1 week 4. 2 weeks

D. Amazon CloudWatch metric data is kept for 2 weeks.

In the basic monitoring package for Amazon Elastic Compute Cloud (Amazon EC2), what Amazon CloudWatch metrics are available? 1. Web server visible metrics such as number of failed transaction requests 2. Operating system visible metrics such as memory utilization 3. Database visible metrics such as number of connections 4. Hypervisor visible metrics such as CPU utilization

D. Basic monitoring provides hypervisor-visible metrics, such as CPU utilization.

Which of the following public identity providers are supported by Amazon Cognito Identity? 1. Amazon 2. Google 3. Facebook 4. All of the above

D. Amazon Cognito Identity supports public identity providers (Amazon, Facebook, and Google) as well as unauthenticated identities.

Which AWS database service is best suited for non-relational databases? 1. Amazon Redshift 2. Amazon Relational Database Service (Amazon RDS) 3. Amazon Glacier 4. Amazon DynamoDB

D. Amazon DynamoDB is best suited for non-relational databases. Amazon RDS and Amazon Redshift are both structured relational databases.

Which of the following Amazon VPC resources would you use in order for EC2-VPC instances to send traffic directly to Amazon S3? 1. Amazon S3 gateway 2. IGW 3. CGW 4. VPC endpoint

D. An Amazon VPC endpoint enables you to create a private connection between your Amazon VPC and another AWS service without requiring access over the Internet or through a NAT device, VPN connection, or AWS Direct Connect.
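
Creating a gateway endpoint for Amazon S3 might look like this in boto3 (the VPC and route table IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# The endpoint adds routes to the given route tables so instances reach
# S3 privately, without an IGW, NAT device, VPN, or Direct Connect.
ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)
```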

You need a secure way to distribute your AWS credentials to an application running on Amazon Elastic Compute Cloud (Amazon EC2) instances in order to access supplementary AWS Cloud services. What approach provides your application access to use short-term credentials for signing requests while protecting those credentials from other users? 1. Add your credentials to the UserData parameter of each Amazon EC2 instance. 2. Use a configuration file to store your access and secret keys on the Amazon EC2 instances. 3. Specify your access and secret keys directly in your application. 4. Provision the Amazon EC2 instances with an instance profile that has the appropriate privileges.

D. An instance profile is a container for an AWS Identity and Access Management (IAM) role that you can use to pass role information to an Amazon EC2 instance when the instance starts. The IAM role should have a policy attached that only allows access to the AWS Cloud services necessary to perform its function.
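
On an instance launched with an instance profile, the SDK picks up the role's short-term credentials automatically; a sketch (the metadata call uses IMDSv1 for brevity):

```python
import boto3
import requests

# No keys in code or config files: boto3 falls through its credential
# chain to the instance profile and signs requests with the role's
# temporary credentials.
s3 = boto3.client("s3")
print([b["Name"] for b in s3.list_buckets()["Buckets"]])

# The credentials come from the instance metadata service and are
# rotated automatically before they expire.
base = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"
role_name = requests.get(base, timeout=2).text
print(requests.get(base + role_name, timeout=2).json()["Expiration"])
```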

Which of the following statements is true when it comes to the risk and compliance advantages of the AWS environment? 1. Workloads must be moved entirely into the AWS Cloud in order to be compliant with various certifications and third-party attestations. 2. The critical components of a workload must be moved entirely into the AWS Cloud in order to be compliant with various certifications and third-party attestations, but the non-critical components do not. 3. The non-critical components of a workload must be moved entirely into the AWS Cloud in order to be compliant with various certifications and third-party attestations, but the critical components do not. 4. Few, many, or all components of a workload can be moved to the AWS Cloud, but it is the customer's responsibility to ensure that their entire workload remains compliant with various certifications and third-party attestations.

D. Any number of components of a workload can be moved into AWS, but it is the customer's responsibility to ensure that the entire workload remains compliant with various certifications and third-party attestations.

Which of the following can be accomplished through bootstrapping? 1. Install the most current security updates. 2. Install the current version of the application. 3. Configure Operating System (OS) services. 4. All of the above.

D. Bootstrapping runs the provided script, so anything you can accomplish in a script you can accomplish during bootstrapping.
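
For example, a user data script passed at launch can patch, install, and configure the instance on first boot; the AMI ID is a placeholder:

```python
import boto3

user_data = """#!/bin/bash
yum update -y                 # install the most current security updates
yum install -y httpd          # install the current application platform
systemctl enable --now httpd  # configure and start an OS service
"""

ec2 = boto3.client("ec2")
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder Amazon Linux AMI
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,               # runs once at first boot
)
```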

You create a new subnet and then add a route to your route table that routes traffic out from that subnet to the Internet using an IGW. What type of subnet have you created? 1. An internal subnet 2. A private subnet 3. An external subnet 4. A public subnet

D. By creating a route out to the Internet using an IGW, you have made this subnet public.

Who is responsible for the configuration of security groups in an AWS environment? 1. The customer and AWS are both jointly responsible for ensuring that security groups are correctly and securely configured. 2. AWS is responsible for ensuring that all security groups are correctly and securely configured. Customers do not need to worry about security group configuration. 3. Neither AWS nor the customer is responsible for the configuration of security groups; security groups are intelligently and automatically configured using traffic heuristics. 4. AWS provides the security group functionality as a service, but the customer is responsible for correctly and securely configuring their own security groups.

D. Customers are responsible for ensuring that all of their security group configurations are appropriate for their own applications; therefore, answer D is correct.

Which of the following Elastic Load Balancing options ensure that the load balancer determines which cipher is used for a Secure Sockets Layer (SSL) connection? 1. Client Server Cipher Suite 2. Server Cipher Only 3. First Server Cipher 4. Server Order Preference

D. Elastic Load Balancing supports the Server Order Preference option for negotiating connections between a client and a load balancer. During the SSL connection negotiation process, the client and the load balancer present a list of ciphers and protocols that they each support, in order of preference. By default, the first cipher on the client's list that matches any one of the load balancer's ciphers is selected for the SSL connection. If the load balancer is configured to support Server Order Preference, then the load balancer selects the first cipher in its list that is also in the client's list of ciphers. This ensures that the load balancer determines which cipher is used for the SSL connection. If you do not enable Server Order Preference, the order of ciphers presented by the client is used to negotiate connections between the client and the load balancer.

Which of the following is the security protocol supported by Amazon VPC? 1. SSH 2. Advanced Encryption Standard (AES) 3. Point-to-Point Tunneling Protocol (PPTP) 4. IPsec

D. IPsec is the security protocol supported by Amazon VPC.

Your application polls an Amazon Simple Queue Service (Amazon SQS) queue frequently and returns immediately, often with empty ReceiveMessageResponses. What is one thing that can be done to reduce Amazon SQS costs? 1. Pricing on Amazon SQS does not include a cost for service requests; therefore, there is no concern. 2. Increase the timeout value for short polling to wait for messages longer before returning a response. 3. Change the message visibility value to a higher number. 4. Use long polling by supplying a WaitTimeSeconds of greater than 0 seconds when calling ReceiveMessage.

D. Long polling allows your application to poll the queue and, if nothing is there, Amazon SQS waits for an amount of time you specify (between 1 and 20 seconds). If a message arrives in that time, it is delivered to your application as soon as possible. If a message does not arrive in that time, you need to execute the ReceiveMessage function again.
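
A minimal consumer loop body using long polling (the queue URL is a placeholder):

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/111111111111/work"  # placeholder

# WaitTimeSeconds > 0 enables long polling: the call blocks up to 20
# seconds for a message instead of returning empty immediately, which
# cuts the number of billable ReceiveMessage requests.
resp = sqs.receive_message(QueueUrl=queue_url,
                           WaitTimeSeconds=20,
                           MaxNumberOfMessages=10)
for msg in resp.get("Messages", []):
    print(msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```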

You are a system administrator whose company has moved its production database to AWS. Your company monitors its estate using Amazon CloudWatch, which sends alarms using Amazon Simple Notification Service (Amazon SNS) to your mobile phone. One night, you get an alert that your primary Amazon Relational Database Service (Amazon RDS) Instance has gone down. You have Multi-AZ enabled on this instance. What should you do to ensure the failover happens quickly? 1. Update your Domain Name System (DNS) to point to the secondary instance's new IP address, forcing your application to fail over to the secondary instance. 2. Connect to your server using Secure Shell (SSH) and update your connection strings so that your application can communicate to the secondary instance instead of the failed primary instance. 3. Take a snapshot of the secondary instance and create a new instance using this snapshot, then update your connection string to point to the new instance. 4. No action is necessary. Your connection string points to the database endpoint, and AWS automatically updates this endpoint to point to your secondary instance.

D. Monitor the environment while Amazon RDS attempts to recover automatically. AWS will update the DB endpoint to point to the secondary instance automatically.

How many VPC Peering connections are required for four VPCs located within the same AWS region to be able to send traffic to each of the others? 1. 3 2. 4 3. 5 4. 6

D. Six VPC Peering connections are needed for each of the four VPCs to send traffic to the others. A full mesh of n VPCs requires n(n-1)/2 peering connections: 4 × 3 / 2 = 6.

Which service allows you to process nearly limitless streams of data in flight? 1. Amazon Kinesis Firehose 2. Amazon Elastic MapReduce (Amazon EMR) 3. Amazon Redshift 4. Amazon Kinesis Streams

D. The Amazon Kinesis services enable you to work with large data streams. Within the Amazon Kinesis family of services, Amazon Kinesis Firehose saves streams to AWS storage services, while Amazon Kinesis Streams provide the ability to process the data in the stream.

Which of the following is the most recent version of the AWS digital signature calculation process? 1. Signature Version 1 2. Signature Version 2 3. Signature Version 3 4. Signature Version 4

D. The Signature Version 4 signing process describes how to add authentication information to AWS requests. For security, most requests to AWS must be signed with an access key (Access Key ID [AKI] and Secret Access Key [SAK]). If you use the AWS Command Line Interface (AWS CLI) or one of the AWS Software Development Kits (SDKs), those tools automatically sign requests for you based on credentials that you specify when you configure the tools. However, if you make direct HTTP or HTTPS calls to AWS, you must sign the requests yourself.
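
The documented Signature Version 4 key derivation is a chain of HMAC-SHA256 operations; a sketch with a throwaway secret key:

```python
import hashlib
import hmac

def _hmac(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    """Derive the SigV4 signing key; date is YYYYMMDD."""
    k_date = _hmac(("AWS4" + secret_key).encode("utf-8"), date)
    k_region = _hmac(k_date, region)
    k_service = _hmac(k_region, service)
    return _hmac(k_service, "aws4_request")

# The derived key signs the string-to-sign for one date/region/service scope.
print(signing_key("wJalrXUtnFEMI/EXAMPLEKEY", "20170101", "us-east-1", "s3").hex())
```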

You create an Auto Scaling group in a new region that is configured with a minimum size value of 10, a maximum size value of 100, and a desired capacity value of 50. However, you notice that 30 of the Amazon Elastic Compute Cloud (Amazon EC2) instances within the Auto Scaling group fail to launch. Which of the following is the cause of this behavior? 1. You cannot define an Auto Scaling group larger than 20. 2. The Auto Scaling group maximum value cannot be more than 20. 3. You did not attach an Elastic Load Balancing load balancer to the Auto Scaling group. 4. You have not raised your default Amazon EC2 capacity (20) for the new region.

D. The default Amazon EC2 instance limit for all regions is 20.

What is the longest configurable message retention period for Amazon Simple Queue Service (Amazon SQS)? 1. 30 minutes 2. 4 days 3. 30 seconds 4. 14 days

D. The longest configurable message retention period for Amazon SQS is 14 days.

Your instance is associated with two security groups. The first allows Remote Desktop Protocol (RDP) access over port 3389 from Classless Inter-Domain Routing (CIDR) block 72.14.0.0/16. The second allows HTTP access over port 80 from CIDR block 0.0.0.0/0. What traffic can reach your instance? 1. RDP and HTTP access from CIDR block 0.0.0.0/0 2. No traffic is allowed. 3. RDP and HTTP traffic from 72.14.0.0/16 4. RDP traffic over port 3389 from 72.14.0.0/16 and HTTP traffic over port 80 from 0.0.0.0/0

D. When there are multiple security groups associated with an instance, all the rules are aggregated.
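
The two groups from the question could be built like this with boto3 (group IDs are placeholders); an instance in both groups gets the union of all rules:

```python
import boto3

ec2 = boto3.client("ec2")

# RDP from the corporate CIDR block only.
ec2.authorize_security_group_ingress(
    GroupId="sg-aaaa1111",
    IpProtocol="tcp", FromPort=3389, ToPort=3389, CidrIp="72.14.0.0/16",
)
# HTTP from anywhere.
ec2.authorize_security_group_ingress(
    GroupId="sg-bbbb2222",
    IpProtocol="tcp", FromPort=80, ToPort=80, CidrIp="0.0.0.0/0",
)
```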

Which of the following describes the scheme used by an Amazon Redshift cluster leveraging AWS Key Management Service (AWS KMS) to encrypt data-at-rest? 1. Amazon Redshift uses a one-tier, key-based architecture for encryption. 2. Amazon Redshift uses a two-tier, key-based architecture for encryption. 3. Amazon Redshift uses a three-tier, key-based architecture for encryption. 4. Amazon Redshift uses a four-tier, key-based architecture for encryption.

D. When you choose AWS KMS for key management with Amazon Redshift, there is a four-tier hierarchy of encryption keys. These keys are the master key, a cluster key, a database key, and data encryption keys.

Amazon Simple Notification Service (Amazon SNS) is a push notification service that lets you send individual or multiple messages to large numbers of recipients. What types of clients are supported? 1. Java and JavaScript clients that support publisher and subscriber types 2. Producers and consumers supported by C and C++ clients 3. Mobile and AMQP support for publisher and subscriber client types 4. Publisher and subscriber client types

D. With Amazon SNS, you send individual or multiple messages to large numbers of recipients using publisher and subscriber client types.

In what ways does Amazon Simple Storage Service (Amazon S3) object storage differ from block and file storage? (Choose 2 answers) 1. Amazon S3 stores data in fixed size blocks. 2. Objects are identified by a numbered address. 3. Objects can be any size. 4. Objects contain both data and metadata. 5. Objects are stored in buckets.

D, E. Objects are stored in buckets, and objects contain both data and metadata.

4. Which of the following is a valid report, certification, or third-party attestation for AWS? (Choose 3 answers) A. SOC 1 B. PCI DSS Level 1 C. SOC 4 D. ISO 27001

4. A, B, D. There is no such thing as a SOC 4 report; therefore, answer C is incorrect.

4. You have purchased an m3.xlarge Linux Reserved instance in us-east-1a. In which ways can you modify this reservation? (Choose 2 answers) A. Change it into two m3.large instances. B. Change it to a Windows instance. C. Move it to us-east-1b. D. Change it to an m4.xlarge.

4. A, C. You can change the instance type only within the same instance type family, or you can change the Availability Zone. You cannot change the operating system nor the instance type family.

4. Your e-commerce site was designed to be stateless and currently runs on a fleet of Amazon Elastic Compute Cloud (Amazon EC2) instances. In an effort to control cost and increase availability, you have a requirement to scale the fleet based on CPU and network utilization to match the demand curve for your site. What services do you need to meet this requirement? (Choose 2 answers) A. Amazon CloudWatch B. Amazon DynamoDB C. Elastic Load Balancing D. Auto Scaling E. Amazon Simple Storage Service (Amazon S3)

4. A, D. Auto Scaling enables you to follow the demand curve for your applications closely, reducing the need to provision Amazon EC2 capacity manually in advance. For example, you can set a condition to add new Amazon EC2 instances in increments to the Auto Scaling group when the average CPU and network utilization of your Amazon EC2 fleet monitored in Amazon CloudWatch is high; similarly, you can set a condition to remove instances in the same increments when CPU and network utilization are low.

4. Which AWS Cloud service is best suited for Online Analytics Processing (OLAP)? A. Amazon Redshift B. Amazon Relational Database Service (Amazon RDS) C. Amazon Glacier D. Amazon DynamoDB

4. A. Amazon Redshift is best suited for traditional OLAP workloads. While Amazon RDS can also be used for OLAP, Amazon Redshift is purpose-built as an OLAP data warehouse.

4. Which encryption algorithm is used by Amazon Simple Storage Service (Amazon S3) to encrypt data at rest with Server-Side Encryption (SSE)? A. Advanced Encryption Standard (AES)-256 B. RSA 1024 C. RSA 2048 D. AES-128

4. A. Amazon S3 SSE uses one of the strongest block ciphers available, 256-bit AES.

4. Which of the following are the minimum required elements to create an Auto Scaling launch configuration? A. Launch configuration name, Amazon Machine Image (AMI), and instance type B. Launch configuration name, AMI, instance type, and key pair C. Launch configuration name, AMI, instance type, key pair, and security group D. Launch configuration name, AMI, instance type, key pair, security group, and block device mapping

4. A. Only the launch configuration name, AMI, and instance type are needed to create an Auto Scaling launch configuration. Identifying a key pair, security group, and a block device mapping are optional elements for an Auto Scaling launch configuration.
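
A sketch of the minimal call, with a placeholder name and AMI; everything beyond these three parameters is optional:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Name, AMI, and instance type are the only required elements; key pair,
# security groups, and block device mappings may be added but are optional.
autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-lc-v1",       # placeholder name
    ImageId="ami-0123456789abcdef0",           # placeholder AMI
    InstanceType="t2.micro",
)
```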

4. How many nodes can you add to an Amazon ElastiCache cluster running Redis? A. 1 B. 5 C. 20 D. 100

4. A. Redis clusters can only contain a single node; however, you can group multiple clusters together into a replication group.

4. What is the default time for an Amazon Simple Queue Service (Amazon SQS) visibility timeout? A. 30 seconds B. 60 seconds C. 1 hour D. 12 hours

4. A. The default time for an Amazon SQS visibility timeout is 30 seconds.

4. Which features can be used to restrict access to Amazon Simple Storage Service (Amazon S3) data? (Choose 3 answers) A. Enable static website hosting on the bucket. B. Create a pre-signed URL for an object. C. Use an Amazon S3 Access Control List (ACL) on a bucket or object. D. Use a lifecycle policy. E. Use an Amazon S3 bucket policy.

4. B, C, E. Static website hosting does not restrict data access, and neither does an Amazon S3 lifecycle policy.

4. Which of the following actions can be authorized by IAM? (Choose 2 answers) A. Installing ASP.NET on a Windows Server B. Launching an Amazon Linux EC2 instance C. Querying an Oracle database D. Adding a message to an Amazon Simple Queue Service (Amazon SQS) queue

4. B, D. IAM controls access to AWS resources only. Installing ASP.NET will require Windows operating system authorization, and querying an Oracle database will require Oracle authorization.

4. You are building a media-sharing web application that serves video files to end users on both PCs and mobile devices. The media files are stored as objects in an Amazon Simple Storage Service (Amazon S3) bucket, but are to be delivered through Amazon CloudFront. What is the simplest way to ensure that only Amazon CloudFront has access to the objects in the Amazon S3 bucket? A. Create Signed URLs for each Amazon S3 object. B. Use an Amazon CloudFront Origin Access Identifier (OAI). C. Use public and private keys with signed cookies. D. Use an AWS Identity and Access Management (IAM) bucket policy.

4. B. Amazon CloudFront OAI is a special identity that can be used to restrict access to an Amazon S3 bucket only to an Amazon CloudFront distribution. Signed URLs, signed cookies, and IAM bucket policies can help to protect content served through Amazon CloudFront, but OAIs are the simplest way to ensure that only Amazon CloudFront has access to a bucket.

4. Which AWS Cloud service allows organizations to gain system-wide visibility into resource utilization, application performance, and operational health? A. AWS Identity and Access Management (IAM) B. Amazon Simple Notification Service (Amazon SNS) C. Amazon CloudWatch D. AWS CloudFormation

4. C. Amazon CloudWatch is a monitoring service for AWS Cloud resources and the applications organizations run on AWS. It allows organizations to collect and track metrics, collect and monitor log files, and set alarms. AWS IAM, Amazon SNS, and AWS CloudFormation do not provide visibility into resource utilization, application performance, and the operational health of your AWS resources.

5. Which of the following are IAM security features? (Choose 2 answers) A. Password policies B. Amazon DynamoDB global secondary indexes C. MFA D. Consolidated Billing

5. A, C. Amazon DynamoDB global secondary indexes are a performance feature of Amazon DynamoDB; Consolidated Billing is an accounting feature allowing all bills to roll up under a single account. While both are very valuable features, neither is a security feature.

5. Which of the following statements is true? A. IT governance is still the customer's responsibility, despite deploying their IT estate onto the AWS platform. B. The AWS platform is PCI DSS-compliant to Level 1. Customers can deploy their web applications to this platform, and they will be PCI DSS-compliant automatically. C. The shared responsibility model applies to IT security only; it does not relate to governance. D. AWS doesn't take risk management very seriously, and it's up to the customer to mitigate risks to the AWS infrastructure.

5. A. IT governance is still the customer's responsibility.

5. An application currently uses Memcached to cache frequently used database queries. Which steps are required to migrate the application to use Amazon ElastiCache with minimal changes? (Choose 2 answers) A. Recompile the application to use the Amazon ElastiCache libraries. B. Update the configuration file with the endpoint for the Amazon ElastiCache cluster. C. Configure a security group to allow access from the application servers. D. Connect to the Amazon ElastiCache nodes using Secure Shell (SSH) and install the latest version of Memcached.

5. B, C. Amazon ElastiCache is Application Programming Interface (API)-compatible with existing Memcached clients and does not require the application to be recompiled or linked against the libraries. Amazon ElastiCache manages the deployment of the Amazon ElastiCache binaries.

5. Your compliance department has mandated a new requirement that all data on Amazon Elastic Block Storage (Amazon EBS) volumes must be encrypted. Which of the following steps would you follow for your existing Amazon EBS volumes to comply with the new requirement? (Choose 3 answers) A. Move the existing Amazon EBS volume into an Amazon Virtual Private Cloud (Amazon VPC). B. Create a new Amazon EBS volume with encryption enabled. C. Modify the existing Amazon EBS volume properties to enable encryption. D. Attach an Amazon EBS volume with encryption enabled to the instance that hosts the data, then migrate the data to the encryption-enabled Amazon EBS volume. E. Copy the data from the unencrypted Amazon EBS volume to the Amazon EBS volume with encryption enabled.

5. B, D, E. There is no direct way to encrypt an existing unencrypted volume. However, you can migrate data between encrypted and unencrypted volumes.

5. Which of the following AWS Cloud services is a fully managed NoSQL database service? A. Amazon Simple Queue Service (Amazon SQS) B. Amazon DynamoDB C. Amazon ElastiCache D. Amazon Relational Database Service (Amazon RDS)

5. B. Amazon DynamoDB is a fully managed, fast, and flexible NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale. Amazon SQS, Amazon ElastiCache, and Amazon RDS do not provide a NoSQL database service. Amazon SQS is a managed message queuing service. Amazon ElastiCache is a service that provides in-memory cache in the cloud. Finally, Amazon RDS provides managed relational databases.

5. You have been using Amazon Relational Database Service (Amazon RDS) for the last year to run an important application with automated backups enabled. One of your team members is performing routine maintenance and accidentally drops an important table, causing an outage. How can you recover the missing data while minimizing the duration of the outage? A. Perform an undo operation and recover the table. B. Restore the database from a recent automated DB snapshot. C. Restore only the dropped table from the DB snapshot. D. The data cannot be recovered.

5. B. DB Snapshots can be used to restore a complete copy of the database at a specific point in time. Individual tables cannot be extracted from a snapshot.

5. You are responsible for the application logging solution for your company's existing applications running on multiple Amazon EC2 instances. Which of the following is the best approach for aggregating the application logs within AWS? A. Amazon CloudWatch custom metrics B. Amazon CloudWatch Logs Agent C. An Elastic Load Balancing listener D. An internal Elastic Load Balancing load balancer

5. B. You can use the Amazon CloudWatch Logs Agent installer on existing Amazon EC2 instances to install and configure the CloudWatch Logs Agent.

5. You host a web application across multiple AWS regions in the world, and you need to configure your DNS so that your end users will get the fastest network performance possible. Which routing policy should you apply? A. Geolocation routing B. Latency-based routing C. Simple routing D. Weighted routing

5. B. You want your users to have the fastest network access possible. To do this, you would use latency-based routing. Geolocation routing would not achieve this as well as latency-based routing, which is specifically geared toward measuring latency and thus would direct you to the AWS region in which you would have the lowest latency.

5. Your company data center is completely full, but the sales group has determined a need to store 200TB of product video. The videos were created over the last several years, with the most recent being accessed by sales the most often. The data must be accessed locally, but there is no space in the data center to install local storage devices to store this data. What AWS cloud service will meet sales' requirements? A. AWS Storage Gateway Gateway-Stored volumes B. Amazon Elastic Compute Cloud (Amazon EC2) instances with attached Amazon EBS Volumes C. AWS Storage Gateway Gateway-Cached volumes D. AWS Import/Export Disk

5. C. AWS Storage Gateway allows you to access data in Amazon S3 locally, with the Gateway-Cached volume configuration allowing you to expand a relatively small amount of local storage into Amazon S3.

*How many copies of my data does RDS Aurora store by default?* * 1 * 3 * 2 * 6

*6* Amazon Aurora stores six copies of your data by default, replicated across three Availability Zones.

6. Which of the following are features of enhanced networking? (Choose 3 answers) A. More Packets Per Second (PPS) B. Lower latency C. Multiple network interfaces D. Border Gateway Protocol (BGP) routing E. Less jitter

6. A, B, E. These are the benefits of enhanced networking.

6. When building a Distributed Denial of Service (DDoS)-resilient architecture, how does Amazon Virtual Private Cloud (Amazon VPC) help minimize the attack surface area? (Choose 3 answers) A. Reduces the number of necessary Internet entry points B. Combines end user traffic with management traffic C. Obfuscates necessary Internet entry points to the level that untrusted end users cannot access them D. Adds non-critical Internet entry points to the architecture E. Scales the network to absorb DDoS attacks

6. A, C, D. The attack surface is composed of the different Internet entry points that allow access to your application. The strategy to minimize the attack surface area is to (a) reduce the number of necessary Internet entry points, (b) eliminate non-critical Internet entry points, (c) separate end user traffic from management traffic, (d) obfuscate necessary Internet entry points to the level that untrusted end users cannot access them, and (e) decouple Internet entry points to minimize the effects of attacks. This strategy can be accomplished with Amazon VPC.

6. Which Amazon Relational Database Service (Amazon RDS) database engines support Multi-AZ? A. All of them B. Microsoft SQL Server, MySQL, and Oracle C. Oracle, Amazon Aurora, and PostgreSQL D. MySQL

6. A. All Amazon RDS database engines support Multi-AZ deployment.

6. Your company experiences fluctuations in traffic patterns to their e-commerce website based on flash sales. What service can help your company dynamically match the required compute capacity to the spike in traffic during flash sales? A. Auto Scaling B. Amazon Glacier C. Amazon Simple Notification Service (Amazon SNS) D. Amazon Virtual Private Cloud (Amazon VPC)

6. A. Auto Scaling helps maintain application availability and allows organizations to scale Amazon Elastic Compute Cloud (Amazon EC2) capacity up or down automatically according to conditions defined for the particular workload. Not only can it be used to help ensure that the desired number of Amazon EC2 instances are running, but it also allows resources to scale in and out to match the demands of dynamic workloads. Amazon Glacier, Amazon SNS, and Amazon VPC do not provide services to scale compute capacity automatically.

6. What happens when you create a new Amazon VPC? A. A main route table is created by default. B. Three subnets are created by default—one for each Availability Zone. C. Three subnets are created by default in one Availability Zone. D. An IGW is created by default.

6. A. When you create an Amazon VPC, a route table is created by default. You must manually create subnets and an IGW.

6. Which of the following are benefits of using Amazon EC2 roles? (Choose 2 answers) A. No policies are required. B. Credentials do not need to be stored on the Amazon EC2 instance. C. Key rotation is not necessary. D. Integration with Active Directory is automatic.

6. B, C. With Amazon EC2 roles, temporary credentials are delivered through instance metadata and rotated automatically, so nothing needs to be stored on the instance and manual key rotation is unnecessary. Amazon EC2 roles must still be assigned a policy. Integration with Active Directory involves integration between Active Directory and IAM via SAML.

6. How can you back up data stored in Amazon ElastiCache running Redis? (Choose 2 answers) A. Create an image of the Amazon Elastic Compute Cloud (Amazon EC2) instance. B. Configure automatic snapshots to back up the cache environment every night. C. Create a snapshot manually. D. Redis clusters cannot be backed up.

6. B, C. Amazon ElastiCache with the Redis engine allows for both manual and automatic snapshots. Memcached does not have a backup function.

6. Which of the following options are valid properties of an Amazon Simple Queue Service (Amazon SQS) message? (Choose 2 answers) A. Destination B. Message ID C. Type D. Body

6. B, D. The valid properties of an SQS message are Message ID and Body. Each message receives a system-assigned Message ID that Amazon SQS returns to you in the SendMessage response. The Message Body is composed of name/value pairs and the unstructured, uninterpreted content.

6. Your company wants to extend their existing Microsoft Active Directory capability into an Amazon Virtual Private Cloud (Amazon VPC) without establishing a trust relationship with the existing on-premises Active Directory. Which of the following is the best approach to achieve this goal? A. Create and connect an AWS Directory Service AD Connector. B. Create and connect an AWS Directory Service Simple AD. C. Create and connect an AWS Directory Service for Microsoft Active Directory (Enterprise Edition). D. None of the above

6. B. Simple AD is a Microsoft Active Directory-compatible directory that is powered by Samba 4. Simple AD supports commonly used Active Directory features such as user accounts, group memberships, domain-joining Amazon Elastic Compute Cloud (Amazon EC2) instances running Linux and Microsoft Windows, Kerberos-based Single Sign-On (SSO), and group policies. AD Connector requires a connection to the existing on-premises directory, which the scenario excludes.

6. Which of the following is the name of the security model employed by AWS with its customers? A. The shared secret model B. The shared responsibility model C. The shared secret key model D. The secret key responsibility model

6. B. The shared responsibility model is the name of the model employed by AWS with its customers.

6. Your company stores documents in Amazon Simple Storage Service (Amazon S3), but it wants to minimize cost. Most documents are used actively for only about a month, then much less frequently. However, all data needs to be available within minutes when requested. How can you meet these requirements? A. Migrate the data to Amazon S3 Reduced Redundancy Storage (RRS) after 30 days. B. Migrate the data to Amazon Glacier after 30 days. C. Migrate the data to Amazon S3 Standard - Infrequent Access (IA) after 30 days. D. Turn on versioning, then migrate the older version to Amazon Glacier.

6. C. Migrating the data to Amazon S3 Standard-IA after 30 days using a lifecycle policy is correct. Amazon S3 RRS should only be used for easily replicated data, not critical data. Migration to Amazon Glacier might minimize storage costs if retrievals are infrequent, but documents would not be available in minutes when needed.
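
A hedged boto3 sketch of the lifecycle rule described in answer C (the bucket name and rule ID are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# Transition every object to Standard-IA 30 days after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="document-archive",  # placeholder
    LifecycleConfiguration={
        "Rules": [{
            "ID": "to-standard-ia-after-30-days",
            "Filter": {"Prefix": ""},  # apply to the whole bucket
            "Status": "Enabled",
            "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
        }]
    },
)
```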

6. Which of the following must be configured on an Elastic Load Balancing load balancer to accept incoming traffic? A. A port B. A network interface C. A listener D. An instance

6. C. You configure your load balancer to accept incoming traffic by specifying one or more listeners.

6. Which DNS record should you use to configure the transmission of email to your intended mail server? A. SPF records B. A records C. MX records D. SOA record

6. C. You would use Mail eXchange (MX) records to define which inbound destination mail server should be used.

6. Which of the following statements is true when it comes to the risk and compliance advantages of the AWS environment? A. Workloads must be moved entirely into the AWS Cloud in order to be compliant with various certifications and third-party attestations. B. The critical components of a workload must be moved entirely into the AWS Cloud in order to be compliant with various certifications and third-party attestations, but the non-critical components do not. C. The non-critical components of a workload must be moved entirely into the AWS Cloud in order to be compliant with various certifications and third-party attestations, but the critical components do not. D. Few, many, or all components of a workload can be moved to the AWS Cloud, but it is the customer's responsibility to ensure that their entire workload remains compliant with various certifications and third-party attestations.

6. D. Any number of components of a workload can be moved into AWS, but it is the customer's responsibility to ensure that the entire workload remains compliant with various certifications and third-party attestations.

7. You are creating a High-Performance Computing (HPC) cluster and need very low latency and high bandwidth between instances. What combination of the following will allow this? (Choose 3 answers) A. Use an instance type with 10 Gbps network performance. B. Put the instances in a placement group. C. Use Dedicated Instances. D. Enable enhanced networking on the instances. E. Use Reserved Instances.

7. A, B, D. Dedicated Instances and Reserved Instances are tenancy and purchasing options; they have nothing to do with network performance.

7. Which of the following are based on temporary security tokens? (Choose 2 answers) A. Amazon EC2 roles B. MFA C. Root user D. Federation

7. A, D. Amazon EC2 roles provide a temporary token to applications running on the instance; federation maps policies to identities from other sources via temporary tokens.

7. How can you secure an Amazon ElastiCache cluster? (Choose 3 answers) A. Change the Memcached root password. B. Restrict Application Programming Interface (API) actions using AWS Identity and Access Management (IAM) policies. C. Restrict network access using security groups. D. Restrict network access using a network Access Control List (ACL).

7. B, C, D. Limit access at the network level using security groups or network ACLs, and limit infrastructure changes using IAM.

7. Which of the following statements best describes an Availability Zone? A. Each Availability Zone consists of a single discrete data center with redundant power and networking/connectivity. B. Each Availability Zone consists of multiple discrete data centers with redundant power and networking/connectivity. C. Each Availability Zone consists of multiple discrete regions, each with a single data center with redundant power and networking/connectivity. D. Each Availability Zone consists of multiple discrete data centers with shared power and redundant networking/connectivity.

7. B. An Availability Zone consists of multiple discrete data centers, each with its own redundant power and networking/connectivity; therefore, answer B is correct.

7. How is data stored in Amazon Simple Storage Service (Amazon S3) for high durability? A. Data is automatically replicated to other regions. B. Data is automatically replicated within a region. C. Data is replicated only if versioning is enabled on the bucket. D. Data is automatically backed up on tape and restored if needed.

7. B. Data is automatically replicated within a region. Replication to other regions and versioning are optional. Amazon S3 data is not backed up to tape.

7. Which Amazon Relational Database Service (Amazon RDS) database engines support read replicas? A. Microsoft SQL Server and Oracle B. MySQL, MariaDB, PostgreSQL, and Aurora C. Aurora, Microsoft SQL Server, and Oracle D. MySQL and PostgreSQL

7. B. Read replicas are supported by MySQL, MariaDB, PostgreSQL, and Aurora.

7. Which DNS records are commonly used to stop email spoofing and spam? A. MX records B. SPF records C. A records D. C names

7. B. SPF records are used to verify authorized senders of mail from your domain.

7. You are a solutions architect who is working for a mobile application company that wants to use Amazon Simple Workflow Service (Amazon SWF) for their new takeout ordering application. They will have multiple workflows that will need to interact. What should you advise them to do in structuring the design of their Amazon SWF environment? A. Use multiple domains, each containing a single workflow, and design the workflows to interact across the different domains. B. Use a single domain containing multiple workflows. In this manner, the workflows will be able to interact. C. Use a single domain with a single workflow and collapse all activities to within this single workflow. D. Workflows cannot interact with each other; they would be better off using Amazon Simple Queue Service (Amazon SQS) and Amazon Simple Notification Service (Amazon SNS) for their application.

7. B. Use a single domain with multiple workflows. Workflows within separate domains cannot interact.

7. Which of the following are AWS Key Management Service (AWS KMS) keys that will never exit AWS unencrypted? A. AWS KMS data keys B. Envelope encryption keys C. AWS KMS Customer Master Keys (CMKs) D. A and C

7. C. AWS KMS CMKs are the fundamental resources that AWS KMS manages. CMKs can never leave AWS KMS unencrypted, but data keys can.

7. Your e-commerce application provides daily and ad hoc reporting to various business units on customer purchases. This is resulting in an extremely high level of read traffic to your MySQL Amazon Relational Database Service (Amazon RDS) instance. What can you do to scale up read traffic without impacting your database's performance? A. Increase the allocated storage for the Amazon RDS instance. B. Modify the Amazon RDS instance to be a Multi-AZ deployment. C. Create a read replica for an Amazon RDS instance. D. Change the Amazon RDS instance DB engine version.

7. C. Amazon RDS read replicas provide enhanced performance and durability for Amazon RDS instances. This replication feature makes it easy to scale out elastically beyond the capacity constraints of a single Amazon RDS instance for read-heavy database workloads. You can create one or more replicas of a given source Amazon RDS instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput.

7. You create a new VPC in US-East-1 and provision three subnets inside this Amazon VPC. Which of the following statements is true? A. By default, these subnets will not be able to communicate with each other; you will need to create routes. B. All subnets are public by default. C. All subnets will be able to communicate with each other by default. D. Each subnet will have identical CIDR blocks.

7. C. When you provision an Amazon VPC, all subnets can communicate with each other by default.

7. Your company provides an online photo sharing service. The development team is looking for ways to deliver image files with the lowest latency to end users so the website content is delivered with the best possible performance. What service can help speed up distribution of these image files to end users around the world? A. Amazon Elastic Compute Cloud (Amazon EC2) B. Amazon Route 53 C. AWS Storage Gateway D. Amazon CloudFront

7. D. Amazon CloudFront is a web service that provides a CDN to speed up distribution of your static and dynamic web content—for example, .html, .css, .php, image, and media files—to end users. Amazon CloudFront delivers content through a worldwide network of edge locations. Amazon EC2, Amazon Route 53, and AWS Storage Gateway do not provide CDN services that are required to meet the needs for the photo sharing service.

7. You create an Auto Scaling group in a new region that is configured with a minimum size value of 10, a maximum size value of 100, and a desired capacity value of 50. However, you notice that 30 of the Amazon Elastic Compute Cloud (Amazon EC2) instances within the Auto Scaling group fail to launch. Which of the following is the cause of this behavior? A. You cannot define an Auto Scaling group larger than 20. B. The Auto Scaling group maximum value cannot be more than 20. C. You did not attach an Elastic Load Balancing load balancer to the Auto Scaling group. D. You have not raised your default Amazon EC2 capacity (20) for the new region.

7. D. The default Amazon EC2 instance limit for all regions is 20.

7. Which of the following describes the scheme used by an Amazon Redshift cluster leveraging AWS Key Management Service (AWS KMS) to encrypt data-at-rest? A. Amazon Redshift uses a one-tier, key-based architecture for encryption. B. Amazon Redshift uses a two-tier, key-based architecture for encryption. C. Amazon Redshift uses a three-tier, key-based architecture for encryption. D. Amazon Redshift uses a four-tier, key-based architecture for encryption.

7. D. When you choose AWS KMS for key management with Amazon Redshift, there is a four-tier hierarchy of encryption keys. These keys are the master key, a cluster key, a database key, and data encryption keys.

8. Your security team is very concerned about the vulnerability of the IAM administrator user accounts (the accounts used to configure all IAM features and accounts). What steps can be taken to lock down these accounts? (Choose 3 answers) A. Add multi-factor authentication (MFA) to the accounts. B. Limit logins to a particular U.S. state. C. Implement a password policy on the AWS account. D. Apply a source IP address condition to the policy that only grants permissions when the user is on the corporate network. E. Add a CAPTCHA test to the accounts.

8. A, C, D. Neither B nor E is a feature supported by IAM.

8. With regard to vulnerability scans and threat assessments of the AWS platform, which of the following statements are true? (Choose 2 answers) A. AWS regularly performs scans of public-facing endpoint IP addresses for vulnerabilities. B. Scans performed by AWS include customer instances. C. AWS security notifies the appropriate parties to remediate any identified vulnerabilities. D. Customers can perform their own scans at any time without advance notice.

8. A, C. AWS regularly scans public-facing, non-customer endpoint IP addresses and notifies appropriate parties. AWS does not scan customer instances, and customers must request the ability to perform their own scans in advance, therefore answers A and C are correct.

8. Your company runs an Amazon Elastic Compute Cloud (Amazon EC2) instance periodically to perform a batch processing job on a large and growing filesystem. At the end of the batch job, you shut down the Amazon EC2 instance to save money but need to persist the filesystem on the Amazon EC2 instance from the previous batch runs. What AWS Cloud service can you leverage to meet these requirements? A. Amazon Elastic Block Store (Amazon EBS) B. Amazon DynamoDB C. Amazon Glacier D. AWS CloudFormation

8. A. Amazon EBS provides persistent block-level storage volumes for use with Amazon EC2 instances on the AWS Cloud. Amazon DynamoDB, Amazon Glacier, and AWS CloudFormation do not provide persistent block-level storage for Amazon EC2 instances. Amazon DynamoDB provides managed NoSQL databases. Amazon Glacier provides low-cost archival storage. AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources.

8. You want to host multiple Hypertext Transfer Protocol Secure (HTTPS) websites on a fleet of Amazon EC2 instances behind an Elastic Load Balancing load balancer with a single X.509 certificate. How must you configure the Secure Sockets Layer (SSL) certificate so that clients connecting to the load balancer are not presented with a warning when they connect? A. Create one SSL certificate with a Subject Alternative Name (SAN) value for each website name. B. Create one SSL certificate with the Server Name Indication (SNI) value checked. C. Create multiple SSL certificates with a SAN value for each website name. D. Create SSL certificates for each Availability Zone with a SAN value for each website name.

8. A. An SSL certificate must specify the name of the website either in the subject name or as a value in the SAN extension of the certificate in order for connecting clients not to receive a warning.

8. Your website is hosted on a fleet of web servers that are load balanced across multiple Availability Zones using an Elastic Load Balancer (ELB). What type of record set in Amazon Route 53 can be used to point myawesomeapp.com to your website? A. Type A Alias resource record set B. MX record set C. TXT record set D. CNAME record set

8. A. An alias resource record set can point to an ELB. You cannot create a CNAME record at the top node of a Domain Name Service (DNS) namespace, also known as the zone apex, as is the case in this example. Alias resource record sets can save you time because Amazon Route 53 automatically recognizes changes in the resource record sets to which the alias resource record set refers.

8. Your team is building an order processing system that will span multiple Availability Zones. During testing, the team wanted to test how the application will react to a database failover. How can you enable this type of test? A. Force a Multi-AZ failover from one Availability Zone to another by rebooting the primary instance using the Amazon RDS console. B. Terminate the DB instance, and create a new one. Update the connection string. C. Create a support case asking for a failover. D. It is not possible to test a failover.

8. A. You can force a failover from one Availability Zone to another by rebooting the primary instance in the AWS Management Console. This is often how people test a failover in the real world. There is no need to create a support case.
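
If you want to script such a test, a minimal boto3 sketch (the instance identifier is a placeholder) would be:

```python
import boto3

rds = boto3.client("rds")

# Reboot the primary with ForceFailover=True to simulate an AZ failure;
# RDS promotes the standby and updates the DB endpoint automatically.
rds.reboot_db_instance(
    DBInstanceIdentifier="orders-db",  # placeholder
    ForceFailover=True,
)
```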

8. How many IGWs can you attach to an Amazon VPC at any one time? A. 1 B. 2 C. 3 D. 4

8. A. You may only have one IGW for each Amazon VPC.

8. You are rolling out A/B test versions of a web application to see which version results in the most sales. You need 10 percent of your traffic to go to version A, 10 percent to go to version B, and the rest to go to your current production version. Which routing policy should you choose to achieve this? A. Simple routing B. Weighted routing C. Geolocation routing D. Failover routing

8. B. Weighted routing would best achieve this objective because it allows you to specify which percentage of traffic is directed to each endpoint.
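
As an illustrative boto3 sketch (the hosted zone ID, record name, and IP addresses are placeholders), weights of 1/1/8 yield the 10/10/80 percent split described in the question:

```python
import boto3

r53 = boto3.client("route53")

def weighted_record(identifier, weight, ip):
    """One weighted A record; traffic share = weight / sum of all weights."""
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.example.com",
            "Type": "A",
            "SetIdentifier": identifier,
            "Weight": weight,
            "TTL": 60,
            "ResourceRecords": [{"Value": ip}],
        },
    }

r53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",  # placeholder
    ChangeBatch={"Changes": [
        weighted_record("version-a", 1, "203.0.113.10"),
        weighted_record("version-b", 1, "203.0.113.11"),
        weighted_record("production", 8, "203.0.113.12"),
    ]},
)
```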

8. You are working on a mobile gaming application and are building the leaderboard feature to track the top scores across millions of users. Which AWS service is best suited for this use case? A. Amazon Redshift B. Amazon ElastiCache using Memcached C. Amazon ElastiCache using Redis D. Amazon Simple Storage Service (S3)

8. C. Amazon ElastiCache with Redis provides native functions that simplify the development of leaderboards. With Memcached, it is more difficult to sort and rank large datasets. Amazon Redshift and Amazon S3 are not designed for high volumes of small reads and writes, typical of a mobile game.

8. Which Amazon Elastic Compute Cloud (Amazon EC2) feature ensures that your instances will not share a physical host with instances from any other AWS customer? A. Amazon Virtual Private Cloud (VPC) B. Placement groups C. Dedicated Instances D. Reserved Instances

8. C. Dedicated Instances will not share hosts with other accounts.

8. Based on the following Amazon Simple Storage Service (Amazon S3) URL, which one of the following statements is correct? https://bucket1.abc.com.s3.amazonaws.com/folderx/myfile.doc A. The object "myfile.doc" is stored in the folder "folderx" in the bucket "bucket1.abc.com." B. The object "myfile.doc" is stored in the bucket "bucket1.abc.com." C. The object "folderx/myfile.doc" is stored in the bucket "bucket1.abc.com." D. The object "myfile.doc" is stored in the bucket "bucket1."

8. C. In a URL, the bucket name precedes the string "s3.amazonaws.com/," and the object key is everything after that. There is no folder structure in Amazon S3.

8. Which cryptographic method is used by AWS Key Management Service (AWS KMS) to encrypt data? A. Password-based encryption B. Asymmetric C. Shared secret D. Envelope encryption

8. D. AWS KMS uses envelope encryption to protect data. AWS KMS creates a data key, encrypts it under a Customer Master Key (CMK), and returns plaintext and encrypted versions of the data key to you. You use the plaintext key to encrypt data and store the encrypted key alongside the encrypted data. You can retrieve a plaintext data key only if you have the encrypted data key and you have permission to use the corresponding master key.
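
A minimal boto3 sketch of the envelope-encryption workflow (the key alias is a placeholder):

```python
import boto3

kms = boto3.client("kms")

# Ask KMS for a data key under a CMK. KMS returns the key both in
# plaintext (use it locally, then discard it) and encrypted under the
# CMK (store it alongside your ciphertext).
resp = kms.generate_data_key(KeyId="alias/app-master-key", KeySpec="AES_256")
plaintext_key = resp["Plaintext"]        # encrypt your data with this
encrypted_key = resp["CiphertextBlob"]   # persist next to the encrypted data

# Later: recover the plaintext data key, permissions on the CMK permitting.
plaintext_again = kms.decrypt(CiphertextBlob=encrypted_key)["Plaintext"]
assert plaintext_again == plaintext_key
```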

8. Which of the following Elastic Load Balancing options ensure that the load balancer determines which cipher is used for a Secure Sockets Layer (SSL) connection? A. Client Server Cipher Suite B. Server Cipher Only C. First Server Cipher D. Server Order Preference

8. D. Elastic Load Balancing supports the Server Order Preference option for negotiating connections between a client and a load balancer. During the SSL connection negotiation process, the client and the load balancer present a list of ciphers and protocols that they each support, in order of preference. By default, the first cipher on the client's list that matches any one of the load balancer's ciphers is selected for the SSL connection. If the load balancer is configured to support Server Order Preference, then the load balancer selects the first cipher in its list that is in the client's list of ciphers. This ensures that the load balancer determines which cipher is used for SSL connection. If you do not enable Server Order Preference, the order of ciphers presented by the client is used to negotiate connections between the client and the load balancer.

9. Which AWS service records Application Program Interface (API) calls made on your account and delivers log files to your Amazon Simple Storage Service (Amazon S3) bucket? A. AWS CloudTrail B. Amazon CloudWatch C. Amazon Kinesis D. AWS Data Pipeline

9. A. AWS CloudTrail records important information about each API call, including the name of the API, the identity of the caller, the time of the API call, the request parameters, and the response elements returned by the AWS Cloud service.

9. You have built a large web application that uses Amazon ElastiCache using Memcached to store frequent query results. You plan to expand both the web fleet and the cache fleet multiple times over the next year to accommodate increased user traffic. How do you minimize the amount of changes required when a scaling event occurs? A. Configure AutoDiscovery on the client side B. Configure AutoDiscovery on the server side C. Update the configuration file each time a new cache node is added D. Use an Elastic Load Balancer to proxy the requests

9. A. When the clients are configured to use AutoDiscovery, they can discover new cache nodes as they are added or removed. AutoDiscovery must be configured on each client and is not active server side. Updating the configuration file each time will be very difficult to manage. Using an Elastic Load Balancer is not recommended for this scenario.

9. You want to grant the individuals on your network team the ability to fully manipulate Amazon EC2 instances. Which of the following accomplish this goal? (Choose 2 answers) A. Create a new policy allowing EC2:* actions, and name the policy NetworkTeam. B. Assign the managed policy, EC2FullAccess, to a group named NetworkTeam, and assign all the team members' IAM user accounts to that group. C. Create a new policy that grants EC2:* actions on all resources, and assign that policy to each individual's IAM user account on the network team. D. Create a NetworkTeam IAM group, and have each team member log in to the AWS Management Console using the user name/password for the group.

9. B, C. Access requires an appropriate policy associated with a principal. Response A is merely a policy with no principal, and response D is not a principal as IAM groups do not have user names and passwords. Response B is the best solution; response C will also work but it is much harder to manage.
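
A hedged boto3 sketch of answer B (the policy, group, and user names are placeholders):

```python
import json
import boto3

iam = boto3.client("iam")

# One managed policy granting ec2:* on all resources...
policy = iam.create_policy(
    PolicyName="NetworkTeamEC2FullAccess",  # placeholder
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Allow", "Action": "ec2:*", "Resource": "*"}],
    }),
)

# ...attached to a group, with each team member added to that group.
iam.create_group(GroupName="NetworkTeam")
iam.attach_group_policy(
    GroupName="NetworkTeam",
    PolicyArn=policy["Policy"]["Arn"],
)
iam.add_user_to_group(GroupName="NetworkTeam", UserName="jdoe")  # per member
```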

9. Which of the following are true of instance stores? (Choose 2 answers) A. Automatic backups B. Data is lost when the instance stops. C. Very high IOPS D. Charge is based on the total amount of storage provisioned.

9. B, C. Instance stores are low-durability, high-IOPS storage that is included for free with the hourly cost of an instance.

9. Which of the following best describes the risk and compliance communication responsibilities of customers to AWS? A. AWS and customers both communicate their security and control environment information to each other at all times. B. AWS publishes information about the AWS security and control practices online, and directly to customers under NDA. Customers do not need to communicate their use and configurations to AWS. C. Customers communicate their use and configurations to AWS at all times. AWS does not communicate AWS security and control practices to customers for security reasons. D. Both customers and AWS keep their security and control practices entirely confidential and do not share them in order to ensure the greatest security for all parties.

9. B. AWS publishes information publicly online and directly to customers under NDA, but customers are not required to share their use and configuration information with AWS, therefore answer B is correct.

9. You are designing a new application, and you need to ensure that the components of your application are not tightly coupled. You are trying to decide between the different AWS Cloud services to use to achieve this goal. Your requirements are that messages between your application components may not be delivered more than once, tasks must be completed in either a synchronous or asynchronous fashion, and there must be some form of application logic that decides what to do when tasks have been completed. What application service should you use? A. Amazon Simple Queue Service (Amazon SQS) B. Amazon Simple Workflow Service (Amazon SWF) C. Amazon Simple Storage Service (Amazon S3) D. Amazon Simple Email Service (Amazon SES)

9. B. Amazon SWF would best serve your purpose in this scenario because it helps developers build, run, and scale background jobs that have parallel or sequential steps. You can think of Amazon SWF as a fully-managed state tracker and task coordinator in the Cloud.

9. To have a record of who accessed your Amazon Simple Storage Service (Amazon S3) data and from where, you should do what? A. Enable versioning on the bucket. B. Enable website hosting on the bucket. C. Enable server access logs on the bucket. D. Create an AWS Identity and Access Management (IAM) bucket policy. E. Enable Amazon CloudWatch logs.

9. C. Amazon S3 server access logs store a record of what requestor accessed the objects in your bucket, including the requesting IP address.
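
A minimal boto3 sketch of answer C (bucket names and prefix are placeholders; the target bucket must permit the S3 log delivery mechanism to write to it):

```python
import boto3

s3 = boto3.client("s3")

# Deliver access logs for "data-bucket" into "log-bucket" under access/.
s3.put_bucket_logging(
    Bucket="data-bucket",  # placeholder
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "log-bucket",  # placeholder
            "TargetPrefix": "access/",
        }
    },
)
```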

9. What AWS Cloud service provides a logically isolated section of the AWS Cloud where organizations can launch AWS resources in a virtual network that they define? A. Amazon Simple Workflow Service (Amazon SWF) B. Amazon Route 53 C. Amazon Virtual Private Cloud (Amazon VPC) D. AWS CloudFormation

9. C. Amazon VPC lets organizations provision a logically isolated section of the AWS Cloud where they can launch AWS resources in a virtual network that they define. Amazon SWF, Amazon Route 53, and AWS CloudFormation do not provide a virtual network. Amazon SWF helps developers build, run, and scale background jobs that have parallel or sequential steps. Amazon Route 53 provides a highly available and scalable cloud Domain Name System (DNS) web service. AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources.

9. Which technology does Amazon WorkSpaces use to provide data security? A. Secure Sockets Layer (SSL)/Transport Layer Security (TLS) B. Advanced Encryption Standard (AES)-256 C. PC-over-IP (PCoIP) D. AES-128

9. C. Amazon WorkSpaces uses PCoIP, which provides an interactive video stream without transmitting actual data.

9. Your web application front end consists of multiple Amazon Elastic Compute Cloud (Amazon EC2) instances behind an Elastic Load Balancing load balancer. You have configured the load balancer to perform health checks on these Amazon EC2 instances. If an instance fails to pass health checks, which statement will be true? A. The instance is replaced automatically by the load balancer. B. The instance is terminated automatically by the load balancer. C. The load balancer stops sending traffic to the instance that failed its health check. D. The instance is quarantined by the load balancer for root cause analysis.

9. C. When Amazon EC2 instances fail the requisite number of consecutive health checks, the load balancer stops sending traffic to the Amazon EC2 instance.

9. You need a secure way to distribute your AWS credentials to an application running on Amazon Elastic Compute Cloud (Amazon EC2) instances in order to access supplementary AWS Cloud services. What approach provides your application access to use short-term credentials for signing requests while protecting those credentials from other users? A. Add your credentials to the UserData parameter of each Amazon EC2 instance. B. Use a configuration file to store your access and secret keys on the Amazon EC2 instances. C. Specify your access and secret keys directly in your application. D. Provision the Amazon EC2 instances with an instance profile that has the appropriate privileges.

9. D. An instance profile is a container for an AWS Identity and Access Management (IAM) role that you can use to pass role information to an Amazon EC2 instance when the instance starts. The IAM role should have a policy attached that only allows access to the AWS Cloud services necessary to perform its function.
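
As a rough boto3 sketch (the profile name and instance ID are placeholders), attaching a profile and then relying on the SDK's default credential chain looks like this:

```python
import boto3

ec2 = boto3.client("ec2")

# Attach an instance profile (which wraps an IAM role) to a running instance.
ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": "app-server-profile"},  # placeholder
    InstanceId="i-0123456789abcdef0",                   # placeholder
)

# On the instance itself, the SDK fetches short-term role credentials from
# instance metadata automatically -- no keys in code, files, or user data.
s3 = boto3.client("s3")  # no access/secret key arguments needed
```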

9. You are a system administrator whose company has moved its production database to AWS. Your company monitors its estate using Amazon CloudWatch, which sends alarms using Amazon Simple Notification Service (Amazon SNS) to your mobile phone. One night, you get an alert that your primary Amazon Relational Database Service (Amazon RDS) Instance has gone down. You have Multi-AZ enabled on this instance. What should you do to ensure the failover happens quickly? A. Update your Domain Name System (DNS) to point to the secondary instance's new IP address, forcing your application to fail over to the secondary instance. B. Connect to your server using Secure Shell (SSH) and update your connection strings so that your application can communicate to the secondary instance instead of the failed primary instance. C. Take a snapshot of the secondary instance and create a new instance using this snapshot, then update your connection string to point to the new instance. D. No action is necessary. Your connection string points to the database endpoint, and AWS automatically updates this endpoint to point to your secondary instance.

9. D. Monitor the environment while Amazon RDS attempts to recover automatically. AWS will update the DB endpoint to point to the secondary instance automatically.

9. Which DNS record must all zones have by default? A. SPF B. TXT C. MX D. SOA

9. D. The Start of Authority (SOA) record defines the start of a zone; therefore, all zones must have an SOA record by default.

*What workloads can you deploy using Elastic Beanstalk?* (Choose two.) * A static web site * Storing data for data lake for big data processing * A long-running job that runs overnight * A web application

*A static web site* *A web application* You can deploy a web application or a static web site using Elastic Beanstalk. Elastic Beanstalk can't be used for storing a data lake's data. If you have a long-running job that runs overnight, you can use AWS Batch.

A company is storing data on Amazon Simple Storage Service (S3). The company's security policy mandates that data is encrypted at rest. Which of the following methods can achieve this? (Choose three.) A. Use Amazon S3 server-side encryption with AWS Key Management Service managed keys. B. Use Amazon S3 server-side encryption with customer-provided keys. C. Use Amazon S3 server-side encryption with EC2 key pair. D. Use Amazon S3 bucket policies to restrict access to the data at rest. E. Encrypt the data on the client-side before ingesting to Amazon S3 using their own master key. F. Use SSL to encrypt the data while in transit to Amazon S3.

A. Use Amazon S3 server-side encryption with AWS Key Management Service managed keys. B. Use Amazon S3 server-side encryption with customer-provided keys. E. Encrypt the data on the client-side before ingesting to Amazon S3 using their own master key.

What is the default maximum number of MFA devices in use per AWS account (at the root account level)? A. 1 B. 5 C. 15 D. 10

A. 1

In the Amazon RDS which uses the SQL Server engine, what is the maximum size for a Microsoft SQL Server DB Instance with SQL Server Express edition? A. 10 GB per DB B. 100 GB per DB C. 2 TB per DB D. 1TB per DB

A. 10 GB per DB

How many types of block devices does Amazon EC2 support? A. 2 B. 3 C. 4 D. 1

A. 2

MySQL installations default to port _____. A. 3306 B. 443 C. 80 D. 1158

A. 3306

Using Amazon CloudWatch's Free Tier, what is the frequency of metric updates which you receive? A. 5 minutes B. 500 milliseconds. C. 30 seconds D. 1 minute

A. 5 minutes

What is the durability of S3 RRS? A. 99.99% B. 99.95% C. 99.995% D. 99.999999999%

A. 99.99%

When you put objects in Amazon S3, what is the indication that an object was successfully stored? A. A HTTP 200 result code and MD5 checksum, taken together, indicate that the operation was successful. B. Amazon S3 is engineered for 99.999999999% durability. Therefore there is no need to confirm that data was inserted. C. A success code is inserted into the S3 object metadata. D. Each S3 account has a special bucket named _s3_logs. Success codes are written to this bucket with a timestamp and checksum.

A. A HTTP 200 result code and MD5 checksum, taken together, indicate that the operation was successful.

A company is preparing to give AWS Management Console access to developers. Company policy mandates identity federation and role-based access control. Roles are currently assigned using groups in the corporate Active Directory. What combination of the following will give developers access to the AWS console? (Select 2) A. AWS Directory Service AD Connector B. AWS Directory Service Simple AD C. AWS Identity and Access Management groups D. AWS Identity and Access Management roles E. AWS Identity and Access Management users

A. AWS Directory Service AD Connector D. AWS Identity and Access Management roles

Which of the following services natively encrypts data at rest within an AWS region? (Choose two.) A. AWS Storage Gateway B. Amazon DynamoDB C. Amazon CloudFront D. Amazon Glacier E. Amazon Simple Queue Service

A. AWS Storage Gateway D. Amazon Glacier

HTTP Query-based requests are HTTP requests that use the HTTP verb GET or POST and a Query parameter named _____. A. Action B. Value C. Reset D. Retrieve

A. Action

Select the correct set of options. The initial settings for the default security group are: A. Allow no inbound traffic, Allow all outbound traffic and Allow instances associated with this security group to talk to each other B. Allow all inbound traffic, Allow no outbound traffic and Allow instances associated with this security group to talk to each other C. Allow no inbound traffic, Allow all outbound traffic and Does NOT allow instances associated with this security group to talk to each other D. Allow all inbound traffic, Allow all outbound traffic and Does NOT allow instances associated with this security group to talk to each other

A. Allow no inbound traffic, Allow all outbound traffic and Allow instances associated with this security group to talk to each other

What is one key difference between an Amazon EBS-backed and an instance-store backed instance? A. Amazon EBS-backed instances can be stopped and restarted. B. Instance-store backed instances can be stopped and restarted. C. Auto scaling requires using Amazon EBS-backed instances. D. Virtual Private Cloud requires EBS backed instances.

A. Amazon EBS-backed instances can be stopped and restarted.

You are deploying an application to track GPS coordinates of delivery trucks in the United States. Coordinates are transmitted from each delivery truck once every three seconds. You need to design an architecture that will enable real-time processing of these coordinates from multiple consumers. Which service should you use to implement data ingestion? A. Amazon Kinesis B. AWS Data Pipeline C. Amazon AppStream D. Amazon Simple Queue Service

A. Amazon Kinesis

Which of the following is a durable key-value store? A. Amazon Simple Storage Service B. Amazon Simple Workflow Service C. Amazon Simple Queue Service D. Amazon Simple Notification Service

A. Amazon Simple Storage Service

Which Amazon service can I use to define a virtual network that closely resembles a traditional data center? A. Amazon VPC B. Amazon ServiceBus C. Amazon EMR D. Amazon RDS

A. Amazon VPC

You are tasked with moving a legacy application from a virtual machine running inside your datacenter to an Amazon VPC. Unfortunately, this app requires access to a number of on-premises services, and no one who configured the app still works for your company. Even worse, there's no documentation for it. What will allow the application running inside the VPC to reach back and access its internal dependencies without being reconfigured? (Choose three.) A. An AWS Direct Connect link between the VPC and the network housing the internal services. B. An Internet Gateway to allow a VPN connection. C. An Elastic IP address on the VPC instance D. An IP address space that does not conflict with the one on-premises E. Entries in Amazon Route 53 that allow the instance to resolve its dependencies' IP addresses F. A VM Import of the current virtual machine

A. An AWS Direct Connect link between the VPC and the network housing the internal services. D. An IP address space that does not conflict with the one on-premises F. A VM Import of the current virtual machine

Which of the following statements are true about Amazon Route 53 resource records? Choose 2 answers A. An Alias record can map one DNS name to another Amazon Route 53 DNS name. B. A CNAME record can be created for your zone apex. C. An Amazon Route 53 CNAME record can point to any DNS record hosted anywhere. D. TTL can be set for an Alias record in Amazon Route 53. E. An Amazon Route 53 Alias record can point to any DNS record hosted anywhere.

A. An Alias record can map one DNS name to another Amazon Route 53 DNS name. C. An Amazon Route 53 CNAME record can point to any DNS record hosted anywhere.

What does Amazon Elastic Beanstalk provide? A. An application container on top of Amazon Web Services. B. A scalable storage appliance on top of Amazon Web Services. C. A scalable cluster of EC2 instances. D. A service by this name doesn't exist.

A. An application container on top of Amazon Web Services.

EBS Snapshots occur _____ A. Asynchronously B. Synchronously C. Weekly

A. Asynchronously

Regarding the attaching of ENI to an instance, what does 'warm attach' refer to? A. Attaching an ENI to an instance when it is stopped. B. This question doesn't make sense. C. Attaching an ENI to an instance when it is running D. Attaching an ENI to an instance during the launch process

A. Attaching an ENI to an instance when it is stopped.

What is the type of monitoring data (for Amazon EBS volumes) which is available automatically in 5-minute periods at no charge called? A. Basic B. Primary C. Detailed D. Local

A. Basic

What are the four levels of AWS Premium Support? A. Basic, Developer, Business, Enterprise B. Basic, Startup, Business, Enterprise C. Free, Bronze, Silver, Gold D. All support is free

A. Basic, Developer, Business, Enterprise

What is the name of the licensing model in which you can use your existing Oracle Database licenses to run Oracle deployments on Amazon RDS? A. Bring Your Own License B. Role Based License C. Enterprise License D. License Included

A. Bring Your Own License

How can the domain's zone apex, for example, "myzoneapexdomain.com", be pointed towards an Elastic Load Balancer? A. By using an Amazon Route 53 Alias record B. By using an AAAA record C. By using an Amazon Route 53 CNAME record D. By using an A record

A. By using an Amazon Route 53 Alias record
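
A hedged boto3 sketch of the Alias record (both zone IDs and the ELB DNS name are placeholders; the AliasTarget zone ID is the load balancer's own region-specific zone ID, not your hosted zone's):

```python
import boto3

r53 = boto3.client("route53")

# Alias the zone apex to an ELB; unlike a CNAME, this is valid at the apex.
r53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",  # placeholder: your hosted zone
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "myzoneapexdomain.com",
            "Type": "A",
            "AliasTarget": {
                "HostedZoneId": "Z1111111EXAMPLE",  # placeholder: ELB's zone
                "DNSName": "my-elb-1234567890.us-east-1.elb.amazonaws.com",
                "EvaluateTargetHealth": False,
            },
        },
    }]},
)
```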

When creation of an EBS snapshot is initiated, but not completed, the EBS volume: A. Can be used while the snapshot is in progress. B. Cannot be detached or attached to an EC2 instance until the snapshot completes C. Can be used in read-only mode while the snapshot is in progress. D. Cannot be used until the snapshot completes.

A. Can be used while the snapshot is in progress.

Which of the following are true regarding AWS CloudTrail? (Choose three.) A. CloudTrail is enabled globally B. CloudTrail is enabled by default C. CloudTrail is enabled on a per-region basis D. CloudTrail is enabled on a per-service basis. E. Logs can be delivered to a single Amazon S3 bucket for aggregation. F. CloudTrail is enabled for all available services within a region. G. Logs can only be processed and delivered to the region in which they are generated.

A. CloudTrail is enabled globally C. CloudTrail is enabled on a per-region basis E. Logs can be delivered to a single Amazon S3 bucket for aggregation. Not all services support CloudTrail, and logs can be delivered to a bucket in any region, which rules out F and G.
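
As a minimal boto3 sketch (trail and bucket names are placeholders; the bucket needs a policy allowing CloudTrail to write to it), a single multi-region trail aggregating into one bucket could be created like this:

```python
import boto3

ct = boto3.client("cloudtrail")

# One trail covering all regions, delivering to a central bucket.
ct.create_trail(
    Name="org-wide-trail",                 # placeholder
    S3BucketName="central-cloudtrail-logs",  # placeholder
    IsMultiRegionTrail=True,
)
ct.start_logging(Name="org-wide-trail")
```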

You are designing an SSL/TLS solution that requires HTTPS clients to be authenticated by the Web server using client certificate authentication. The solution must be resilient. Which of the following options would you consider for configuring the web server infrastructure? (Choose two.) A. Configure ELB with TCP listeners on TCP/443, and place the Web servers behind it. B. Configure your Web servers with EIPs. Place the Web servers in a Route53 Record Set and configure health checks against all Web servers. C. Configure ELB with HTTPS listeners, and place the Web servers behind it. D. Configure your web servers as the origins for a CloudFront distribution. Use custom SSL certificates on your CloudFront distribution.

A. Configure ELB with TCP listeners on TCP/443, and place the Web servers behind it. B. Configure your Web servers with EIPs. Place the Web servers in a Route53 Record Set and configure health checks against all Web servers. Client certificate authentication requires end-to-end TLS, so the load balancer must pass TCP through rather than terminate HTTPS itself.

You are designing a data leak prevention solution for your VPC environment. You want your VPC instances to be able to access software depots and distributions on the Internet for product updates. The depots and distributions are accessible via third-party CDNs by their URLs. You want to explicitly deny any other outbound connections from your VPC instances to hosts on the internet. Which of the following options would you consider? A. Configure a web proxy server in your VPC and enforce URL-based rules for outbound access. Remove default routes. B. Implement security groups and configure outbound rules to only permit traffic to software depots. C. Move all your instances into private VPC subnets, remove default routes from all routing tables, and add specific routes to the software depots and distributions only. D. Implement network access control lists to all specific destinations, with an implicit deny as a rule.

A. Configure a web proxy server in your VPC and enforce URL-based rules for outbound access Remove default routes.

In order to enable encryption at rest using EC2 and Elastic Block Store you need to A. Configure encryption when creating the EBS volume B. Configure encryption using the appropriate Operating Systems file system C. Configure encryption using X.509 certificates D. Mount the EBS volume in to S3 and then encrypt the bucket using a bucket policy.

A. Configure encryption when creating the EBS volume
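
A quick boto3 sketch (the Availability Zone is a placeholder; omitting KmsKeyId falls back to the AWS-managed aws/ebs key):

```python
import boto3

ec2 = boto3.client("ec2")

# Encryption is chosen at creation time; it cannot be toggled later on
# the same volume.
ec2.create_volume(
    AvailabilityZone="us-east-1a",  # placeholder
    Size=100,                       # GiB
    VolumeType="gp2",
    Encrypted=True,
)
```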

You are the new IT architect in a company that operates a mobile sleep-tracking application. When activated at night, the mobile app sends collected data points of 1 kilobyte every 5 minutes to your backend. The backend takes care of authenticating the user and writing the data points into an Amazon DynamoDB table. Every morning, you scan the table to extract and aggregate last night's data on a per-user basis, and store the results in Amazon S3. Users are notified via Amazon SNS mobile push notifications that new data is available, which is parsed and visualized by the mobile app. Currently you have around 100k users who are mostly based out of North America. You have been tasked to optimize the architecture of the backend system to lower cost. What would you recommend? (Choose two.) A. Create a new Amazon DynamoDB table each day and drop the one for the previous day after its data is on Amazon S3. B. Have the mobile app access Amazon DynamoDB directly instead of JSON files stored on Amazon S3. C. Introduce an Amazon SQS queue to buffer writes to the Amazon DynamoDB table and reduce provisioned write throughput. D. Introduce Amazon ElastiCache to allow cache reads from the Amazon DynamoDB table and reduce provisioned read throughput. E. Write data directly into an Amazon Redshift cluster, replacing both Amazon DynamoDB and Amazon S3.

A. Create a new Amazon DynamoDB table each day and drop the one for the previous day after its data is on Amazon S3. D. Introduce Amazon ElastiCache to allow cache reads from the Amazon DynamoDB table and reduce provisioned read throughput.

You currently operate a web application in the AWS US-East region. The application runs on an auto-scaled layer of EC2 instances and an RDS Multi-AZ database. Your IT security compliance officer has tasked you to develop a reliable and durable logging solution to track changes made to your EC2, IAM, and RDS resources. The solution must ensure the integrity and confidentiality of your log data. Which of these solutions would you recommend? A. Create a new CloudTrail trail with one new S3 bucket to store the logs and with the global services option selected. Use IAM roles, S3 bucket policies, and Multi-Factor Authentication (MFA) Delete on the S3 bucket that stores your logs. B. Create a new CloudTrail trail with one new S3 bucket to store the logs. Configure SNS to send log file delivery notifications to your management system. Use IAM roles and S3 bucket policies on the S3 bucket that stores your logs. C. Create a new CloudTrail trail with an existing S3 bucket to store the logs and with the global services option selected. Use S3 ACLs and Multi-Factor Authentication (MFA) Delete on the S3 bucket that stores your logs. D. Create three new CloudTrail trails with three new S3 buckets to store the logs: one for the AWS Management Console, one for AWS SDKs, and one for command line tools. Use IAM roles and S3 bucket policies on the S3 buckets that store your logs.

A. Create a new CloudTrail trail with one new S3 bucket to store the logs and with the global services option selected. Use IAM roles, S3 bucket policies, and Multi-Factor Authentication (MFA) Delete on the S3 bucket that stores your logs.

A company has an AWS account that contains three VPCs (Dev, Test, and Prod) in the same region. Test is peered to both Prod and Dev. All VPCs have non-overlapping CIDR blocks. The company wants to push minor code releases from Dev to Prod to speed up time to market. Which of the following options helps the company accomplish this? A. Create a new peering connection Between Prod and Dev along with appropriate routes. B. Create a new entry to Prod in the Dev route table using the peering connection as the target. C. Attach a second gateway to Dev. Add a new entry in the Prod route table identifying the gateway as the target. D. The VPCs have non-overlapping CIDR blocks in the same account. The route tables contain local routes for all VPCs.

A. Create a new peering connection between Prod and Dev along with appropriate routes. VPC peering connections are not transitive, so Dev cannot reach Prod through Test.

Which of the following items are required to allow an application deployed on an EC2 instance to write data to a DynamoDB table? Assume that no security keys are allowed to be stored on the EC2 instance. (Choose two.) A. Create an IAM Role that allows write access to the DynamoDB table. B. Add an IAM Role to a running EC2 instance. C. Create an IAM User that allows write access to the DynamoDB table. D. Add an IAM User to a running EC2 instance. E. Launch an EC2 Instance with the IAM Role included in the launch configuration.

A. Create an IAM Role that allows write access to the DynamoDB table. B. Add an IAM Role to a running EC2 instance. (Prior to 2017, a role could only be attached at launch, so A and E would have been the correct pair.)

You are building a system to distribute confidential training videos to employees. Using CloudFront, what method could be used to serve content that is stored in S3, but not publicly accessible from S3 directly? A. Create an Origin Access Identity (OAI) for CloudFront and grant access to the objects in your S3 bucket to that OAI. B. Add the CloudFront account security group "amazon-cf/amazon-cf-sg" to the appropriate S3 bucket policy. C. Create an Identity and Access Management (IAM) User for CloudFront and grant access to the objects in your S3 bucket to that IAM User. D. Create a S3 bucket policy that lists the CloudFront distribution ID as the Principal and the target bucket as the Amazon Resource Name (ARN).

A. Create an Origin Access Identity (OAI) for CloudFront and grant access to the objects in your S3 bucket to that OAI.
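
A hedged boto3 sketch of the bucket policy granting read access to the OAI only (the bucket name and OAI ID are placeholders):

```python
import json
import boto3

s3 = boto3.client("s3")

# With this policy in place (and public access otherwise blocked), the
# videos are reachable only through the CloudFront distribution.
s3.put_bucket_policy(
    Bucket="training-videos",  # placeholder
    Policy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": (
                "arn:aws:iam::cloudfront:user/"
                "CloudFront Origin Access Identity E1EXAMPLE123"  # placeholder
            )},
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::training-videos/*",
        }],
    }),
)
```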

By default, EBS volumes that are created and attached to an instance at launch are deleted when that instance is terminated. You can modify this behavior by changing the value of the flag _____ to false when you launch the instance. A. DeleteOnTermination B. RemoveOnDeletion C. RemoveOnTermination D. TerminateOnDeletion

A. DeleteOnTermination
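
A minimal boto3 sketch (the AMI ID is a placeholder, and the root device name depends on the AMI) that sets the flag at launch:

```python
import boto3

ec2 = boto3.client("ec2")

# Keep the root EBS volume after the instance is terminated by setting
# DeleteOnTermination to False in the block device mapping.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    BlockDeviceMappings=[{
        "DeviceName": "/dev/xvda",  # root device name varies by AMI
        "Ebs": {"DeleteOnTermination": False},
    }],
)
```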

Your company is getting ready to do a major public announcement of a social media site on AWS. The website is running on EC2 instances deployed across multiple Availability Zones with a Multi-AZ RDS MySQL Extra Large DB Instance. The site performs a high number of small reads and writes per second and relies on an eventual consistency model. After comprehensive tests you discover that there is read contention on RDS MySQL. Which are the best approaches to meet these requirements? (Choose two.) A. Deploy ElastiCache in-memory cache running in each availability zone B. Implement sharding to distribute load to multiple RDS MySQL instances C. Increase the RDS MySQL Instance size and implement provisioned IOPS D. Add an RDS MySQL read replica in each availability zone

A. Deploy ElastiCache in-memory cache running in each availability zone D. Add an RDS MySQL read replica in each availability zone

You work for a cosmetic company that has its production website on AWS. The site itself is in a two-tier configuration with web servers in the front end and database servers at the back end. The site uses Elastic Load Balancing and Auto Scaling. The databases maintain consistency by replicating changes to each other as and when they occur. This requires the databases to have extremely low latency. Your website needs to be highly redundant and must be designed so that if one availability zone goes offline and Auto Scaling cannot launch new instances in the remaining Availability Zones the site will not go offline. How can the current architecture be enhanced to ensure this? A. Deploy your site in three different AZ's within the same region. Configure the Auto Scaling minimum to handle 50 percent of the peak load per zone. B. Deploy your website in 2 different regions. Configure Route53 with a failover routing policy and set up health checks on the primary site. C. Deploy your site in three different AZ's within the same region. Configure the Auto Scaling minimum to handle 33 percent of the peak load per zone. D. Deploy your website in 2 different regions. Configure Route53 with Weighted Routing. Assign a weight of 25% to region 1 and a weight of 75% to region 2.

A. Deploy your site in three different AZ's within the same region. Configure the Auto Scaling minimum to handle 50 percent of the peak load per zone.

Which route must be added to your routing table in order to allow connections to the Internet from your subnet? A. Destination: 0.0.0.0/0 --> Target: your Internet gateway B. Destination: 192.168.1.257/0 --> Target: your Internet gateway C. Destination: 0.0.0.0/33 --> Target: your virtual private gateway D. Destination: 0.0.0.0/0 --> Target: 0.0.0.0/24 E. Destination: 10.0.0.0/32 --> Target: your virtual private gateway

A. Destination: 0.0.0.0/0 --> Target: your Internet gateway
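
A quick boto3 sketch of answer A (the route table and gateway IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# The default route (0.0.0.0/0) pointing at an Internet gateway is what
# makes a subnet "public".
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",  # placeholder
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId="igw-0123456789abcdef0",     # placeholder
)
```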

After launching an instance that you intend to serve as a NAT (Network Address Translation) device in a public subnet you modify your route tables to have the NAT device be the target of internet bound traffic of your private subnet. When you try and make an outbound connection to the internet from an instance in the private subnet, you are not successful. Which of the following steps could resolve the issue? A. Disabling the Source/Destination Check attribute on the NAT instance B. Attaching an Elastic IP address to the instance in the private subnet C. Attaching a second Elastic Network Interface (ENI) to the NAT instance, and placing it in the private subnet D. Attaching a second Elastic Network Interface (ENI) to the instance in the private subnet, and placing it in the public subnet

A. Disabling the Source/Destination Check attribute on the NAT instance
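
For illustration, a minimal boto3 sketch of disabling the check — the instance ID is a hypothetical placeholder:

```python
import boto3

ec2 = boto3.client("ec2")

# A NAT instance forwards traffic that is neither from nor to itself,
# so the default source/destination check must be disabled.
ec2.modify_instance_attribute(
    InstanceId="i-0123456789abcdef0",  # placeholder NAT instance ID
    SourceDestCheck={"Value": False},
)
```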

What does Amazon EBS stand for? A. Elastic Block Storage. B. Elastic Business Server. C. Elastic Blade Server. D. Elastic Block Store.

D. Elastic Block Store.

If I want an instance to have a public IP address, which IP address should I use? A. Elastic IP Address B. Class B IP Address C. Class A IP Address D. Dynamic IP Address

A. Elastic IP Address

A company is building a two-tier web application to serve dynamic transaction-based content. The data tier is leveraging an Online Transactional Processing (OLTP) database. What services should you leverage to enable an elastic and scalable web tier? A. Elastic Load Balancing, Amazon EC2, and Auto Scaling B. Elastic Load Balancing, Amazon RDS with Multi-AZ, and Amazon S3 C. Amazon RDS with Multi-AZ and Auto Scaling D. Amazon EC2, Amazon DynamoDB, and Amazon S3

A. Elastic Load Balancing, Amazon EC2, and Auto Scaling

Which of the following notification endpoints or clients are supported by Amazon Simple Notification Service? (Choose two.) A. Email B. CloudFront distribution C. File Transfer Protocol D. Short Message Service E. Simple Network Management Protocol

A. Email D. Short Message Service

You deployed your company website using Elastic Beanstalk and you enabled log file rotation to S3. An Elastic MapReduce job periodically analyzes the logs on S3 to build a usage dashboard that you share with your CIO. You recently improved overall performance of the website using CloudFront for dynamic content delivery, with your website as the origin. After this architectural change, the usage dashboard shows that the traffic on your website dropped by an order of magnitude. How do you fix your usage dashboard? A. Enable CloudFront to deliver access logs to S3 and use them as input of the Elastic MapReduce job. B. Turn on CloudTrail and use trail log files on S3 as input of the Elastic MapReduce job C. Change your log collection process to use CloudWatch ELB metrics as input of the Elastic MapReduce job D. Use the Elastic Beanstalk "Rebuild Environment" option to update log delivery to the Elastic MapReduce job. E. Use the Elastic Beanstalk "Restart App server(s)" option to update log delivery to the Elastic MapReduce job.

A. Enable CloudFront to deliver access logs to S3 and use them as input of the Elastic MapReduce job.

What combination of the following options will protect S3 objects from both accidental deletion and accidental overwriting? A. Enable S3 versioning on the bucket. B. Access S3 data using only signed URLs. C. Disable S3 delete using an IAM bucket policy. D. Enable S3 Reduced Redundancy Storage. E. Enable multi-factor authentication (MFA) protected access.

A. Enable S3 versioning on the bucket. E. Enable multi-factor authentication (MFA) protected access.
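
For illustration, a minimal boto3 sketch of enabling both protections — the bucket name and the MFA serial/code are hypothetical placeholders, and enabling MFA Delete via the API must be done with root-account credentials:

```python
import boto3

s3 = boto3.client("s3")

# Versioning protects against accidental overwrites; MFA Delete protects
# against accidental (or malicious) permanent deletion of versions.
s3.put_bucket_versioning(
    Bucket="my-example-bucket",  # placeholder bucket name
    # Format: "<mfa-device-serial> <current-token-code>"
    MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
)
```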

Your company has decided to set up a new AWS account for test and dev purposes. They already use AWS for production, but would like a new account dedicated for test and dev so as to not accidentally break the production environment. You launch an exact replica of your production environment using a CloudFormation template that your company uses in production. However, CloudFormation fails. You use the exact same CloudFormation template in production, so the failure is something to do with your new AWS account. The CloudFormation template is trying to launch 60 new EC2 instances in a single AZ. After some research you discover that the problem is: A. For all new AWS accounts there is a soft limit of 20 EC2 instances per region. You should submit the limit increase form and retry the template after your limit has been increased. B. For all new AWS accounts there is a soft limit of 20 EC2 instances per availability zone. You should submit the limit increase form and retry the template after your limit has been increased. C. You cannot launch more than 20 instances in your default VPC; instead reconfigure the CloudFormation template to provision the instances in a custom VPC. D. Your CloudFormation template is configured to use the parent account and not the new account. Change the account number in the CloudFormation template and relaunch the template.

A. For all new AWS accounts there is a soft limit of 20 EC2 instances per region. You should submit the limit increase form and retry the template after your limit has been increased.

A customer has a single 3-TB volume on-premises that is used to hold a large repository of images and print layout files. This repository is growing at 500 GB a year and must be presented as a single logical volume. The customer is becoming increasingly constrained with their local storage capacity and wants an off-site backup of this data, while maintaining low-latency access to their frequently accessed data. Which AWS Storage Gateway configuration meets the customer requirements? A. Gateway-Cached volumes with snapshots scheduled to Amazon S3 B. Gateway-Stored volumes with snapshots scheduled to Amazon S3 C. Gateway-Virtual Tape Library with snapshots to Amazon S3 D. Gateway-Virtual Tape Library with snapshots to Amazon Glacier

A. Gateway-Cached volumes with snapshots scheduled to Amazon S3

Which of the following is not a valid configuration type for AWS Storage gateway. A. Gateway-accessed volumes B. Gateway-cached volumes C. Gateway-stored volumes D. Gateway-Virtual Tape Library

A. Gateway-accessed volumes

Which of the following instance types are available as Amazon EBS-backed only? (Choose two.) A. General purpose T2 B. General purpose M3 C. Compute-optimized C4 D. Compute-optimized C3 E. Storage-optimized I2

A. General purpose T2 C. Compute-optimized C4

A VPC public subnet is one that: A. Has at least one route in its associated routing table that uses an Internet Gateway (IGW). B. Includes a route in its associated routing table via a Network Address Translation (NAT) instance. C. Has a Network Access Control List (NACL) permitting outbound traffic to 0.0.0.0/0. D. Has the Public Subnet option selected in its configuration.

A. Has at least one route in its associated routing table that uses an Internet Gateway (IGW).

What happens to the I/O operations while you take a database snapshot in a single AZ database? A. I/O operations to the database are suspended for a few minutes while the backup is in progress. B. I/O operations to the database are sent to a Replica (if available) for a few minutes while the backup is in progress. C. I/O operations will be functioning normally D. I/O operations to the database are suspended for an hour while the backup is in progress

A. I/O operations to the database are suspended for a few minutes while the backup is in progress.

When should I choose Provisioned IOPS over Standard RDS storage? A. If you use production online transaction processing (OLTP) workloads. B. If you have batch-oriented workloads C. If you have workloads that are not sensitive to consistent performance

A. If you use production online transaction processing (OLTP) workloads.

You have an EC2 Security Group with several running EC2 instances. You change the Security Group rules to allow inbound traffic on a new port and protocol, and launch several new instances in the same Security Group. The new rules apply: A. Immediately to all instances in the security group. B. Immediately to the new instances only. C. Immediately to the new instances, but old instances must be stopped and restarted before the new rules apply. D. To all instances, but it may take several minutes for old instances to see the changes.

A. Immediately to all instances in the security group.

You are designing an intrusion detection and prevention (IDS/IPS) solution for a customer web application in a single VPC. You are considering the options for implementing IDS/IPS protection for traffic coming from the Internet. Which of the following options would you consider? (Choose two.) A. Implement IDS/IPS agents on each instance running in the VPC B. Configure an instance in each subnet to switch its network interface card to promiscuous mode and analyze network traffic. C. Implement Elastic Load Balancing with SSL listeners in front of the web applications D. Implement a reverse proxy layer in front of web servers and configure IDS/IPS agents on each reverse proxy server.

A. Implement IDS/IPS agents on each instance running in the VPC D. Implement a reverse proxy layer in front of web servers and configure IDS/IPS agents on each reverse proxy server.

Your company policies require encryption of sensitive data at rest. You are considering the possible options for protecting data while storing it at rest on an EBS data volume, attached to an EC2 instance. Which of these options would allow you to encrypt your data at rest? (Choose three.) A. Implement third party volume encryption tools B. Do nothing as EBS volumes are encrypted by default C. Encrypt data inside your applications before storing it on EBS D. Encrypt data using native data encryption drivers at the file system level E. Implement SSL/TLS for all services running on the server

A. Implement third party volume encryption tools C. Encrypt data inside your applications before storing it on EBS D. Encrypt data using native data encryption drivers at the file system level

By definition, a public subnet within a VPC is one that: A. Has at least one route in its routing table that uses an Internet Gateway (IGW). B. Has at least one route in its routing table that routes via a Network Address Translation (NAT) instance. C. Has a Network Access Control List (NACL) permitting outbound traffic to 0.0.0.0/0. D. Has had the public subnet check box ticked when setting up this subnet in the VPC console.

A. Has at least one route in its routing table that uses an Internet Gateway (IGW).

You are a student currently learning about the different AWS services. Your employer asks you to tell him a bit about Amazon's glacier service. Which of the following best describes the use cases for Glacier? A. Infrequently accessed data & data archives B. Hosting active databases C. Replicating Files across multiple availability zones and regions D. Frequently Accessed Data

A. Infrequently accessed data & data archives

Amazon RDS automated backups and DB Snapshots are currently supported for only the ______ storage engine A. InnoDB B. MyISAM

A. InnoDB

What does the AWS Storage Gateway provide? A. Integration of on-premises IT environments with Cloud Storage. B. A direct encrypted connection to Amazon S3. C. A backup solution that provides an on-premises Cloud storage. D. It provides an encrypted SSL endpoint for backups in the Cloud.

A. Integration of on-premises IT environments with Cloud Storage.

Which DNS name can only be resolved within Amazon EC2? A. Internal DNS name B. External DNS name C. Global DNS name D. Private DNS name

A. Internal DNS name

Which of the following are characteristics of a reserved instance? (Choose three.) A. It can be migrated across Availability Zones B. It is specific to an Amazon Machine Image (AMI) C. It can be applied to instances launched by Auto Scaling D. It is specific to an instance Type E. It can be used to lower Total Cost of Ownership (TCO) of a system

A. It can be migrated across Availability Zones C. It can be applied to instances launched by Auto Scaling E. It can be used to lower Total Cost of Ownership (TCO) of a system

What is the Reduced Redundancy option in Amazon S3? A. Less redundancy for a lower cost. B. It doesn't exist in Amazon S3, but in Amazon EBS. C. It allows you to destroy any copy of your files outside a specific jurisdiction. D. It doesn't exist at all

A. Less redundancy for a lower cost.

Which of the following requires a custom CloudWatch metric to monitor? A. Memory use B. CPU use C. Disk read operations D. Network in E. Estimated charges

A. Memory use

Can I move a Reserved Instance from one Region to another? A. No B. Yes C. Only if they are moving into GovCloud D. Only if they are moving to US East from another region

A. No

Can an EBS volume be attached to more than one EC2 instance at the same time? A. No B. Yes. C. Only EC2-optimized EBS volumes. D. Only in read mode.

A. No

Can the string value of 'Key' be prefixed with ":aws:"? A. No B. Only for EC2 not S3 C. Yes D. Only for S3 not EC2

A. No

Does Amazon RDS for SQL Server currently support importing data into the msdb database? A. No B. Yes

A. No

Is the SQL Server Audit feature supported in the Amazon RDS SQL Server engine? A. No B. Yes

A. No

Making your snapshot public shares all snapshot data with everyone. Can the snapshots with AWS Marketplace product codes be made public? A. No B. Yes

A. No

What is the charge for the data transfer incurred in replicating data between your primary and standby? A. No charge. It is free. B. Double the standard data transfer charge C. Same as the standard data transfer charge D. Half of the standard data transfer charge

A. No charge. It is free.

When using consolidated billing there are two account types. What are they? A. Paying account and Linked account B. Parent account and Child account C. Main account and Sub account. D. Main account and Secondary account.

A. Paying account and Linked account

How can software determine the public and private IP addresses of the EC2 instance that it is running on? A. Query the local instance metadata. B. Query the local instance userdata. C. Query the appropriate Amazon CloudWatch metric. D. Use an ipconfig or ifconfig command.

A. Query the local instance metadata.
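
For illustration, a minimal sketch using the IMDSv2 token flow against the documented instance metadata endpoints — plain Python standard library, runnable only on an EC2 instance:

```python
import urllib.request

BASE = "http://169.254.169.254/latest"

# IMDSv2: fetch a session token first, then use it for metadata requests.
token_req = urllib.request.Request(
    f"{BASE}/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(token_req).read().decode()

def metadata(path):
    req = urllib.request.Request(
        f"{BASE}/meta-data/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    return urllib.request.urlopen(req).read().decode()

print(metadata("local-ipv4"))   # private IP of this instance
print(metadata("public-ipv4"))  # public IP (raises HTTPError if none assigned)
```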

You run an ad-supported photo sharing website using S3 to serve photos to visitors of your site. At some point you find out that other sites have been linking to the photos on your site, causing loss to your business. What is an effective method to mitigate this? A. Remove public read access and use signed URLs with expiry dates. B. Use CloudFront distributions for static content. C. Block the IPs of the offending websites in Security Groups. D. Store photos on an EBS volume of the web server.

A. Remove public read access and use signed URLs with expiry dates.
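
For illustration, a minimal boto3 sketch of generating a signed URL — the bucket and key are hypothetical placeholders:

```python
import boto3

s3 = boto3.client("s3")

# With public read removed, each photo is served via a short-lived signed URL,
# so hotlinked copies of the URL stop working once it expires.
url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={"Bucket": "my-photo-bucket", "Key": "photos/cat.jpg"},  # placeholders
    ExpiresIn=300,  # URL expires in 5 minutes
)
print(url)
```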

A user has configured ELB with three instances. The user wants to achieve High Availability as well as redundancy with ELB. Which of the following AWS services helps the user achieve this for ELB? A. Route 53 B. AWS Mechanical Turk C. Auto Scaling D. AWS EMR

A. Route 53

You need to create a simple, holistic check for your system's general availability and uptime. Your system presents itself as an HTTP-speaking API. What is the simplest tool on AWS to achieve this with? A. Route53 Health Checks B. CloudWatch Health Checks C. AWS ELB Health Checks D. EC2 Health Checks

A. Route53 Health Checks

You have a periodic image analysis application that gets some files in input, analyzes them, and for each file writes some data in output to a text file. The number of files in input per day is high and concentrated in a few hours of the day. Currently you have a server on EC2 with a large EBS volume that hosts the input data and the results; it takes almost 20 hours per day to complete the process. What services could be used to reduce the elaboration time and improve the availability of the solution? A. S3 to store I/O files, SQS to distribute elaboration commands to a group of hosts working in parallel, Auto Scaling to dynamically size the group of hosts depending on the length of the SQS queue B. EBS with Provisioned IOPS (PIOPS) to store I/O files, SNS to distribute elaboration commands to a group of hosts working in parallel, Auto Scaling to dynamically size the group of hosts depending on the number of SNS notifications C. S3 to store I/O files, SNS to distribute elaboration commands to a group of hosts working in parallel, Auto Scaling to dynamically size the group of hosts depending on the number of SNS notifications D. EBS with Provisioned IOPS (PIOPS) to store I/O files, SQS to distribute elaboration commands to a group of hosts working in parallel, Auto Scaling to dynamically size the group of hosts depending on the length of the SQS queue

A. S3 to store I/O files, SQS to distribute elaboration commands to a group of hosts working in parallel, Auto Scaling to dynamically size the group of hosts depending on the length of the SQS queue

The Amazon EC2 web service can be accessed using the _____ web services messaging protocol. This interface is described by a Web Services Description Language (WSDL) document. A. SOAP B. DCOM C. CORBA D. XML-RPC

A. SOAP

Because of the extensibility limitations of striped storage attached to Windows Server, Amazon RDS does not currently support increasing storage on a _____ DB Instance. A. SQL Server B. MySQL C. Oracle

A. SQL Server

You are tasked with setting up a Linux bastion host for access to Amazon EC2 instances running in your VPC. Only clients connecting from the corporate external public IP address 72.34.51.100 should have SSH access to the host. Which option will meet the customer requirement? A. Security Group Inbound Rule: Protocol - TCP. Port Range - 22, Source 72.34.51.100/32 B. Security Group Inbound Rule: Protocol - UDP, Port Range - 22, Source 72.34.51.100/32 C. Network ACL Inbound Rule: Protocol - UDP, Port Range - 22, Source 72.34.51.100/32 D. Network ACL Inbound Rule: Protocol - TCP, Port Range-22, Source 72.34.51.100/0

A. Security Group Inbound Rule: Protocol - TCP, Port Range - 22, Source 72.34.51.100/32. A /0 suffix is not correct CIDR notation for a single IP address (a single address is a /32), and UDP is not the correct protocol for SSH.
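
For illustration, a minimal boto3 sketch of adding that rule — the security group ID is a hypothetical placeholder:

```python
import boto3

ec2 = boto3.client("ec2")

# Allow SSH (TCP/22) only from the single corporate IP, expressed as a /32.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder bastion security group ID
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": "72.34.51.100/32"}],
    }],
)
```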

In AWS, which security aspects are the customer's responsibility? (Choose four.) A. Security Group and ACL (Access Control List) settings B. Decommissioning storage devices C. Patch management on the EC2 instance's operating system D. Life-cycle management of IAM credentials E. Controlling physical access to compute resources F. Encryption of EBS (Elastic Block Storage) volumes

A. Security Group and ACL (Access Control List) settings C. Patch management on the EC2 instance's operating system D. Life-cycle management of IAM credentials F. Encryption of EBS (Elastic Block Storage) volumes

What are the valid methodologies for encrypting data on S3? A. Server Side Encryption (SSE)-S3, SSE-C, SSE-KMS or a client library such as Amazon S3 Encryption Client. B. Server Side Encryption (SSE)-S3, SSE-A, SSE-KMS or a client library such as Amazon S3 Encryption Client. C. Server Side Encryption (SSE)-S3, SSE-C, SSE-SSL or a client library such as Amazon S3 Encryption Client. D. Server Side Encryption (SSE)-S3, SSE-C, SSE-SSL or a server library such as Amazon S3 Encryption Client.

A. Server Side Encryption (SSE)-S3, SSE-C, SSE-KMS or a client library such as Amazon S3 Encryption Client.

Which features can be used to restrict access to data in S3? (Choose two.) A. Set an S3 ACL on the bucket or the object. B. Create a CloudFront distribution for the bucket. C. Set an S3 bucket policy. D. Enable IAM Identity Federation E. Use S3 Virtual Hosting

A. Set an S3 ACL on the bucket or the object. C. Set an S3 bucket policy.

You need to configure an Amazon S3 bucket to serve static assets for your public-facing web application. Which methods ensure that all objects uploaded to the bucket are set to public read? (Choose two.) A. Set permissions on the object to public read during upload. B. Configure the bucket ACL to set all objects to public read. C. Configure the bucket policy to set all objects to public read. D. Use AWS Identity and Access Management roles to set the bucket to public read. E. Amazon S3 objects default to public read, so no action is needed.

A. Set permissions on the object to public read during upload. C. Configure the bucket policy to set all objects to public read.
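
For illustration, a minimal boto3 sketch of the bucket-policy approach — the bucket name is a hypothetical placeholder:

```python
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "my-static-assets"  # placeholder bucket name

# Grant anonymous read on every object in the bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicReadGetObject",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
    }],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```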

Your Fortune 500 company has undertaken a TCO analysis evaluating the use of Amazon S3 versus acquiring more hardware. The outcome was that all employees would be granted access to use Amazon S3 for storage of their personal documents. Which of the following will you need to consider so you can set up a solution that incorporates single sign-on from your corporate AD or LDAP directory and restricts access for each user to a designated user folder in a bucket? (Choose three.) A. Setting up a federation proxy or identity provider B. Using AWS Security Token Service to generate temporary tokens C. Tagging each folder in the bucket D. Configuring IAM role E. Setting up a matching IAM user for every user in your corporate directory that needs access to a folder in the bucket

A. Setting up a federation proxy or identity provider B. Using AWS Security Token Service to generate temporary tokens D. Configuring IAM role

_____ embodies the "share-nothing" architecture and essentially involves breaking a large database into several smaller databases. A. Sharding B. Failure recovery C. Federation D. DDL operations

A. Sharding

How many relational database engines does RDS currently support? A. Six: Amazon Aurora, Oracle, Microsoft SQL Server, PostgreSQL, MySQL and MariaDB B. Just two: MySQL and Oracle. C. Five: MySQL, PostgreSQL, MongoDB, Cassandra and SQLite. D. Just one: MySQL.

A. Six: Amazon Aurora, Oracle, Microsoft SQL Server, PostgreSQL, MySQL and MariaDB

You have a distributed application that periodically processes large volumes of data across multiple Amazon EC2 Instances. The application is designed to recover gracefully from Amazon EC2 instance failures. You are required to accomplish this task in the most cost-effective way. Which of the following will meet your requirements? A. Spot Instances B. Reserved instances C. Dedicated instances D. On-Demand instances

A. Spot Instances

What does the command 'ec2-run-instances ami-e3a5408a -n 20 -g appserver' do? A. Start twenty instances as members of appserver group. B. Creates 20 rules in the security group named appserver C. Terminate twenty instances as members of appserver group. D. Start 20 security groups

A. Start twenty instances as members of appserver group.
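
For illustration, a rough boto3 equivalent of that legacy CLI command, assuming an EC2-Classic-style security group referenced by name:

```python
import boto3

ec2 = boto3.client("ec2")

# Roughly equivalent to: ec2-run-instances ami-e3a5408a -n 20 -g appserver
ec2.run_instances(
    ImageId="ami-e3a5408a",
    MinCount=20,
    MaxCount=20,                   # '-n 20'
    SecurityGroups=["appserver"],  # '-g appserver' (group referenced by name)
)
```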

A read-only news reporting site with a combined web and application tier and a database tier that receives large and unpredictable traffic demands must be able to respond to these traffic fluctuations automatically. What AWS services should be used to meet these requirements? A. Stateless instances for the web and application tier synchronized using ElastiCache Memcached in an Auto Scaling group monitored with CloudWatch, and RDS with read replicas. B. Stateful instances for the web and application tier in an Auto Scaling group monitored with CloudWatch, and RDS with read replicas. C. Stateful instances for the web and application tier in an Auto Scaling group monitored with CloudWatch, and multi-AZ RDS. D. Stateless instances for the web and application tier synchronized using ElastiCache Memcached in an Auto Scaling group monitored with CloudWatch, and multi-AZ RDS.

A. Stateless instances for the web and application tier synchronized using ElastiCache Memcached in an Auto Scaling group monitored with CloudWatch, and RDS with read replicas.

Automated backups are enabled by default for a new DB Instance. A. TRUE B. FALSE

A. TRUE

The new DB Instance that is created when you promote a Read Replica retains the backup window period. A. TRUE B. FALSE

A. TRUE

When using IAM to control access to your RDS resources, the key names that can be used are case sensitive. For example, aws:CurrentTime is NOT equivalent to AWS:currenttime. A. TRUE B. FALSE

A. TRUE

An ERP application is deployed across multiple AZs in a single region. In the event of failure, the Recovery Time Objective (RTO) must be less than 3 hours, and the Recovery Point Objective (RPO) must be 15 minutes. The customer realizes that data corruption occurred roughly 1.5 hours ago. What DR strategy could be used to achieve this RTO and RPO in the event of this kind of failure? A. Take hourly DB backups to S3, with transaction logs stored in S3 every 5 minutes. B. Use synchronous database master-slave replication between two availability zones. C. Take hourly DB backups to EC2 Instance store volumes with transaction logs stored in S3 every 5 minutes. D. Take 15 minute DB backups stored in Glacier with transaction logs stored in S3 every 5 minutes.

A. Take hourly DB backups to S3, with transaction logs stored in S3 every 5 minutes.

Please select the most correct answer regarding the persistence of the Amazon Instance Store: A. The data on an instance store volume persists only during the life of the associated Amazon EC2 instance B. The data on an instance store volume is lost when the security group rule of the associated instance is changed. C. The data on an instance store volume persists even after associated Amazon EC2 instance is deleted

A. The data on an instance store volume persists only during the life of the associated Amazon EC2 instance

When trying to grant an Amazon account access to S3 using access control lists, which method of identification should you use to identify that account? A. The email address of the account or the canonical user ID B. The AWS account number C. The ARN D. An email address with a 2FA token

A. The email address of the account or the canonical user ID

You have launched an Amazon Elastic Compute Cloud (EC2) instance into a public subnet with a primary private IP address assigned, an Internet gateway is attached to the VPC, and the public route table is configured to send all Internet-based traffic to the Internet gateway. The instance security group is set to allow all outbound traffic, but the instance cannot access the Internet. Why is the Internet unreachable from this instance? A. The instance does not have a public IP address. B. The internet gateway security group must allow all outbound traffic. C. The instance security group must allow all inbound traffic. D. The instance "Source/Destination check" property must be enabled.

A. The instance does not have a public IP address.

You have a load balancer configured for VPC, and all back-end Amazon EC2 instances are in service. However, your web browser times out when connecting to the load balancer's DNS name. Which options are probable causes of this behavior? (Choose two.) A. The load balancer was not configured to use a public subnet with an Internet gateway configured B. The Amazon EC2 instances do not have a dynamically allocated private IP address C. The security groups or network ACLs are not properly configured for web traffic. D. The load balancer is not configured in a private subnet with a NAT instance. E. The VPC does not have a VGW configured.

A. The load balancer was not configured to use a public subnet with an Internet gateway configured C. The security groups or network ACLs are not properly configured for web traffic.

If I have multiple Read Replicas for my master DB Instance and I promote one of them, what happens to the rest of the Read Replicas? A. The remaining Read Replicas will still replicate from the older master DB Instance B. The remaining Read Replicas will be deleted C. The remaining Read Replicas will be combined to one read replica

A. The remaining Read Replicas will still replicate from the older master DB Instance

Amazon S3 buckets in all other regions (other than US Standard) provide read-after-write consistency for PUTS of new objects. A. True B. False

A. True

If I modify a DB Instance or the DB parameter group associated with the instance, I should reboot the instance for the changes to take effect? A. True B. False

A. True

It is possible to transfer a reserved instance from one Availability Zone to another. A. True B. False

A. True

Multi-AZ deployment is supported for Microsoft SQL Server DB Instances. A. True B. False

A. True

Reserved Instances are available for Multi-AZ Deployments. A. True B. False

A. True

SQL Server stores logins and passwords in the master database. A. True B. False

A. True

Using Amazon IAM, I can give permissions based on organizational groups? A. True B. False

A. True

Using SAML (Security Assertion Markup Language 2.0) you can give your federated users single sign-on (SSO) access to the AWS Management Console. A. True B. False

A. True

When creating an RDS instance you can select which availability zone in which to deploy your instance. A. True B. False

A. True

When you create new subnets within a custom VPC, by default they can communicate with each other, across availability zones. A. True B. False

A. True

You can add multiple volumes to an EC2 instance and then create your own RAID 5/RAID 10/RAID 0 configurations using those volumes. A. True B. False

A. True

You are deploying an application on EC2 that must call AWS APIs. What method of securely passing credentials to the application should you use? A. Use AWS Identity and Access Management roles for EC2 instances. B. Pass API credentials to the instance using instance userdata. C. Embed the API credentials into your JAR files. D. Store API credentials as an object in Amazon Simple Storage Service.

A. Use AWS Identity and Access Management roles for EC2 instances.
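
For illustration, a minimal sketch of why roles are the secure option: with a role attached to the instance, boto3's default credential chain fetches temporary credentials from the instance metadata service, so no keys ever appear in code or configuration:

```python
import boto3

# No access keys are passed anywhere: on an EC2 instance with an IAM role
# attached, the SDK automatically retrieves (and rotates) temporary
# credentials from the instance metadata service.
s3 = boto3.client("s3")
print([b["Name"] for b in s3.list_buckets()["Buckets"]])
```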

To serve web traffic for a popular product, your chief financial officer and IT director have purchased 10 m1.large heavy utilization Reserved Instances (RIs), evenly spread across two Availability Zones; Route 53 is used to deliver the traffic to an Elastic Load Balancer (ELB). After several months, the product grows even more popular and you need additional capacity. As a result, your company purchases two c3.2xlarge medium utilization RIs. You register the two c3.2xlarge instances with your ELB and quickly find that the m1.large instances are at 100% of capacity and the c3.2xlarge instances have significant capacity that's unused. Which option is the most cost effective and uses EC2 capacity most effectively? A. Use a separate ELB for each instance type and distribute load to ELBs with Route 53 weighted round robin B. Configure an Auto Scaling group and launch configuration with ELB to add up to 10 more on-demand m1.large instances when triggered by CloudWatch; shut off c3.2xlarge instances C. Route traffic to EC2 m1.large and c3.2xlarge instances directly using Route 53 latency based routing and health checks; shut off ELB D. Configure ELB with two c3.2xlarge instances and use an on-demand Auto Scaling group for up to two additional c3.2xlarge instances; shut off m1.large instances.

A. Use a separate ELB for each instance type and distribute load to ELBs with Route 53 weighted round robin

You are using an m1.small EC2 Instance with one 300 GB EBS volume to host a relational database. You determined that write throughput to the database needs to be increased. Which of the following approaches can help achieve this? Choose 2 answers A. Use an array of EBS volumes. B. Enable Multi-AZ mode. C. Place the instance in an Auto Scaling Groups D. Add an EBS volume and place into RAID 5. E. Increase the size of the EC2 Instance. F. Put the database behind an Elastic Load Balancer.

A. Use an array of EBS volumes. E. Increase the size of the EC2 Instance.

A ______ is an individual, system, or application that interacts with AWS programmatically. A. User B. AWS Account C. Group D. Role

A. User

You need to pass a custom script to new Amazon Linux instances created in your Auto Scaling group. Which feature allows you to accomplish this? A. User data B. EC2Config service C. IAM roles D. AWS Config

A. User data (keywords: "pass a custom script")
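
For illustration, a minimal boto3 sketch of passing a bootstrap script as user data — the AMI ID is a hypothetical placeholder; in an Auto Scaling group, the same script would go into the launch configuration:

```python
import boto3

ec2 = boto3.client("ec2")

# The user data script runs once, as root, on first boot.
script = """#!/bin/bash
yum update -y
echo "bootstrapped" > /tmp/bootstrap.log
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder Amazon Linux AMI
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    UserData=script,  # boto3 base64-encodes this for you
)
```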

You try to connect via SSH to a newly created Amazon EC2 instance and get one of the following error messages: "Network error: Connection timed out" or "Error connecting to [instance], reason: -> Connection timed out: connect," You have confirmed that the network and security group rules are configured correctly and the instance is passing status checks. What steps should you take to identify the source of the behavior? Choose 2 answers A. Verify that the private key file corresponds to the Amazon EC2 key pair assigned at launch. B. Verify that your IAM user policy has permission to launch Amazon EC2 instances. C. Verify that you are connecting with the appropriate user name for your AMI. D. Verify that the Amazon EC2 Instance was launched with the proper IAM role. E. Verify that your federation trust to AWS has been established.

A. Verify that the private key file corresponds to the Amazon EC2 key pair assigned at launch. C. Verify that you are connecting with the appropriate user name for your AMI. Reason: the connection is over SSH, and the network and security group rules have already been checked.

What does Amazon EC2 provide? A. Virtual servers in the Cloud. B. A platform to run code (Java, PHP, Python), paying on an hourly basis. C. Computer Clusters in the Cloud. D. Physical servers, remotely managed by the customer.

A. Virtual servers in the Cloud.

A group can contain many users. Can a user belong to multiple groups? A. Yes B. No C. Only if they are using two factor authentication D. Only in VPC

A. Yes

Can I initiate a "forced failover" for my Oracle Multi-AZ DB Instance deployment? A. Yes B. Only in certain regions C. Only in VPC D. No

A. Yes

Does Route 53 support MX Records? A. Yes B. It supports CNAME records, but not MX records. C. No D. Only Primary MX records. Secondary MX records are not supported.

A. Yes

Is the encryption of connections between my application and my DB Instance using SSL for the MySQL server engines available? A. Yes B. Only in VPC C. Only in certain regions D. No

A. Yes

Can I attach more than one policy to a particular entity? A. Yes always B. Only if within GovCloud C. No D. Only if within VPC

A. Yes always

Can you create IAM security credentials for existing users? A. Yes, existing users can have security credentials associated with their account. B. No, IAM requires that all users who have credentials set up are not existing users C. No, security credentials are created within GROUPS, and then users are associated to GROUPS at a later time. D. Yes, but only IAM credentials, not ordinary security credentials.

A. Yes, existing users can have security credentials associated with their account.

Are you able to integrate a multi-factor token service with the AWS Platform? A. Yes, using the AWS multi-factor token devices to authenticate users on the AWS platform. B. No, you cannot integrate multi-factor token devices with the AWS platform. C. Yes, you can integrate private multi-factor token devices to authenticate users to the AWS platform.

A. Yes, using the AWS multi-factor token devices to authenticate users on the AWS platform.

Does AWS allow for the use of Multi Factor Authentication tokens? A. Yes, with both hardware or virtual MFA devices B. Yes, but only virtual MFA devices. C. Yes, but only physical (hardware) MFA devices. D. No

A. Yes, with both hardware or virtual MFA devices

You are implementing a URL white-listing system for a company that wants to restrict outbound HTTPS connections to specific domains from their EC2-hosted applications. You deploy a single EC2 instance running proxy software and configure it to accept traffic from all subnets and EC2 instances in the VPC. You configure the proxy to only pass through traffic to domains that you define in its whitelist configuration. You have a nightly maintenance window of 10 minutes where all instances fetch new software updates. Each update is about 200 MB in size, and there are 500 instances in the VPC that routinely fetch updates. After a few days you notice that some machines are failing to successfully download some, but not all, of their updates within the maintenance window. The download URLs used for these updates are correctly listed in the proxy's whitelist configuration, and you are able to access them manually using a web browser on the instances. What might be happening? (Choose two.) A. You are running the proxy on an undersized EC2 instance type so network throughput is not sufficient for all instances to download their updates in time. B. You are running the proxy on a sufficiently-sized EC2 instance in a private subnet and its network throughput is being throttled by a NAT running on an undersized EC2 instance. C. The route table for the subnets containing the affected EC2 instances is not configured to direct network traffic for the software update locations to the proxy. D. You have not allocated enough storage to the EC2 instance running the proxy so the network buffer is filling up, causing some requests to fail. E. You are running the proxy in a public subnet but have not allocated enough EIPs to support the needed network throughput through the Internet Gateway (IGW).

A. You are running the proxy on an undersized EC2 instance type so network throughput is not sufficient for all instances to download their updates in time. B. You are running the proxy on a sufficiently-sized EC2 instance in a private subnet and its network throughput is being throttled by a NAT running on an undersized EC2 instance.

After creating a new AWS account, you use the API to request 40 on-demand EC2 instances in a single AZ. After 20 successful requests, subsequent requests failed. What could be a reason for this issue, and how would you resolve it? A. You encountered a soft limit of 20 instances per region. Submit the limit increase form and retry the failed requests once approved. B. AWS allows you to provision no more than 20 instances per Availability Zone. Select a different Availability Zone and retry the failed request. C. You need to use Amazon Virtual Private Cloud (VPC) in order to provision more than 20 instances in a single Availability Zone. Simply terminate the resources already provisioned and re-launch them all in a VPC. D. You encountered an API throttling situation and should try the failed requests using an exponential decay retry algorithm.

A. You encountered a soft limit of 20 instances per region. Submit the limit increase form and retry the failed requests once approved.

My Read Replica appears "stuck" after a Multi-AZ failover and is unable to obtain or apply updates from the source DB Instance. What do I do? A. You will need to delete the Read Replica and create a new one to replace it. B. You will need to disassociate the DB Engine and re associate it. C. The instance should be deployed to Single AZ and then moved to Multi- AZ once again D. You will need to delete the DB Instance and create a new one to replace it.

A. You will need to delete the Read Replica and create a new one to replace it.

The SQL Server _____ feature is an efficient means of copying data from a source database to your DB Instance. It writes the data that you specify to a data file, such as an ASCII file. A. bulk copy B. group copy C. dual copy D. mass copy

A. bulk copy

If I want my instance to run on a single-tenant hardware, which value do I have to set the instance's tenancy attribute to? A. dedicated B. isolated C. one D. reserved

A. dedicated
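
For illustration, a minimal boto3 sketch — the AMI ID and instance type are hypothetical placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Tenancy is set per instance (or VPC-wide) at launch time.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    Placement={"Tenancy": "dedicated"},  # run on single-tenant hardware
)
```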

You are hosting a website in Ireland called acloud.guru and you decide to have a static DR site available on S3 in the event that your primary site goes down. Your bucket name is also called "acloudguru". What would be the S3 URL of the static website? A. https://acloudguru.s3-website-eu-west-1.amazonaws.com B. https://s3-eu-east-1.amazonaws.com/acloudguru C. https://acloudguru.s3-website-us-east-1.amazonaws.com D. https://s3-eu-central-1.amazonaws.com/acloudguru

A. https://acloudguru.s3-website-eu-west-1.amazonaws.com

A _____ is a document that provides a formal statement of one or more permissions. A. policy B. permission C. Role D. resource

A. policy

In regards to IAM you can edit user properties later, but you cannot use the console to change the _____. A. user name B. password C. default group

A. user name

*What is the best way to manage RESTful APIs?* * API Gateway * EC2 servers * Lambda * AWS Batch

API Gateway * Theoretically EC2 servers can be used for managing the APIs, but if you can do it easily through API Gateway, why would you ever consider EC2 servers? Lambda and Batch are used for executing the code.

*In the past, someone made some changes to your security group, and as a result an instance is not accessible by the users for some time. This resulted in nasty downtime for the application. You are looking to find out what change has been made in the system, and you want to track it. Which AWS service are you going to use for this?* * AWS Config * Amazon CloudWatch * AWS CloudTrail * AWS Trusted Advisor

AWS Config * AWS Config maintains the configuration of the system and helps you identify what change was made in it.

*You are a developer and want to deploy your application in AWS. You don't have an infrastructure background and you are not sure about how to use infrastructure within AWS. You are looking to deploy your application in such a way that the infrastructure scales on its own, and at the same time you don't have to deal with managing it. Which AWS service are you going to choose for this?* * AWS Config * AWS Lambda * AWS Elastic Beanstalk * Amazon EC2 servers and Auto Scaling

AWS Elastic Beanstalk * AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications. You can simply upload your code and Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, auto-scaling to application health monitoring.

You are working with a customer who is using Chef configuration management in their data center. What service is designed to let the customer leverage existing Chef recipes in AWS?

AWS OpsWorks

*Your company has more than 20 business units, and each business unit has its own account in AWS. Which AWS service would you choose to manage the billing across all the different AWS accounts?* * AWS Organizations * AWS Trusted Advisor * AWS Cost Advisor * AWS Billing Console

AWS Organizations * Using AWS Organizations, you can manage the billing from various AWS accounts.

*You are designing an e-commerce order management web site where your users can order different types of goods. You want to decouple the architecture and would like to separate the ordering process from shipping. Depending on the shipping priority, you want to separate queue running for standard shipping versus priority shipping. Which AWS service would you consider for this?* * AWS CloudWatch * AWS CloudWatch Events * AWS API Gateway * AWS SQS

AWS SQS * Using SQS, you can decouple the ordering and shipping processes, and you can create separate queues for the ordering and shipping processes.

*What is the AWS service you are going to use to monitor the service limit of your EC2 instance?* * EC2 dashboard * AWS Trusted Advisor * AWS CloudWatch * AWS Config

AWS Trusted Advisor * Using Trusted Advisor, you can monitor the service limits for the EC2 instance.

*You are running a job in an EMR cluster, and the job is running for a long period of time. You want to add additional horsepower to your cluster, and at the same time you want to make sure it is cost effective. What is the best way of solving this problem?* * Add more on-demand EC2 instances for your task node * Add more on-demand EC2 instances for your core node * Add more spot instances for your task node * Add more reserved instances for your task node

Add more spot instances for your task node * You can add more spot instances to your task node to finish the job early. Spot instances are the cheapest in cost, so this will make sure the solution is cost effective.

*You have created a custom subnet, but you forgot to add a route for Internet connectivity. As a result, all the web servers running in that subnet don't have any Internet access. How will you make sure all the web servers can access the Internet?* * Attach a virtual private gateway to the subnet for destination 0.0.0.0/0 * Attach an Internet gateway to the subnet for destination 0.0.0.0/0 * Attach an Internet gateway to the security group of EC2 instances for the destination 0.0.0.0/0 * Attach a VPC endpoint to the subnet

Attach an Internet gateway to the subnet for destination 0.0.0.0/0 * You need to attach an Internet gateway so that the subnet can talk to the Internet. A virtual private gateway is used to create a VPN connection. You cannot attach an Internet gateway to an EC2 instance; it has to be at the subnet level. A VPC endpoint is used so S3 or DynamoDB can communicate with the VPC, bypassing the Internet.
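
For illustration, a minimal boto3 sketch of wiring up Internet access for a subnet — the VPC and route table IDs are hypothetical placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

VPC_ID = "vpc-0123456789abcdef0"          # placeholder VPC ID
ROUTE_TABLE_ID = "rtb-0123456789abcdef0"  # route table of the web subnet

# 1. Create an Internet gateway and attach it to the VPC.
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=VPC_ID)

# 2. Route all non-local traffic from the subnet to the gateway.
ec2.create_route(
    RouteTableId=ROUTE_TABLE_ID,
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId=igw_id,
)
```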

A web startup runs its very successful social news application on Amazon EC2 with an Elastic Load Balancer, an Auto Scaling group of Java/Tomcat application servers, and DynamoDB as the data store. The main web application runs best on m2.xlarge instances since it is highly memory-bound. Each new deployment requires semi-automated creation and testing of a new AMI for the application servers, which takes quite a while and is therefore only done once per week. Recently, a new chat feature has been implemented in Node.js and waits to be integrated in the architecture. First tests show that the new component is CPU-bound. Because the company has some experience with using Chef, they decided to streamline the deployment process and use AWS OpsWorks as an application life cycle tool to simplify management of the application and reduce the deployment cycles. What configuration in AWS OpsWorks is necessary to integrate the new chat module in the most cost-efficient and flexible way? A. Create one AWS OpsWorks stack, create one AWS OpsWorks layer, create one custom recipe B. Create one AWS OpsWorks stack, create two AWS OpsWorks layers, create one custom recipe C. Create two AWS OpsWorks stacks, create two AWS OpsWorks layers, create one custom recipe D. Create two AWS OpsWorks stacks, create two AWS OpsWorks layers, create two custom recipes

B or C (most likely C: the question's closing words, "in the most cost-efficient and flexible way", point to C; B matches real-world best practice, but not necessarily the exam answer)

In the 'Detailed' monitoring data available for your Amazon EBS volumes, Provisioned IOPS volumes automatically send _____ minute metrics to Amazon CloudWatch. A. 3 B. 1 C. 5 D. 2

B. 1

What is the maximum response time for a Business level Premium Support case? A. 120 seconds B. 1 hour C. 10 minutes D. 12 hours

B. 1 hour

Which procedure for backing up a relational database on EC2 that is using a set of RAIDed EBS volumes for storage minimizes the time during which the database cannot be written to and results in a consistent backup? A. 1. Detach EBS volumes, 2. Start EBS snapshot of volumes, 3. Re-attach EBS volumes B. 1. Stop the EC2 Instance. 2. Snapshot the EBS volumes C. 1. Suspend disk I/O, 2. Create an image of the EC2 Instance, 3. Resume disk I/O D. 1. Suspend disk I/O, 2. Start EBS snapshot of volumes, 3. Resume disk I/O E. 1. Suspend disk I/O, 2. Start EBS snapshot of volumes, 3. Wait for snapshots to complete, 4. Resume disk I/O

B. 1. Stop the EC2 Instance. 2. Snapshot the EBS volumes. D won't work because the volume is RAIDed; the instance must be fully stopped and the cache flushed.

You can modify the backup retention period for AWS RDS. Valid values are 0 (for no backup retention) to a maximum of _____ days. A. 45 B. 35 C. 15 D. 5

B. 35

You can modify the backup retention period for RDS; valid values are 0 (for no backup retention) to a maximum of ___________ days. A. 45 B. 35 C. 15 D. 5

B. 35

What is a placement group? A. A collection of Auto Scaling groups in the same region B. A feature that enables EC2 instances to interact with each other via high bandwidth, low latency connections C. A collection of authorized CloudFront edge locations for a distribution D. A collection of Elastic Load Balancers in the same Region or Availability Zone

B. A feature that enables EC2 instances to interact with each other via high bandwidth, low latency connections

What is Oracle SQL Developer? A. An AWS developer who is an expert in Amazon RDS using both the Oracle and SQL Server DB engines B. A graphical Java tool distributed without cost by Oracle. C. It is a variant of the SQL Server Management Studio designed by Microsoft to support Oracle DBMS functionalities D. A different DBMS released by Microsoft free of cost

B. A graphical Java tool distributed without cost by Oracle.

Which service enables AWS customers to manage users and permissions in AWS? A. AWS Access Control Service (ACS) B. AWS Identity and Access Management (IAM) C. AWS Identity Manager (AIM) D. AWS Security Groups

B. AWS Identity and Access Management (IAM)

You are running a news website in the eu-west-1 region that updates every 15 minutes. The website has a worldwide audience. It uses an Auto Scaling group behind an Elastic Load Balancer and an Amazon RDS database. Static content resides on Amazon S3 and is distributed through Amazon CloudFront. Your Auto Scaling group is set to trigger a scale-up event at 60% CPU utilization. You use an Amazon RDS extra large DB instance with 10,000 Provisioned IOPS; its CPU utilization is around 80%, while freeable memory is in the 2 GB range. Web analytics reports show that the average load time of your web pages is around 1.5 to 2 seconds, but your SEO consultant wants to bring down the average load time to under 0.5 seconds. How would you improve page load times for your users? (Choose three.) A. Lower the scale-up trigger of your Auto Scaling group to 30% so it scales more aggressively. B. Add an Amazon ElastiCache caching layer to your application for storing sessions and frequent DB queries C. Configure Amazon CloudFront dynamic content support to enable caching of re-usable content from your site D. Switch Amazon RDS database to the high memory extra large Instance type E. Set up a second installation in another region, and use the Amazon Route 53 latency-based routing feature to select the right region.

B. Add an Amazon ElastiCache caching layer to your application for storing sessions and frequent DB queries C. Configure Amazon CloudFront dynamic content support to enable caching of re-usable content from your site E. Set up a second installation in another region, and use the Amazon Route 53 latency-based routing feature to select the right region.

Which of the following will occur when an EC2 instance in a VPC with an associated Elastic IP is stopped and started? (Choose 2 answers) A. The Elastic IP will be dissociated from the instance B. All data on instance-store devices will be lost C. All data on EBS (Elastic Block Store) devices will be lost D. The ENI (Elastic Network Interface) is detached E. The underlying host for the instance is changed

B. All data on instance-store devices will be lost E. The underlying host for the instance is changed

A company needs to monitor the read and write IOPs metrics for their AWS MySQL RDS instance and send real-time alerts to their operations team. Which AWS services can accomplish this? (Choose two.) A. Amazon Simple Email Service B. Amazon CloudWatch C. Amazon Simple Queue Service D. Amazon Route 53 E. Amazon Simple Notification Service

B. Amazon CloudWatch E. Amazon Simple Notification Service
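
For illustration, a minimal boto3 sketch of the pattern — a CloudWatch alarm on the RDS ReadIOPS metric that notifies an SNS topic; the DB identifier, threshold, and topic ARN are hypothetical placeholders:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm on RDS ReadIOPS; the SNS topic fans the alert out to the ops team.
cloudwatch.put_metric_alarm(
    AlarmName="rds-read-iops-high",
    Namespace="AWS/RDS",
    MetricName="ReadIOPS",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "mydb"}],  # placeholder
    Statistic="Average",
    Period=60,
    EvaluationPeriods=3,
    Threshold=1000.0,  # tune to the workload
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # placeholder
)
```

A matching alarm on WriteIOPS would use the same call with MetricName="WriteIOPS".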

If I want to run a database in an Amazon instance, which is the most recommended Amazon storage option? A. Amazon Instance Storage B. Amazon EBS C. You can't run a database inside an Amazon instance. D. Amazon S3

B. Amazon EBS

_____ is a durable, block-level storage volume that you can attach to a single, running Amazon EC2 instance. A. Amazon S3 B. Amazon EBS C. Amazon EFS D. All of these

B. Amazon EBS

When using the following AWS services, which should be implemented in multiple Availability Zones for high availability solutions? Choose 2 A. Amazon DynamoDB B. Amazon Elastic Compute Cloud (EC2) C. Amazon Elastic Load Balancing D. Amazon Simple Notification Service (SNS) E. Amazon Simple Storage Service (S3)

B. Amazon Elastic Compute Cloud (EC2) C. Amazon Elastic Load Balancing

Which services allow the customer to retain full administrative privileges of the underlying EC2 instances? (Choose two.) A. Amazon Relational Database Service B. Amazon Elastic Map Reduce C. Amazon ElastiCache D. Amazon DynamoDB E. AWS Elastic Beanstalk

B. Amazon Elastic Map Reduce E. AWS Elastic Beanstalk

A company is deploying a new two-tier web application in AWS. The company has limited staff and requires high availability, and the application requires complex queries and table joins. Which configuration provides the solution for the company's requirements? A. MySQL Installed on two Amazon EC2 Instances in a single Availability Zone B. Amazon RDS for MySQL with Multi-AZ C. Amazon ElastiCache D. Amazon DynamoDB

B. Amazon RDS for MySQL with Multi-AZ

You are developing a highly available web application using stateless web servers. Which services are suitable for storing session state data? Choose 3 answers A. Amazon CloudWatch B. Amazon Relational Database Service (RDS) C. Elastic Load Balancing D. Amazon ElastiCache E. AWS Storage Gateway F. Amazon DynamoDB

B. Amazon Relational Database Service (RDS) D. Amazon ElastiCache F. Amazon DynamoDB

Your company plans to host a large donation website on Amazon Web Services (AWS). You anticipate a large and undetermined amount of traffic that will create many database writes. To be certain that you do not drop any writes to a database hosted on AWS. Which service should you use? A. Amazon RDS with provisioned IOPS up to the anticipated peak write throughput. B. Amazon Simple Queue Service (SQS) for capturing the writes and draining the queue to write to the database. C. Amazon ElastiCache to store the writes until the writes are committed to the database. D. Amazon DynamoDB with provisioned write throughput up to the anticipated peak write throughput.

B. Amazon Simple Queue Service (SQS) for capturing the writes and draining the queue to write to the database.
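To make the pattern concrete, here is a minimal sketch of the SQS buffering idea: the front end enqueues each write, and a worker drains the queue at a rate the database can sustain. The queue URL is a placeholder and `write_to_database` is a hypothetical helper, not part of the question.

```python
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/donation-writes"  # placeholder

def capture_write(record):
    """Front end: enqueue the write instead of hitting the DB directly."""
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(record))

def drain_queue(write_to_database):
    """Worker: pull batches and commit them at a sustainable pace."""
    while True:
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20
        )
        for msg in resp.get("Messages", []):
            write_to_database(json.loads(msg["Body"]))  # hypothetical DB helper
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```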

What happens when you create a topic on Amazon SNS? A. The topic is created, and it has the name you specified for it. B. An ARN (Amazon Resource Name) is created. C. You can create a topic on Amazon SQS, not on Amazon SNS. D. This question doesn't make sense.

B. An ARN (Amazon Resource Name) is created.
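You can see this directly with boto3: the create call returns the new topic's ARN immediately (the topic name below is a placeholder).

```python
import boto3

sns = boto3.client("sns")

# Creating a topic returns its ARN (Amazon Resource Name).
response = sns.create_topic(Name="my-demo-topic")  # placeholder name
print(response["TopicArn"])
# e.g. arn:aws:sns:us-east-1:123456789012:my-demo-topic
```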

What does Amazon Elastic Beanstalk provide? A. A scalable storage appliance on top of Amazon Web Services. B. An application container on top of Amazon Web Services. C. A service by this name doesn't exist. D. A scalable cluster of EC2 instances.

B. An application container on top of Amazon Web Services.

You have developed a new web application in us-west-2 that requires six Amazon Elastic Compute Cloud (EC2) instances running at all times. You have three Availability Zones available in that region (us-west-2a, us-west-2b, and us-west-2c). You need 100 percent fault tolerance if any single Availability Zone in us-west-2 becomes unavailable. Each choice below lists two answers; select the choice in which BOTH answers are correct. A. Answer 1 - us-west-2a with two EC2 instances, us-west-2b with two EC2 instances, and us-west-2c with two EC2 instances. Answer 2 - us-west-2a with six EC2 instances, us-west-2b with six EC2 instances, and us-west-2c with no EC2 instances B. Answer 1 - us-west-2a with six EC2 instances, us-west-2b with six EC2 instances, and us-west-2c with no EC2 instances. Answer 2 - us-west-2a with three EC2 instances, us-west-2b with three EC2 instances, and us-west-2c with three EC2 instances. C. Answer 1 - us-west-2a with three EC2 instances, us-west-2b with three EC2 instances, and us-west-2c with no EC2 instances. Answer 2 - us-west-2a with three EC2 instances, us-west-2b with three EC2 instances, and us-west-2c with three EC2 instances. D. Answer 1 - us-west-2a with three EC2 instances, us-west-2b with three EC2 instances, and us-west-2c with three EC2 instances. Answer 2 - us-west-2a with four EC2 instances, us-west-2b with two EC2 instances, and us-west-2c with two EC2 instances.

B. Answer 1 - us-west-2a with six EC2 instances, us-west-2b with six EC2 instances, and us-west-2c with no EC2 instances. Answer 2 - us-west-2a with three EC2 instances, us-west-2b with three EC2 instances, and us-west-2c with three EC2 instances.

You work for a famous bakery that is deploying a hybrid cloud approach. Its legacy IBM AS400 servers will remain on premises within its own datacenter; however, they will need to be able to communicate with the AWS environment over a site-to-site VPN connection. What do you need to do to establish the VPN connection? A. Connect to the environment using AWS Direct Connect. B. Assign a public IP address to your Amazon VPC Gateway. C. Create a dedicated NAT and deploy this to the public subnet. D. Update your route table to add a route for the NAT to 0.0.0.0/0.

B. Assign a public IP address to your Amazon VPC Gateway.

You work for a famous bakery that is deploying a hybrid cloud approach. Its legacy IBM AS400 servers will remain on premises within its own datacenter; however, they will need to be able to communicate with the AWS environment over a site-to-site VPN connection. What do you need to do to establish the VPN connection? A. Connect to the environment using AWS Direct Connect. B. Assign a public IP address to your Amazon VPG (Virtual Private Gateway). C. Create a dedicated NAT and deploy this to the public subnet. D. Update your route table to add a route for the NAT to 0.0.0.0/0.

B. Assign a public IP address to your Amazon VPG (Virtual Private Gateway).

A company is building software on AWS that requires access to various AWS services. Which configuration should be used to ensure that AWS credentials (i.e., Access Key ID/Secret Access Key combination) are not compromised? A. Enable Multi-Factor Authentication for your AWS root account. B. Assign an IAM role to the Amazon EC2 instance. C. Store the AWS Access Key ID/Secret Access Key combination in software comments. D. Assign an IAM user to the Amazon EC2 Instance.

B. Assign an IAM role to the Amazon EC2 instance.
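The practical payoff of an instance role is that no key ever appears in code. A minimal sketch, assuming the code runs on an EC2 instance with an attached role that permits S3 access:

```python
import boto3

# No access keys appear anywhere in the code or its configuration.
# On an EC2 instance with an attached IAM role, boto3 automatically
# fetches short-lived credentials from the instance metadata service
# and refreshes them before they expire.
s3 = boto3.client("s3")
print(s3.list_buckets()["Buckets"])
```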

You work for a construction company that has their production environment in AWS. The production environment consists of 3 identical web servers that are launched from a standard Amazon Linux AMI using Auto Scaling. The web servers are launched into the same public subnet and belong to the same security group. They also sit behind the same ELB. You decide to do some test and dev, and you launch a fourth EC2 instance into the same subnet and same security group. Annoyingly, your fourth instance does not appear to have internet connectivity. What could be the cause of this? A. You need to update your routing table so as to provide a route out for this instance. B. Assign an elastic IP address to the fourth instance. C. You have not configured a NAT in the public subnet. D. You have not configured a routable IP address in the host OS of the fourth instance.

B. Assign an elastic IP address to the fourth instance.

A company needs to deploy services to an AWS region which they have not previously used. The company currently has an AWS Identity and Access Management (IAM) role for the Amazon EC2 instances, which permits the instance to have access to Amazon DynamoDB. The company wants their EC2 instances in the new region to have the same privileges. How should the company achieve this? A. Create a new IAM role and associated policies within the new region B. Assign the existing IAM role to the Amazon EC2 instances in the new region C. Copy the IAM role and associated policies to the new region and attach it to the instances D. Create an Amazon Machine Image (AMI) of the instance and copy it to the desired region using the AMI Copy feature

B. Assign the existing IAM role to the Amazon EC2 instances in the new region

What are the two types of licensing options available for using Amazon RDS for Oracle? A. BYOL and Enterprise License B. BYOL and License Included C. Enterprise License and License Included D. Role based License and License Included

B. BYOL and License Included

How can I change the security group membership for interfaces owned by other AWS services, such as Elastic Load Balancing? A. Using all these methods B. By using the service-specific console or API/CLI commands C. None of these

B. By using the service-specific console or API/CLI commands

How can I change the security group membership for interfaces owned by other AWS services, such as Elastic Load Balancing? A. Using all these methods B. By using the service-specific console or API/CLI commands C. None of these D. Using Amazon EC2 API/CLI

B. By using the service-specific console or API/CLI commands

If you have chosen Multi-AZ deployment, in the event of an outage of your primary DB Instance, Amazon RDS automatically switches to the standby replica. The automatic failover mechanism simply changes the ______ record of the main DB Instance to point to the standby DB Instance. A. DNAME B. CNAME C. TXT D. MX

B. CNAME

A customer's nightly EMR job processes a single 2-TB data file stored on Amazon Simple Storage Service (S3). The EMR job runs on two On-Demand core nodes and three On-Demand task nodes. Which of the following may help reduce the EMR job completion time? Choose 2 answers A. Use three Spot Instances rather than three On-Demand instances for the task nodes. B. Change the input split size in the MapReduce job configuration. C. Use a bootstrap action to present the S3 bucket as a local filesystem. D. Launch the core nodes and task nodes within an Amazon Virtual Private Cloud. E. Adjust the number of simultaneous mapper tasks. F. Enable termination protection for the job flow.

B. Change the input split size in the MapReduce job configuration. E. Adjust the number of simultaneous mapper tasks.

What are the Amazon EC2 API tools? A. They don't exist. The Amazon EC2 AMI tools, instead, are used to manage permissions. B. Command-line tools to the Amazon EC2 web service. C. They are a set of graphical tools to manage EC2 instances. D. They don't exist. The Amazon API tools are a client interface to Amazon Web Services.

B. Command-line tools to the Amazon EC2 web service.

An existing application stores sensitive information on a non-boot Amazon EBS data volume attached to an Amazon Elastic Compute Cloud instance. Which of the following approaches would protect the sensitive data on an Amazon EBS volume? A. Upload your customer keys to AWS CloudHSM. Associate the Amazon EBS volume with AWS CloudHSM. Re-mount the Amazon EBS volume. B. Create and mount a new, encrypted Amazon EBS volume. Move the data to the new volume. Delete the old Amazon EBS volume. C. Unmount the EBS volume. Toggle the encryption attribute to True. Re-mount the Amazon EBS volume. D. Snapshot the current Amazon EBS volume. Restore the snapshot to a new, encrypted Amazon EBS volume. Mount the Amazon EBS volume

B. Create and mount a new, encrypted Amazon EBS volume. Move the data to the new volume. Delete the old Amazon EBS volume.

You run a website which hosts videos and you have two types of members: premium fee-paying members and free members. All videos uploaded by both your premium members and free members are processed by a fleet of EC2 instances which will poll SQS as videos are uploaded. However, you need to ensure that your premium fee-paying members' videos have a higher priority than your free members'. How do you design SQS? A. SQS allows you to set priorities on individual items within the queue, so simply set the fee-paying members at a higher priority than your free members. B. Create two SQS queues, one for premium members and one for free members. Program your EC2 fleet to poll the premium queue first and if empty, to then poll your free members SQS queue. C. SQS would not be suitable for this scenario. It would be much better to use SNS to encode the videos.

B. Create two SQS queues, one for premium members and one for free members. Program your EC2 fleet to poll the premium queue first and if empty, to then poll your free members SQS queue.
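A minimal sketch of the two-queue polling loop described in the answer; both queue URLs are placeholders.

```python
import boto3

sqs = boto3.client("sqs")
# Placeholder queue URLs for the two priority tiers.
PREMIUM_QUEUE = "https://sqs.us-east-1.amazonaws.com/123456789012/premium-videos"
FREE_QUEUE = "https://sqs.us-east-1.amazonaws.com/123456789012/free-videos"

def next_job():
    """Always drain the premium queue first; fall back to the free queue."""
    for queue_url in (PREMIUM_QUEUE, FREE_QUEUE):
        resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
        for msg in resp.get("Messages", []):
            sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
            return {"queue": queue_url, "body": msg["Body"]}
    return None  # both queues are empty
```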

What does ec2-create-group do with respect to the Amazon EC2 security groups? A. Creates a new rule inside the security group. B. Creates a new security group for use with your account. C. Creates a new group inside the security group. D. Groups the user-created security groups into a new group for easy access.

B. Creates a new security group for use with your account.

What does the ec2-create-group command do with respect to the Amazon EC2 security groups? A. Groups the user-created security groups into a new group for easy access. B. Creates a new security group for use with your account. C. Creates a new group inside the security group. D. Creates a new rule inside the security group.

B. Creates a new security group for use with your account.

What happens to the data on an instance if the instance reboots (intentionally or unintentionally)? A. Data will be lost B. Data persists C. Data may persist however cannot be sure

B. Data persists. (If the instance is stopped rather than rebooted, data on the instance store is lost.)

Which is an operational process performed by AWS for data security? A. AES-256 encryption of data stored on any shared storage device B. Decommissioning of storage devices using industry-standard practices C. Background virus scans of EBS volumes and EBS snapshots D. Replication of data across multiple AWS Regions E. Secure wiping of EBS data when an EBS volume is unmounted

B. Decommissioning of storage devices using industry-standard practices

IAM's Policy Evaluation Logic always starts with a default ____________ for every request, except for those that use the AWS account's root security credentials. A. Permit B. Deny C. Cancel

B. Deny

You work in the media industry and you have created a web application where users will be able to upload photos they create to your website. This web application must be able to call the S3 API in order to be able to function. Where should you store your API credentials whilst maintaining the maximum level of security? A. Save the API credentials to your php files. B. Don't save your API credentials. Instead create a role in IAM and assign this role to an EC2 instance when you first create it. C. Save your API credentials in a public Github repository. D. Pass API credentials to the instance using instance userdata.

B. Don't save your API credentials. Instead create a role in IAM and assign this role to an EC2 instance when you first create it.

Which of the following are characteristics of Amazon VPC subnets? (Choose two.) A. Each subnet spans at least 2 Availability Zones to provide a high-availability environment. B. Each subnet maps to a single Availability Zone. C. CIDR block mask of /25 is the smallest range supported. D. By default, all subnets can route between each other, whether they are private or public. E. Instances in a private subnet can communicate with the Internet only if they have an Elastic IP.

B. Each subnet maps to a single Availability Zone. D. By default, all subnets can route between each other, whether they are private or public.

Which of the following services allows you root access (i.e. you can log in using SSH)? A. Elastic Load Balancer B. Elastic Map Reduce C. ElastiCache D. RDS

B. Elastic Map Reduce

A customer needs corporate IT governance and cost oversight of all AWS resources consumed by its divisions. The divisions want to maintain administrative control of the discrete AWS resources they consume and keep those resources separate from the resources of other divisions. Which of the following options, when used together, will support the autonomy/control of divisions while enabling corporate IT to maintain governance and cost oversight? (Choose two.) A. Use AWS Consolidated Billing and disable AWS root account access for the child accounts. B. Enable IAM cross-account access for all corporate IT administrators in each child account. C. Create separate VPCs for each division within the corporate IT AWS account. D. Use AWS Consolidated Billing to link the divisions' accounts to a parent corporate account. E. Write all child AWS CloudTrail and Amazon CloudWatch logs to each child account's Amazon S3 'Log' bucket.

B. Enable IAM cross-account access for all corporate IT administrators in each child account. D. Use AWS Consolidated Billing to link the divisions' accounts to a parent corporate account.

You are a solutions architect working for a biotech company who is pioneering research in immunotherapy. They have developed a new cancer treatment that may be able to cure up to 94% of cancers. They store their research data on S3, however recently an intern accidentally deleted some critical files. You've been asked to prevent this from happening in the future. What options below can prevent this? A. Make sure the interns can only access data on S3 using signed URLs. B. Enable S3 versioning on the bucket & enable Multi-Factor Authentication (MFA) on the bucket. C. Use S3 Infrequently Accessed storage to store the data on. D. Create an IAM bucket policy that disables deletes.

B. Enable S3 versioning on the bucket & enable Multi-Factor Authentication (MFA) on the bucket.
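For reference, here is a minimal boto3 sketch of enabling versioning (the bucket name is a placeholder). MFA Delete is shown commented out because it additionally requires the root account's MFA device; the serial and token below are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Versioning keeps every overwritten or deleted object recoverable.
s3.put_bucket_versioning(
    Bucket="research-data",                     # placeholder bucket
    VersioningConfiguration={"Status": "Enabled"},
)

# MFA Delete must be enabled by the root account using its MFA device:
# s3.put_bucket_versioning(
#     Bucket="research-data",
#     VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
#     MFA="arn:aws:iam::123456789012:mfa/root-device 123456",  # placeholders
# )
```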

A customer needs to capture all client connection information from their load balancer every five minutes. The company wants to use this data for analyzing traffic patterns and troubleshooting their applications. Which of the following options meets the customer requirements? A. Enable AWS CloudTrail for the load balancer. B. Enable access logs on the load balancer. C. Install the Amazon CloudWatch Logs agent on the load balancer. D. Enable Amazon CloudWatch metrics on the load balancer.

B. Enable access logs on the load balancer.
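A minimal sketch of turning this on for a Classic Load Balancer with boto3; the load balancer name, bucket, and prefix are placeholders. Note that the emit interval can be 5 or 60 minutes, which is why access logs fit the "every five minutes" requirement.

```python
import boto3

elb = boto3.client("elb")  # Classic Load Balancer API

# Access logs capture client connection details (source IP, latencies,
# request paths) and are delivered to S3 at the chosen interval.
elb.modify_load_balancer_attributes(
    LoadBalancerName="my-load-balancer",        # placeholder
    LoadBalancerAttributes={
        "AccessLog": {
            "Enabled": True,
            "S3BucketName": "my-elb-logs",       # placeholder bucket
            "S3BucketPrefix": "prod",
            "EmitInterval": 5,                   # minutes, per the question
        }
    },
)
```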

You have multiple Amazon EC2 instances running in a cluster across multiple Availability Zones within the same region. What combination of the following should be used to ensure the highest network performance (packets per second), lowest latency, and lowest jitter? (Choose three.) A. Amazon EC2 placement groups B. Enhanced networking C. Amazon PV AMI D. Amazon HVM AMI E. Amazon Linux F. Amazon VPC

B. Enhanced networking D. Amazon HVM AMI F. Amazon VPC

Amazon Web Services offers 3 different levels of support. Which of the below are valid support levels? A. Corporate, Business, Developer B. Enterprise, Business, Developer C. Enterprise, Business, Free Tier D. Enterprise, Company, Free Tier

B. Enterprise, Business, Developer

Typically, you want your application to check whether a request generated an error before you spend any time processing results. The easiest way to find out if an error occurred is to look for an ______ node in the response from the Amazon RDS API. A. Incorrect B. Error C. FALSE

B. Error

A company has configured and peered two VPCs: VPC-1 and VPC-2. VPC-1 contains only private subnets, and VPC-2 contains only public subnets. The company uses a single AWS Direct Connect connection and private virtual interface to connect their on-premises network with VPC-1. Which two methods increases the fault tolerance of the connection to VPC-1? (Choose two.) A. Establish a hardware VPN over the internet between VPC-2 and the on-premises network. B. Establish a hardware VPN over the internet between VPC-1 and the on-premises network. C. Establish a new AWS Direct Connect connection and private virtual interface in the same region as VPC-2. D. Establish a new AWS Direct Connect connection and private virtual interface in a different AWS region than VPC-1. E. Establish a new AWS Direct Connect connection and private virtual interface in the same AWS region as VPC-1

B. Establish a hardware VPN over the internet between VPC-1 and the on-premises network. E. Establish a new AWS Direct Connect connection and private virtual interface in the same AWS region as VPC-1

True or False: Manually created DB Snapshots are deleted after the DB Instance is deleted. A. TRUE B. FALSE

B. FALSE

Amazon S3 buckets in all other regions (other than US Standard) do not provide eventual consistency for overwrite PUTS and DELETES. A. True B. False

B. False

Amazon S3 buckets in the US Standard region do not provide eventual consistency. A. True B. False

B. False

New database versions will automatically be applied to AWS RDS instances as they become available. A. True B. False

B. False

Placement Groups can be created across 2 or more Availability Zones. A. True B. False

B. False

You can select a specific Availability Zone in which to place your DynamoDB Table A. True B. False

B. False

In Amazon CloudWatch, which metric should I be checking to ensure that your DB Instance has enough free storage space? A. FreeStorage B. FreeStorageSpace C. FreeStorageVolume D. FreeDBStorageSpace

B. FreeStorageSpace

Which of the following cannot be used in EC2 to control who has access to specific EC2 instances? A. Security Groups B. IAM System C. SSH keys D. Windows passwords

B. IAM System

When should I choose Provisioned IOPS over Standard RDS storage? A. If you have batch-oriented workloads B. If you use production online transaction processing (OLTP) workloads. C. If you have workloads that are not sensitive to consistent performance D. If you infrequently read or write to the drive.

B. If you use production online transaction processing (OLTP) workloads.

You are a security architect working for a large antivirus company. The production environment has recently been moved to AWS and is in a public subnet. You are able to view the production environment over HTTP; however, when your customers try to update their virus definition files over a custom port, that port is blocked. You log in to the console and allow traffic in over the custom port. How long will this take to take effect? A. Straight away but to the new instances only. B. Immediately. C. After a few minutes this should take effect. D. Straight away to the new instances, but old instances must be stopped and restarted before the new rules apply.

B. Immediately.

Amazon RDS automated backups and DB Snapshots are currently supported for only the ______ storage engine. A. MyISAM B. InnoDB

B. InnoDB

For which of the following use cases are Simple Workflow Service (SWF) and Amazon EC2 an appropriate solution? (Choose two.) A. Using as an endpoint to collect thousands of data points per hour from a distributed fleet of sensors B. Managing a multi-step and multi-decision checkout process of an e-commerce website C. Orchestrating the execution of distributed and auditable business processes D. Using as an SNS (Simple Notification Service) endpoint to trigger execution of video transcoding jobs E. Using as a distributed session store for your web application

B. Managing a multi-step and multi-decision checkout process of an e-commerce website C. Orchestrating the execution of distributed and auditable business processes

Which of the following are use cases for Amazon DynamoDB? (Choose three) A. Storing BLOB data. B. Managing web sessions. C. Storing JSON documents. D. Storing metadata for Amazon S3 objects. E. Running relational joins and complex updates. F. Storing large amounts of infrequently accessed data.

B. Managing web sessions. C. Storing JSON documents. D. Storing metadata for Amazon S3 objects.

You are a systems administrator and you need to monitor the health of your production environment. You decide to do this using CloudWatch; however, you notice that you cannot see the health of every important metric in the default dashboard. Which of the following metrics do you need to design a custom CloudWatch metric for, when monitoring the health of your EC2 instances? A. CPU Usage B. Memory usage C. Disk read operations D. Network in E. Estimated charges

B. Memory usage
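Memory is not visible to the hypervisor, so an in-guest script or agent must report it as a custom metric. A minimal sketch with boto3; the namespace and percentage value are placeholders.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

def publish_memory_metric(instance_id, used_percent):
    """Push a custom memory metric from inside the guest OS."""
    cloudwatch.put_metric_data(
        Namespace="Custom/EC2",                  # placeholder namespace
        MetricData=[{
            "MetricName": "MemoryUtilization",
            "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
            "Value": used_percent,
            "Unit": "Percent",
        }],
    )
```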

Can I detach the primary (eth0) network interface when the instance is running or stopped? A. Yes B. No C. Depends on the state of the interface at the time

B. No

Can we attach an EBS volume to more than one EC2 instance at the same time? A. Yes B. No C. Only EC2-optimized EBS volumes. D. Only in read mode.

B. No

Does Amazon RDS allow direct host access via Telnet, Secure Shell (SSH), or Windows Remote Desktop Connection? A. Yes B. No C. Depends on if it is in VPC or not

B. No

If an Amazon EBS volume is the root device of an instance, can I detach it without stopping the instance? A. Yes but only if Windows instance B. No C. Yes D. Yes but only if a Linux instance

B. No

Is Federated Storage Engine currently supported by Amazon RDS for MySQL? A. Only for Oracle RDS instances B. No C. Yes D. Only in VPC

B. No

What is the minimum charge for the data transferred between Amazon RDS and Amazon EC2 Instances in the same Availability Zone? A. USD 0.10 per GB B. No charge. It is free. C. USD 0.02 per GB D. USD 0.01 per GB

B. No charge. It is free.

What are the different types of virtualization available on EC2? A. Pseudo-Virtual (PV) & Hardware Virtual Module (HSM) B. Para-Virtual (PV) & Hardware Virtual Machine (HVM) C. Pseudo-Virtual (PV) & Hardware Virtual Machine (HVM) D. Para-Virtual (PV) & Hardware Virtual Module (HSM)

B. Para-Virtual (PV) & Hardware Virtual Machine (HVM)

You require the ability to analyze a customer's clickstream data on a website so they can do behavioral analysis. Your customer needs to know what sequence of pages and ads their customers clicked on. This data will be used in real time to modify the page layouts as customers click through the site, to increase stickiness and advertising click-through. Which option meets the requirements for capturing and analyzing this data? A. Log clicks in weblogs by URL, store to Amazon S3, and then analyze with Elastic MapReduce B. Push web clicks by session to Amazon Kinesis and analyze behavior using Kinesis workers C. Write click events directly to Amazon Redshift and then analyze with SQL D. Publish web clicks by session to an Amazon SQS queue, then periodically drain these events to Amazon RDS and analyze with SQL

B. Push web clicks by session to Amazon Kinesis and analyze behavior using Kinesis workers
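A minimal sketch of the producer side, assuming a hypothetical stream named "clickstream". Keying the partition on the session ID keeps all of one session's events on the same shard, in order.

```python
import json
import boto3

kinesis = boto3.client("kinesis")

def record_click(session_id, page, ad_id=None):
    """Push each click into the stream, keyed by session."""
    kinesis.put_record(
        StreamName="clickstream",               # placeholder stream name
        Data=json.dumps({"session": session_id, "page": page, "ad": ad_id}).encode("utf-8"),
        PartitionKey=session_id,
    )
```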

You are designing a photo-sharing mobile app. The application will store all pictures in a single Amazon S3 bucket. Users will upload pictures from their mobile device directly to Amazon S3 and will be able to view and download their own pictures directly from Amazon S3. You want to configure security to handle potentially millions of users in the most secure manner possible. What should your server-side application do when a new user registers on the photo-sharing mobile application? A. Create a set of long-term credentials using AWS Security Token Service with appropriate permissions. Store these credentials in the mobile app and use them to access Amazon S3. B. Record the user's information in Amazon RDS and create a role in IAM with appropriate permissions. When the user uses their mobile app, create temporary credentials using the AWS Security Token Service 'AssumeRole' function. Store these credentials in the mobile app's memory and use them to access Amazon S3. Generate new credentials the next time the user runs the mobile app. C. Record the user's information in Amazon DynamoDB. When the user uses their mobile app, create temporary credentials using AWS Security Token Service with appropriate permissions. Store these credentials in the mobile app's memory and use them to access Amazon S3. Generate new credentials the next time the user runs the mobile app. D. Create an IAM user. Assign appropriate permissions to the IAM user. Generate an access key and secret key for the IAM user, store them in the mobile app and use these credentials to access Amazon S3. E. Create an IAM user. Update the bucket policy with appropriate permissions for the IAM user. Generate an access key and secret key for the IAM user, store them in the mobile app and use these credentials to access Amazon S3.

B. Record the user's information in Amazon RDS and create a role in IAM with appropriate permissions. When the user uses their mobile app, create temporary credentials using the AWS Security Token Service 'AssumeRole' function. Store these credentials in the mobile app's memory and use them to access Amazon S3. Generate new credentials the next time the user runs the mobile app.
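A minimal sketch of the AssumeRole step the answer describes; the role ARN and session name are made-up placeholders. The returned credentials expire automatically, which is what makes this safer than baking keys into the app.

```python
import boto3

sts = boto3.client("sts")

# The server-side application vends temporary credentials per user.
resp = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/MobileAppS3Access",  # placeholder
    RoleSessionName="user-42",                  # ties the session to a user
    DurationSeconds=3600,
)
creds = resp["Credentials"]  # expire automatically after an hour

s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```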

What does Amazon RDS stand for? A. Regional Data Server. B. Relational Database Service. C. Nothing. D. Regional Database Service.

B. Relational Database Service.

Select the most correct answer: The device name /dev/sda1 (within Amazon EC2) is _____ A. Possible for EBS volumes B. Reserved for the root device C. Recommended for EBS volumes D. Recommended for instance store volumes

B. Reserved for the root device

You have an EC2 instance which needs to find out both its private IP address and its public IP address. To do this, you need to: A. Run IPCONFIG (Windows) or IFCONFIG (Linux) B. Retrieve the instance Metadata from http://169.254.169.254/latest/meta-data/ C. Retrieve the instance Userdata from http://169.254.169.254/latest/meta-data/ D. Use the following command: AWS EC2 displayIP

B. Retrieve the instance Metadata from http://169.254.169.254/latest/meta-data/
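A minimal stdlib-only sketch of querying the metadata service from the instance itself. Note: this assumes IMDSv1; newer instances may enforce IMDSv2, which additionally requires a session token header.

```python
import urllib.request

BASE = "http://169.254.169.254/latest/meta-data/"

def metadata(path):
    """Fetch one metadata value (IMDSv1 sketch)."""
    with urllib.request.urlopen(BASE + path, timeout=2) as resp:
        return resp.read().decode()

print(metadata("local-ipv4"))   # private IP address
print(metadata("public-ipv4"))  # public IP address (if one is assigned)
```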

You work for a major news network in Europe. They have just released a new app which allows users to report on events as and when they happen using their mobile phone. Users are able to upload pictures from the app and then other users will be able to view these pics. Your organization expects this app to grow very quickly, essentially doubling its user base every month. The app uses S3 to store the media and you are expecting sudden and large increases in traffic to S3 when a major news event takes place (as people will be uploading content in huge numbers). You need to keep your storage costs to a minimum, however, and it does not matter if some objects are lost. Which storage media should you use to keep costs as low as possible? A. S3 - Infrequently Accessed Storage. B. S3 - Reduced Redundancy Storage (RRS). C. Glacier. D. S3 - Provisioned IOPS.

B. S3 - Reduced Redundancy Storage (RRS).

Select the correct set of steps for exposing the snapshot only to specific AWS accounts: A. Select Public for all the accounts and check mark those accounts with whom you want to expose the snapshots and click Save. B. Select Private, enter the IDs of those AWS accounts, and click Save. C. Select Public, enter the IDs of those AWS accounts, and click Save. D. Select Public, mark the IDs of those AWS accounts as private, and click Save.

B. Select Private, enter the IDs of those AWS accounts, and click Save.

Your customer is willing to consolidate their log streams (access logs, application logs, security logs, etc.) in one single system. Once consolidated, the customer wants to analyze these logs in real time based on heuristics. From time to time, the customer needs to validate heuristics, which requires going back to data samples extracted from the last 12 hours. What is the best approach to meet your customer's requirements? A. Send all the log events to Amazon SQS. Setup an Auto Scaling group of EC2 servers to consume the logs and apply the heuristics. B. Send all the log events to Amazon Kinesis; develop a client process to apply heuristics on the logs C. Configure Amazon CloudTrail to receive custom logs, use EMR to apply heuristics on the logs D. Setup an Auto Scaling group of EC2 syslogd servers, store the logs on S3, use EMR to apply heuristics on the logs

B. Send all the log events to Amazon Kinesis; develop a client process to apply heuristics on the logs

You are developing a new mobile application and are considering storing user preferences in AWS. This would provide a more uniform cross-device experience to users using multiple mobile devices to access the application. The preference data for each user is estimated to be 50 KB in size. Additionally, 5 million customers are expected to use the application on a regular basis. The solution needs to be cost-effective, highly available, scalable and secure. How would you design a solution to meet the above requirements? A. Setup an RDS MySQL instance in 2 availability zones to store the user preference data. Deploy a public facing application on a server in front of the database to manage security and access credentials B. Setup a DynamoDB table with an item for each user having the necessary attributes to hold the user preferences. The mobile application will query the user preferences directly from the DynamoDB table. Utilize STS, Web Identity Federation, and DynamoDB Fine-Grained Access Control to authenticate and authorize access. C. Setup an RDS MySQL instance with multiple read replicas in 2 availability zones to store the user preference data. The mobile application will query the user preferences from the read replicas. Leverage the MySQL user management and access privilege system to manage security and access credentials. D. Store the user preference data in S3. Setup a DynamoDB table with an item for each user and an item attribute pointing to the user's S3 object. The mobile application will retrieve the S3 URL from DynamoDB and then access the S3 object directly. Utilize STS, Web Identity Federation, and S3 ACLs to authenticate and authorize access.

B. Setup a DynamoDB table with an item for each user having the necessary attributes to hold the user preferences. The mobile application will query the user preferences directly from the DynamoDB table. Utilize STS, Web Identity Federation, and DynamoDB Fine-Grained Access Control to authenticate and authorize access.

What does Amazon SES stand for? A. Simple Elastic Server. B. Simple Email Service. C. Software Email Solution. D. Software Enabled Server.

B. Simple Email Service.

What does Amazon SWF stand for? A. Simple Web Flow B. Simple Work Flow C. Simple Wireless Forms D. Simple Web Form

B. Simple Work Flow

You have a video transcoding application running on Amazon EC2. Each instance polls a queue to find out which video should be transcoded, and then runs a transcoding process. If this process is interrupted, the video will be transcoded by another instance based on the queuing system. You have a large backlog of videos which need to be transcoded and would like to reduce this backlog by adding more instances. You will need these instances only until the backlog is reduced. Which type of Amazon EC2 instances should you use to reduce the backlog in the most cost efficient way? A. Reserved instances B. Spot instances C. Dedicated instances D. On-demand instances

B. Spot Instances

You are a solutions architect working for a company that specializes in ingesting large data feeds (using Kinesis) and then analyzing these feeds using Elastic Map Reduce (EMR). The results are then stored on a custom MySQL database which is hosted on an EC2 instance which has 3 volumes: the root/boot volume, and then 2 additional volumes which are striped into a RAID 0. Your company recently had an outage and lost some key data and has since decided that they will need to run nightly backups. Your application is only used during office hours, so you can afford to have some downtime in the middle of the night if required. You decide to take a snapshot of all three volumes every 24 hours. In what manner should you do this? A. Take a snapshot of each volume independently, while the EC2 instance is running. B. Stop the EC2 instance and take a snapshot of each volume independently. Once the snapshots are complete, start the EC2 instance and ensure that all relevant volumes are remounted. C. Add two additional volumes to the existing RAID 0 volume and mirror these volumes, creating a RAID 10. Take a snapshot of only the two new volumes. D. Create a read replica of the existing EC2 instance and then take your snapshots from the read replica and not the live EC2 instance.

B. Stop the EC2 instance and take a snapshot of each volume independently. Once the snapshots are complete, start the EC2 instance and ensure that all relevant volumes are remounted.

Before I delete an EBS volume, what can I do if I want to recreate the volume later? A. Create a copy of the EBS volume (not a snapshot) B. Store a snapshot of the volume C. Download the content to an EC2 instance D. Back up the data in to a physical disk

B. Store a snapshot of the volume

You are designing a site for a new start-up which generates cartoon images for people automatically. Customers will log on to the site and upload an image, which is stored in S3. The application then passes a job to AWS SQS, and a fleet of EC2 instances poll the queue to receive new processing jobs. These EC2 instances will then turn the picture into a cartoon and will then need to store the processed job somewhere. Users will typically download the image once (immediately), and then never download the image again. What is the most commercially feasible method to store the processed images? A. Rather than use S3, store the images inside a BLOB on RDS with Multi-AZ configured for redundancy. B. Store the images on S3 RRS, and create a lifecycle policy to delete the image after 24 hours. C. Store the images on Glacier instead of S3. D. Use Elastic Block Store volumes to store the images.

B. Store the images on S3 RRS, and create a lifecycle policy to delete the image after 24 hours.
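For the lifecycle half of this answer, here is a minimal boto3 sketch (the bucket name and rule ID are placeholders). Lifecycle expiration is granular to whole days, so 1 day is the closest match to "after 24 hours".

```python
import boto3

s3 = boto3.client("s3")

# Expire every object in the bucket one day after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="cartoon-output",                    # placeholder bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "expire-processed-images",    # placeholder rule ID
            "Filter": {"Prefix": ""},           # apply to every object
            "Status": "Enabled",
            "Expiration": {"Days": 1},
        }]
    },
)
```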

If you add a tag that has the same key as an existing tag on a DB Instance, the new value overwrites the old value. A. FALSE B. TRUE

B. TRUE

True or False: When you perform a restore operation to a point in time or from a DB Snapshot, a new DB Instance is created with a new endpoint. A. FALSE B. TRUE

B. TRUE

When you add a rule to a DB security group, you do not need to specify a port number or protocol. A. Depends on the RDBMS used B. TRUE C. FALSE

B. TRUE

When you perform a restore operation to a point in time or from a DB Snapshot, a new DB Instance is created with a new endpoint. A. FALSE B. TRUE

B. TRUE

When you use the AWS Management Console to delete an IAM user, IAM also deletes any signing certificates and any access keys belonging to the user. A. FALSE B. TRUE

B. TRUE

Without IAM, you cannot control the tasks a particular user or system can do and what AWS resources they might use. A. FALSE B. TRUE

B. TRUE

You are charged for the IOPS and storage whether or not you use them in a given month. A. FALSE B. TRUE

B. TRUE

By default, what happens to ENIs that are automatically created and attached to EC2 instances when the attached instance terminates? A. Remain as is B. Terminate C. Hibernate D. Pause

B. Terminate

A corporate web application is deployed within an Amazon Virtual Private Cloud (VPC) and is connected to the corporate data center via an IPsec VPN. The application must authenticate against the on-premises LDAP server. After authentication, each logged-in user can only access an Amazon Simple Storage Service (S3) keyspace specific to that user. Which two approaches can satisfy these objectives? (Choose two.) A. Develop an identity broker that authenticates against the IAM Security Token Service to assume an IAM role in order to get temporary AWS security credentials. The application calls the identity broker to get AWS temporary security credentials with access to the appropriate S3 bucket. B. The application authenticates against LDAP and retrieves the name of an IAM role associated with the user. The application then calls the IAM Security Token Service to assume that IAM role. The application can use the temporary credentials to access the appropriate S3 bucket. C. Develop an identity broker that authenticates against LDAP and then calls IAM Security Token Service to get IAM federated user credentials. The application calls the identity broker to get IAM federated user credentials with access to the appropriate S3 bucket. D. The application authenticates against LDAP; the application then calls the AWS Identity and Access Management (IAM) Security service to log in to IAM using the LDAP credentials; the application can use the IAM temporary credentials to access the appropriate S3 bucket. E. The application authenticates against the IAM Security Token Service using the LDAP credentials; the application uses those temporary AWS security credentials to access the appropriate S3 bucket.

B. The application authenticates against LDAP and retrieves the name of an IAM role associated with the user. The application then calls the IAM Security Token Service to assume that IAM role. The application can use the temporary credentials to access the appropriate S3 bucket. C. Develop an identity broker that authenticates against LDAP and then calls IAM Security Token Service to get IAM federated user credentials. The application calls the identity broker to get IAM federated user credentials with access to the appropriate S3 bucket.

An instance is launched into a VPC subnet with the network ACL configured to allow all inbound traffic and deny all outbound traffic. The instance's security group is configured to allow SSH from any IP address and deny all outbound traffic. What changes need to be made to allow SSH access to the instance? A. The outbound security group needs to be modified to allow outbound traffic. B. The outbound network ACL needs to be modified to allow outbound traffic. C. Nothing, it can be accessed from any IP address using SSH. D. Both the outbound security group and outbound network ACL need to be modified to allow outbound traffic.

B. The outbound network ACL needs to be modified to allow outbound traffic.

You have an application running on an Amazon Elastic Compute Cloud instance that uploads 5 GB video objects to Amazon Simple Storage Service (S3). Video uploads are taking longer than expected, resulting in poor application performance. Which method will help improve the performance of your application? A. Enable enhanced networking B. Use Amazon S3 multipart upload C. Leveraging Amazon CloudFront, use the HTTP POST method to reduce latency. D. Use Amazon Elastic Block Store Provisioned IOPS and use an Amazon EBS-optimized instance

B. Use Amazon S3 multipart upload
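A minimal boto3 sketch of a multipart upload; the file, bucket, and key names are placeholders. Parts upload in parallel and failed parts are retried individually instead of restarting the whole 5 GB object.

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Switch to multipart above 64 MB and upload chunks concurrently.
config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,
    multipart_chunksize=64 * 1024 * 1024,
    max_concurrency=10,
)
s3.upload_file("video.mp4", "video-uploads", "video.mp4", Config=config)  # placeholders
```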

Your company produces customer-commissioned one-of-a-kind skiing helmets, combining high fashion with custom technical enhancements. Customers can show off their individuality on the ski slopes and have access to heads-up displays, GPS, rear-view cams, and any other technical innovation they wish to embed in the helmet. The current manufacturing process is data rich and complex, including assessments to ensure that the custom electronics and materials used to assemble the helmets are to the highest standards. Assessments are a mixture of human and automated assessments. You need to add a new set of assessments to model the failure modes of the custom electronics using GPUs with CUDA, across a cluster of servers with low-latency networking. What architecture would allow you to automate the existing process using a hybrid approach and ensure that the architecture can support the evolution of processes over time? A. Use AWS Data Pipeline to manage movement of data & meta-data and assessments. Use an Auto Scaling group of G2 instances in a placement group. B. Use Amazon Simple Workflow (SWF) to manage assessments and movement of data & meta-data. Use an Auto Scaling group of G2 instances in a placement group. C. Use Amazon Simple Workflow (SWF) to manage assessments and movement of data & meta-data. Use an Auto Scaling group of C3 instances with SR-IOV (Single Root I/O Virtualization). D. Use AWS Data Pipeline to manage movement of data & meta-data and assessments. Use an Auto Scaling group of C3 instances with SR-IOV (Single Root I/O Virtualization).

B. Use Amazon Simple Workflow (SWF) to manage assessments and movement of data & meta-data. Use an Auto Scaling group of G2 instances in a placement group.

A company needs to deploy virtual desktops to its customers in a virtual private cloud, leveraging existing security controls. Which set of AWS services and features will meet the company's requirements? A. Virtual Private Network connection, AWS Directory Services, and ClassicLink B. Virtual Private Network connection, AWS Directory Services, and Amazon WorkSpaces C. AWS Directory Service, Amazon WorkSpaces, and AWS Identity and Access Management D. Amazon Elastic Compute Cloud, and AWS Identity and Access Management

B. Virtual Private Network connection, AWS Directory Services, and Amazon WorkSpaces

After an Amazon EC2-VPC instance is launched, can I change the VPC security groups it belongs to? A. No B. Yes C. Only if you are the root user D. Only if the tag "VPC_Change_Group" is true

B. Yes

After an EC2-VPC instance is launched, can I change the VPC security groups it belongs to? A. Only if the tag "VPC_Change_Group" is true B. Yes C. No D. Only if the tag "VPC Change Group" is true

B. Yes

Can I encrypt connections between my application and my DB Instance using SSL? A. No B. Yes C. Only in VPC D. Only in certain regions

B. Yes

Do the Amazon EBS volumes persist independently from the running life of an Amazon EC2 instance? A. Only if instructed to when created B. Yes C. No

B. Yes

If I modify a DB Instance or the DB parameter group associated with the instance, should I reboot the instance for the changes to take effect? A. No B. Yes

B. Yes

Will I be charged if the DB instance is idle? A. No B. Yes C. Only is running in GovCloud D. Only if running in VPC

B. Yes

Will my standby RDS instance be in the same Region as my primary? A. Only for Oracle RDS types B. Yes C. Only if configured at launch D. No

B. Yes

Is it possible to access your EBS snapshots? A. Yes, through the Amazon S3 APIs. B. Yes, through the Amazon EC2 APIs. C. No, EBS snapshots cannot be accessed; they can only be used to create a new EBS volume. D. EBS doesn't provide snapshots.

B. Yes, through the Amazon EC2 APIs.
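A minimal boto3 sketch of the point this answer makes: snapshots are stored in S3 behind the scenes, but you reach them only through EC2 API calls, never through the S3 API. The volume ID is a placeholder.

```python
import boto3

ec2 = boto3.client("ec2")

# Create a snapshot of a hypothetical volume, then list snapshots we own.
snap = ec2.create_snapshot(VolumeId="vol-0123456789abcdef0",  # placeholder
                           Description="nightly backup")
print(snap["SnapshotId"])

for s in ec2.describe_snapshots(OwnerIds=["self"])["Snapshots"]:
    print(s["SnapshotId"], s["State"], s["StartTime"])
```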

Which statements are true about Amazon Route 53? (Choose 2 answers) A. Amazon Route 53 is a region-level service B. You can register your domain name C. Amazon Route 53 can perform health checks and failovers to a backup site in the event of a primary site failure D. Amazon Route 53 only supports Latency-based routing

B. You can register your domain name C. Amazon Route 53 can perform health checks and failovers to a backup site in the event of a primary site failure

A startup company hired you to help them build a mobile application, that will ultimately store billions of images and videos in S3. The company is lean on funding, and wants to minimize operational costs, however, they have an aggressive marketing plan, and expect to double their current installation base every six months. Due to the nature of their business, they are expecting sudden and large increases in traffic to and from S3, and need to ensure that it can handle the performance needs of their application. What other information must you gather from this customer in order to determine whether S3 is the right option? A. You must know how many customers the company has today, because this is critical in understanding what their customer base will be in two years. B. You must find out the total number of requests per second at peak usage. C. You must know the size of the individual objects being written to S3, in order to properly design the key namespace. D. In order to build the key namespace correctly, you must understand the total amount of storage needs for each S3 bucket.

B. You must find out the total number of requests per second at peak usage.

You work in the genomics industry and you process large amounts of genomic data using a nightly Elastic Map Reduce (EMR) job. This job processes a single 3 TB file which is stored on S3. The EMR job runs on 3 on-demand core nodes and four on-demand task nodes. The EMR job is now taking longer than anticipated and you have been asked to advise how to reduce the completion time. A. Use four Spot Instances for the task nodes rather than four On-Demand instances. B. You should reduce the input split size in the MapReduce job configuration and then adjust the number of simultaneous mapper tasks so that more tasks can be processed at once. C. Store the file on Elastic File System (EFS) instead of S3 and then mount EFS as an independent volume for your core nodes. D. Configure an independent VPC in which to run the EMR jobs and then mount EFS as an independent volume for your core nodes. E. Enable termination protection for the job flow.

B. You should reduce the input split size in the MapReduce job configuration and then adjust the number of simultaneous mapper tasks so that more tasks can be processed at once.

The location of instances is _____ A. Regional B. based on Availability Zone C. Global

B. based on Availability Zone

Security Groups can't _____. A. be nested more than 3 levels B. be nested at all C. be nested more than 4 levels D. be nested more than 2 levels

B. be nested at all

Amazon S3 doesn't automatically give a user who creates a _____ permission to perform other actions on that bucket or object. Therefore, in your IAM policies, you must explicitly give users permission to use the Amazon S3 resources they create. A. file B. bucket or object C. bucket or file D. object or file

B. bucket or object

Amazon Glacier is designed for: (Choose 2 answers) A. active database storage. B. infrequently accessed data. C. data archives. D. frequently accessed data. E. cached session data.

B. infrequently accessed data. C. data archives.

What is the command line instruction for running the remote desktop client in Windows? A. desk.cpl B. mstsc

B. mstsc

Every user you create in the IAM system starts with ______. A. full permissions B. no permissions C. partial permissions

B. no permissions

If your DB instance runs out of storage space or file system resources, its status will change to _____ and your DB Instance will no longer be available. A. storage-overflow B. storage-full C. storage-exceed D. storage-overage

B. storage-full

*Your resources were running fine in AWS, and all of a sudden you notice that something has changed. Your cloud security team tells you that some API call has changed the state of your resources that were running fine earlier. How do you track down who made the change?* * By writing a Lambda function, you can find who has changed what * By using AWS CloudTrail * By using Amazon CloudWatch Events * By using AWS Trusted Advisor

*By using AWS CloudTrail* Using AWS CloudTrail, you can find out who has changed what via API.
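A minimal boto3 sketch of querying CloudTrail for recent management events; the event name used here is just an example.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Look up recent events to see which identity called a given API.
events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName",
                       "AttributeValue": "TerminateInstances"}],  # example
    MaxResults=10,
)
for e in events["Events"]:
    print(e["EventTime"], e.get("Username"), e["EventName"])
```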

In the 'Detailed' monitoring data available for your Amazon EBS volumes, Provisioned IOPS volumes automatically send _____ minute metrics to Amazon CloudWatch. A. 5 B. 2 C. 1 D. 3

C. 1

You must assign each server to at least _____ security group? A. 4 B. 3 C. 1 D. 2

C. 1

What is the maximum number of groups an IAM user can be a member of? A. 20 B. 5 C. 10 D. 15

C. 10

What is the default per account limit of Elastic IPs? A. 1 B. 3 C. 5 D. 0

C. 5

What is Amazon Glacier? A. It's a security tool that allows you to "freeze" an EC2 instance and perform computer forensics on it. B. A security tool that allows you to "freeze" an EBS volume and perform computer forensics on it. C. A low-cost storage service that provides secure and durable storage for data archiving and backup. D. You mean Amazon "Iceberg": it's a low-cost storage service.

C. A low-cost storage service that provides secure and durable storage for data archiving and backup.

What is Amazon Glacier? A. There is no such thing B. A security tool that allows "freezing" an EBS volume to perform computer forensics on it. C. A low-cost storage service that provides secure and durable storage for data archiving and backup. D. A security tool that allows "freezing" an EC2 instance to perform computer forensics on it.

C. A low-cost storage service that provides secure and durable storage for data archiving and backup.

What does Amazon ElastiCache provide? A. A service by this name doesn't exist. Perhaps you mean Amazon CloudCache. B. A virtual server with a huge amount of memory. C. A managed In-memory cache service. D. An Amazon EC2 instance with the Memcached software already pre-installed.

C. A managed In-memory cache service.

What does Amazon Route53 provide? A. A global Content Delivery Network. B. None of these. C. A scalable Domain Name System. D. An SSH endpoint for Amazon EC2.

C. A scalable Domain Name System.

You are building an automated transcription service in which Amazon EC2 worker instances process an uploaded audio file and generate a text file. You must store both of these files in the same durable storage until the text file is retrieved. You do not know what the storage capacity requirements are. Which storage option is both cost-efficient and scalable? A. Multiple Amazon EBS volume with snapshots B. A single Amazon Glacier vault C. A single Amazon S3 bucket D. Multiple instance stores

C. A single Amazon S3 bucket

Which of the following are valid statements about Amazon S3? (Choose two.) A. S3 provides read-after-write consistency for any type of PUT or DELETE B. Consistency is not guaranteed for any type of PUT or DELETE C. A successful response to a PUT request only occurs when a complete object is saved. D. Partially saved objects are immediately readable with a GET after an overwrite PUT. E. S3 provides eventual consistency for overwrite PUTS and DELETES.

C. A successful response to a PUT request only occurs when a complete object is saved. E. S3 provides eventual consistency for overwrite PUTS and DELETES.

What does Amazon CloudFormation provide? A. The ability to set up Auto Scaling for Amazon EC2 instances. B. None of these. C. Template-based resource creation for Amazon Web Services. D. A template to map network resources for Amazon Web Services.

C. Template-based resource creation for Amazon Web Services.

The _____ service is targeted at organizations with multiple users or systems that use AWS products such as Amazon EC2, Amazon SimpleDB, and the AWS Management Console. A. Amazon RDS B. AWS Integrity Management C. AWS Identity and Access Management D. Amazon EMR

C. AWS Identity and Access Management

What can I access by visiting the URL: http://status.aws.amazon.com/ ? A. Amazon Cloud Watch B. Status of the Amazon RDS DB C. AWS Service Health Dashboard D. AWS Cloud Monitor

C. AWS Service Health Dashboard

You are hosting a MySQL database on the root volume of an EC2 instance. The database is using a large amount of IOPS and you need to increase the IOPS available to it. What should you do? A. Migrate the database to an S3 bucket. B. Migrate the database to Glacier. C. Add 4 additional EBS SSD volumes and create a RAID 10 using these volumes. D. Use CloudFront to cache the database.

C. Add 4 additional EBS SSD volumes and create a RAID 10 using these volumes.

You've been hired to enhance the overall security posture for a very large e-commerce site. They have a well-architected multi-tier application running in a VPC that uses ELBs in front of both the web and the app tier, with static assets served directly from S3. They are using a combination of RDS and DynamoDB for their dynamic data, and then archiving nightly into S3 for further processing with EMR. They are concerned because they found questionable log entries and suspect someone is attempting to gain unauthorized access. Which approach provides a cost-effective, scalable mitigation to this kind of attack? A. Recommend that they lease space at a Direct Connect partner location and establish a 1G Direct Connect connection to their VPC. They would then establish Internet connectivity into their space, filter the traffic in a hardware Web Application Firewall (WAF), and then pass the traffic through the Direct Connect connection into their application running in their VPC. B. Add previously identified hostile source IPs as an explicit INBOUND DENY NACL to the web tier subnet. C. Add a WAF tier by creating a new ELB and an Auto Scaling group of EC2 instances running a host-based WAF. They would redirect Route 53 to resolve to the new WAF tier ELB. The WAF tier would then pass the traffic to the current web tier. The web tier Security Groups would be updated to only allow traffic from the WAF tier Security Group. D. Remove all but TLS 1.2 from the web tier ELB and enable Advanced Protocol Filtering. This will enable the ELB itself to perform WAF functionality.

C. Add a WAF tier by creating a new ELB and an Auto Scaling group of EC2 instances running a host-based WAF. They would redirect Route 53 to resolve to the new WAF tier ELB. The WAF tier would then pass the traffic to the current web tier. The web tier Security Groups would be updated to only allow traffic from the WAF tier Security Group.

You are creating your own relational database on an EC2 instance and you need to maximize IOPS performance. What can you do to achieve this goal? A. Add a single additional volume to the EC2 instance with provisioned IOPS. B. Create the database on an S3 bucket. C. Add multiple additional volumes with provisioned IOPS and then create a RAID 0 stripe across those volumes. D. Attach the single volume to multiple EC2 instances so as to maximize performance.

C. Add multiple additional volumes with provisioned IOPS and then create a RAID 0 stripe across those volumes.

What are the initial settings of a user-created security group? A. Allow all inbound traffic and Allow no outbound traffic B. Allow no inbound traffic and Allow no outbound traffic C. Allow no inbound traffic and Allow all outbound traffic D. Allow all inbound traffic and Allow all outbound traffic

C. Allow no inbound traffic and Allow all outbound traffic

Which Amazon Storage behaves like raw, unformatted, external block devices that you can attach to your instances? A. None of these. B. Amazon Instance Storage C. Amazon EBS D. All of these

C. Amazon EBS

You are deploying an application to collect votes for a very popular television show. Millions of users will submit votes using mobile devices. The votes must be collected into a durable, scalable, and highly available data store for real-time public tabulation. Which service should you use? A. Amazon DynamoDB B. Amazon Redshift C. Amazon Kinesis D. Amazon Simple Queue Service

C. Amazon Kinesis (the key here is "real-time tabulation" and "millions of users")

Fill in the blanks: Resources that are created in AWS are identified by a unique identifier called an _____. A. Amazon Resource Number B. Amazon Resource Name tag C. Amazon Resource Name D. Amazon Resource Namespace

C. Amazon Resource Name

A t2.medium EC2 instance type must be launched with what type of Amazon Machine Image (AMI)? A. An Instance store Hardware Virtual Machine AMI B. An Instance store Paravirtual AMI C. An Amazon EBS-backed Hardware Virtual Machine AMI D. An Amazon EBS-backed Paravirtual AMI

C. An Amazon EBS-backed Hardware Virtual Machine AMI

What action is required to establish a VPC VPN connection between an on-premises data center and an Amazon VPC virtual private gateway? A. Modify the main route table to allow traffic to a network address translation instance. B. Use a dedicated network address translation instance in the public subnet. C. Assign a static Internet-routable IP address to an Amazon VPC customer gateway. D. Establish a dedicated networking connection using AWS Direct Connect.

C. Assign a static Internet-routable IP address to an Amazon VPC customer gateway.
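A sketch of registering that static IP as the customer gateway, assuming a hypothetical public IP and ASN:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # hypothetical region

# The customer gateway is defined by the static, Internet-routable IP
# of the on-premises VPN device.
ec2.create_customer_gateway(
    Type="ipsec.1",
    PublicIp="203.0.113.12",  # hypothetical static public IP
    BgpAsn=65000,             # private ASN used for dynamic routing
)
```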

You have a VPC with a public subnet. Three EC2 instances currently running inside the subnet can successfully communicate with other hosts on the Internet. You launch a fourth instance in the same subnet, using the same AMI and security group configuration you used for the others, but find that this instance cannot be accessed from the Internet. What should you do to enable Internet access? A. Deploy a NAT instance into the public subnet. B. Modify the routing table for the public subnet. C. Assign an elastic IP address to the fourth instance. D. Configure a publicly routable IP address in the host OS of the fourth instance.

C. Assign an elastic IP address to the fourth instance.
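A sketch of the fix, assuming a hypothetical instance ID: allocate an Elastic IP in the VPC and associate it with the fourth instance.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # hypothetical region

allocation = ec2.allocate_address(Domain="vpc")
ec2.associate_address(
    InstanceId="i-0123456789abcdef0",        # hypothetical instance ID
    AllocationId=allocation["AllocationId"],
)
```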

Your firm has uploaded a large amount of aerial image data to S3. In the past, in your on-premises environment, you used a dedicated group of servers to batch process this data and used RabbitMQ, an open source messaging system, to get job information to the servers. Once processed, the data would go to tape and be shipped offsite. Your manager told you to stay with the current design, and leverage AWS archival storage and messaging services to minimize cost. Which is correct? A. Use SQS for passing job messages; use CloudWatch alarms to terminate EC2 worker instances when they become idle. Once data is processed, change the storage class of the S3 objects to Reduced Redundancy Storage. B. Setup Auto-Scaled workers triggered by queue depth that use spot instances to process messages in SQS. Once data is processed, change the storage class of the S3 objects to Reduced Redundancy Storage. C. Setup Auto-Scaled workers triggered by queue depth that use spot instances to process messages in SQS. Once data is processed, change the storage class of the S3 objects to Glacier. D. Use SNS to pass job messages; use CloudWatch alarms to terminate spot worker instances when they become idle. Once data is processed, change the storage class of the S3 object to Glacier.

C. Setup Auto-Scaled workers triggered by queue depth that use spot instances to process messages in SQS. Once data is processed, change the storage class of the S3 objects to Glacier.

You are appointed as your company's Chief Security Officer and you want to be able to track all changes made to your AWS environment, by all users and at all times, in all regions. What AWS service should you use to achieve this? A. CloudAudit B. CloudWatch C. CloudTrail D. CloudDetective

C. CloudTrail
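A sketch of turning this on with boto3, assuming hypothetical trail and bucket names (the bucket policy must already allow CloudTrail to write to it); a single multi-region trail captures API activity by all users in all regions:

```python
import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

cloudtrail.create_trail(
    Name="account-audit-trail",           # hypothetical trail name
    S3BucketName="my-cloudtrail-bucket",  # hypothetical bucket
    IsMultiRegionTrail=True,              # record events in all regions
)
cloudtrail.start_logging(Name="account-audit-trail")
```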

An AWS customer is deploying an application that is composed of an Auto Scaling group of EC2 instances. The customer's security policy requires that every outbound connection from these instances to any other service within the customer's Virtual Private Cloud must be authenticated using a unique X.509 certificate that contains the specific instance-id. In addition, an X.509 certificate must be signed by the customer's key management service in order to be trusted for authentication. Which of the following configurations will support these requirements? A. Configure an IAM Role that grants access to an Amazon S3 object containing a signed certificate and configure the Auto Scaling group to launch instances with this role. Have the instances bootstrap get the certificate from Amazon S3 upon first boot. B. Embed a certificate into the Amazon Machine Image that is used by the Auto Scaling group. Have the launched instances generate a certificate signature request with the instance's assigned instance-id to the key management service for signature. C. Configure the Auto Scaling group to send an SNS notification of the launch of a new instance to the trusted key management service. Have the key management service generate a signed certificate and send it directly to the newly launched instance. D. Configure the launched instances to generate a new certificate upon first boot. Have the key management service poll the Auto Scaling group for associated instances and send new instances a certificate signature that contains the specific instance-id.

C. Configure the Auto Scaling group to send an SNS notification of the launch of a new instance to the trusted key management service. Have the Key management service generate a signed certificate and send it directly to the newly launched instance.

You are looking to migrate your Development (Dev) and Test environments to AWS. You have decided to use separate AWS accounts to host each environment. You plan to link each account's bill to a Master AWS account using Consolidated Billing. To make sure you keep within budget, you would like to implement a way for administrators in the Master account to have access to stop, delete and/or terminate resources in both the Dev and Test accounts. Identify which option will allow you to achieve this goal. A. Create IAM users in the Master account with full Admin permissions. Create cross-account roles in the Dev and Test accounts that grant the Master account access to the resources in the account by inheriting permissions from the Master account. B. Create IAM users and a cross-account role in the Master account that grants full Admin permissions to the Dev and Test accounts. C. Create IAM users in the Master account. Create cross-account roles in the Dev and Test accounts that have full Admin permissions and grant the Master account access. D. Link the accounts using Consolidated Billing. This will give IAM users in the Master account access to resources in the Dev and Test accounts.

C. Create IAM users in the Master account. Create cross-account roles in the Dev and Test accounts that have full Admin permissions and grant the Master account access.
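A sketch of how a Master-account admin would use such a role, assuming a hypothetical role ARN in the Dev account: assume the role via STS, then call EC2 with the temporary credentials.

```python
import boto3

sts = boto3.client("sts")

creds = sts.assume_role(
    RoleArn="arn:aws:iam::111111111111:role/MasterAdminAccess",  # hypothetical Dev-account role
    RoleSessionName="master-admin",
)["Credentials"]

# Temporary credentials scoped to the Dev account's cross-account role.
ec2 = boto3.client(
    "ec2",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
ec2.stop_instances(InstanceIds=["i-0123456789abcdef0"])  # hypothetical instance
```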

An AWS customer runs a public blogging website. The site users upload two million blog entries a month. The average blog entry size is 200 KB. The access rate to blog entries drops to negligible 6 months after publication, and users rarely access a blog entry 1 year after publication. Additionally, blog entries have a high update rate during the first 3 months following publication, and this drops to no updates after 6 months. The customer wants to use CloudFront to improve their users' load times. Which of the following recommendations would you make to the customer? A. Duplicate entries into two different buckets and create two separate CloudFront distributions where S3 access is restricted only to the CloudFront identity. B. Create a CloudFront distribution with 'US/Europe' price class for US/Europe users and a different CloudFront distribution with 'All Edge Locations' for the remaining users. C. Create a CloudFront distribution with S3 access restricted only to the CloudFront identity and partition the blog entry's location in S3 according to the month it was uploaded to be used with CloudFront behaviors. D. Create a CloudFront distribution with Restrict Viewer Access, Forward Query String set to true, and minimum TTL of 0.

C. Create a CloudFront distribution with S3 access restricted only to the CloudFront identity and partition the blog entry's location in S3 according to the month it was uploaded to be used with CloudFront behaviors.

You have a high performance compute application and you need to minimize network latency between EC2 instances as much as possible. What can you do to achieve this? A. Use Elastic Load Balancing to load balance traffic between availability zones B. Create a CloudFront distribution and to cache objects from an S3 bucket at Edge Locations. C. Create a placement group within an Availability Zone and place the EC2 instances within that placement group. D. Deploy your EC2 instances within the same region, but in different subnets and different availability zones so as to maximize redundancy.

C. Create a placement group within an Availability Zone and place the EC2 instances within that placement group.
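A sketch with hypothetical names and AMI: create a cluster placement group, then launch the HPC instances into it.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # hypothetical region

# A cluster placement group packs instances close together in one AZ
# for low-latency, high-throughput networking.
ec2.create_placement_group(GroupName="hpc-group", Strategy="cluster")
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI
    InstanceType="c4.8xlarge",
    MinCount=4, MaxCount=4,
    Placement={"GroupName": "hpc-group"},
)
```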

You have a content management system running on an Amazon EC2 instance that is approaching 100% CPU utilization. Which option will reduce load on the Amazon EC2 instance? A. Create a load balancer, and register the Amazon EC2 instance with it B. Create a CloudFront distribution, and configure the Amazon EC2 instance as the origin C. Create an Auto Scaling group from the instance using the CreateAutoScalingGroup action D. Create a launch configuration from the instance using the CreateLaunchConfiguration action

C. Create an Auto Scaling group from the instance using the CreateAutoScalingGroup action. Using the instance ID to create the Auto Scaling group creates the launch configuration automatically.
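A sketch of that call, assuming hypothetical names; passing InstanceId lets Auto Scaling derive the launch configuration from the running instance:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="cms-asg",      # hypothetical group name
    InstanceId="i-0123456789abcdef0",    # the overloaded CMS instance
    MinSize=2,
    MaxSize=6,
)
```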

You have an application running on an EC2 instance which will allow users to download files from a private S3 bucket using a pre-signed URL. Before generating the URL, the application should verify the existence of the file in S3. How should the application use AWS credentials to access the S3 bucket securely? A. Use the AWS account access keys; the application retrieves the credentials from the source code of the application. B. Create an IAM user for the application with permissions that allow list access to the S3 bucket; launch the instance as the IAM user and retrieve the IAM user's credentials from the EC2 instance user data. C. Create an IAM role for EC2 that allows list access to objects in the S3 bucket. Launch the instance with the role, and retrieve the role's credentials from the EC2 instance metadata. D. Create an IAM user for the application with permissions that allow list access to the S3 bucket. The application retrieves the IAM user credentials from a temporary directory with permissions that allow read access only to the application user.

C. Create an IAM role for EC2 that allows list access to objects in the S3 bucket. Launch the instance with the role, and retrieve the role's credentials from the EC2 instance metadata.
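A sketch of the application code under this design, with hypothetical bucket and key names; boto3 picks up the role's temporary credentials from the instance metadata automatically:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")  # credentials come from the instance role

bucket, key = "my-private-bucket", "reports/file.pdf"  # hypothetical object
try:
    s3.head_object(Bucket=bucket, Key=key)             # verify the file exists
except ClientError:
    raise SystemExit("object not found")

# Time-limited pre-signed download URL handed back to the user.
url = s3.generate_presigned_url(
    "get_object", Params={"Bucket": bucket, "Key": key}, ExpiresIn=3600
)
print(url)
```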

An enterprise wants to use a third-party SaaS application. The SaaS application needs to have access to issue several API commands to discover Amazon EC2 resources running within the enterprise's account. The enterprise has internal security policies that require any outside access to their environment to conform to the principles of least privilege, and there must be controls in place to ensure that the credentials used by the SaaS vendor cannot be used by any other third party. Which of the following would meet all of these conditions? A. From the AWS Management Console, navigate to the Security Credentials page and retrieve the access and secret key for your account. B. Create an IAM user within the enterprise account, assign a user policy to the IAM user that allows only the actions required by the SaaS application, create a new access and secret key for the user, and provide these credentials to the SaaS provider. C. Create an IAM role for cross-account access, allow the SaaS provider's account to assume the role, and assign it a policy that allows only the actions required by the SaaS application. D. Create an IAM role for EC2 instances, assign it a policy that allows only the actions required for the SaaS application to work, and provide the role ARN to the SaaS provider to use when launching their application instances.

C. Create an IAM role for cross-account access, allow the SaaS provider's account to assume the role, and assign it a policy that allows only the actions required by the SaaS application.
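A sketch of creating such a role, with hypothetical account ID, role name, and external ID; the trust policy limits who can assume the role, and the inline policy limits it to EC2 describe calls:

```python
import json
import boto3

iam = boto3.client("iam")

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::999999999999:root"},  # hypothetical vendor account
        "Action": "sts:AssumeRole",
        # External ID guards against the "confused deputy" problem.
        "Condition": {"StringEquals": {"sts:ExternalId": "vendor-external-id"}},
    }],
}
iam.create_role(RoleName="SaaSDiscoveryRole",  # hypothetical role name
                AssumeRolePolicyDocument=json.dumps(trust_policy))
iam.put_role_policy(
    RoleName="SaaSDiscoveryRole",
    PolicyName="Ec2DescribeOnly",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Allow",
                       "Action": "ec2:Describe*", "Resource": "*"}],
    }),
)
```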

An administrator is using Amazon CloudFormation to deploy a three-tier web application that consists of a web tier and an application tier that will utilize Amazon DynamoDB for storage. When creating the CloudFormation template, which of the following would allow the application instance access to the DynamoDB tables without exposing API credentials? A. Create an Identity and Access Management Role that has the required permissions to read and write from the required DynamoDB table and associate the Role to the application instances by referencing an instance profile. B. Use the Parameter section in the CloudFormation template to have the user input Access and Secret Keys from an already created IAM user that has the permissions required to read and write from the required DynamoDB table. C. Create an Identity and Access Management Role that has the required permissions to read and write from the required DynamoDB table and reference the Role in the instance profile property of the application instance. D. Create an Identity and Access Management user in the CloudFormation template that has permissions to read and write from the required DynamoDB table, use the GetAtt function to retrieve the Access and Secret Keys, and pass them to the application instance through user-data.

C. Create an Identity and Access Management Role that has the required permissions to read and write from the required DynamoDB table and reference the Role in the instance profile property of the application instance.

A company is running a batch analysis every hour on their main transactional DB, running on an RDS MySQL instance, to populate their central Data Warehouse running on Redshift. During the execution of the batch, their transactional applications are very slow. When the batch completes, they need to update the top management dashboard with the new data. The dashboard is produced by another system running on-premises that is currently started when a manually-sent email notifies that an update is required. The on-premises system cannot be modified because it is managed by another team. How would you optimize this scenario to solve performance issues and automate the process as much as possible? A. Replace RDS with Redshift for the batch analysis and SNS to notify the on-premises system to update the dashboard B. Replace RDS with Redshift for the batch analysis and SQS to send a message to the on-premises system to update the dashboard C. Create an RDS Read Replica for the batch analysis and SNS to notify the on-premises system to update the dashboard D. Create an RDS Read Replica for the batch analysis and SQS to send a message to the on-premises system to update the dashboard.

C. Create an RDS Read Replica for the batch analysis and SNS to notify the on-premises system to update the dashboard

Your team has a tomcat-based Java application you need to deploy into development, test and production environments. After some research, you opt to use Elastic Beanstalk due to its tight integration with your developer tools and RDS due to its ease of management. Your QA team lead points out that you need to roll a sanitized set of production data into your environment on a nightly basis. Similarly, other software teams in your org want access to that same restored data via their EC2 instances in your VPC. The optimal setup for persistence and security that meets the above requirements would be the following. A. Create your RDS instance as part of your Elastic Beanstalk definition and alter its security group to allow access to it from hosts in your application subnets. B. Create your RDS instance separately and add its IP address to your application's DB connection strings in your code. Alter its security group to allow access to it from hosts within your VPC's IP address block. C. Create your RDS instance separately and pass its DNS name to your app's DB connection string as an environment variable. Create a security group for client machines and add it as a valid source for DB traffic to the security group of the RDS instance itself. D. Create your RDS instance separately and pass its DNS name to your app's DB connection string as an environment variable. Alter its security group to allow access to it from hosts in your application subnets.

C. Create your RDS instance separately and pass its DNS name to your app's DB connection string as an environment variable. Create a security group for client machines and add it as a valid source for DB traffic to the security group of the RDS instance itself.
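A sketch of the security-group rule this answer describes, with hypothetical group IDs: the RDS security group admits MySQL traffic only from members of the client machines' security group.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # hypothetical region

ec2.authorize_security_group_ingress(
    GroupId="sg-0db0000000000000a",  # RDS instance's security group (hypothetical)
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": "sg-0c1111111111111b"}],  # client SG
    }],
)
```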

You are designing a connectivity solution between on-premises infrastructure and Amazon VPC. Your servers on-premises will be communicating with your VPC instances. You will be establishing IPsec tunnels over the Internet. You will be using VPN gateways and terminating the IPsec tunnels on AWS-supported customer gateways. Which of the following objectives would you achieve by implementing an IPsec tunnel as outlined above? (Choose four.) A. End-to-end protection of data in transit B. End-to-end identity authentication C. Data encryption across the Internet D. Protection of data in transit over the Internet E. Peer identity authentication between VPN gateway and customer gateway F. Data integrity protection across the Internet

C. Data encryption across the Internet D. Protection of data in transit over the Internet E. Peer identity authentication between VPN gateway and customer gateway F. Data integrity protection across the Internet

When an EC2 EBS-backed (EBS root) instance is stopped, what happens to the data on any ephemeral store volumes? A. Data is automatically saved in an EBS volume. B. Data is unavailable until the instance is restarted. C. Data will be deleted and will no longer be accessible. D. Data is automatically saved as an EBS snapshot.

C. Data will be deleted and will no longer be accessible.

You have a web application running on six Amazon EC2 instances, consuming about 45% of resources on each instance. You are using auto-scaling to make sure that six instances are running at all times. The number of requests this application processes is consistent and does not experience spikes. The application is critical to your business and you want high availability at all times. You want the load to be distributed evenly between all instances. You also want to use the same Amazon Machine Image (AMI) for all instances. Which of the following architectural choices should you make? A. Deploy 6 EC2 instances in one availability zone and use Amazon Elastic Load Balancer. B. Deploy 3 EC2 instances in one region and 3 in another region and use Amazon Elastic Load Balancer. C. Deploy 3 EC2 instances in one availability zone and 3 in another availability zone and use Amazon Elastic Load Balancer. D. Deploy 2 EC2 instances in three regions and use Amazon Elastic Load Balancer.

C. Deploy 3 EC2 instances in one availability zone and 3 in another availability zone and use Amazon Elastic Load Balancer.

You have a business-critical two-tier web app currently deployed in two AZs in a single region, using Elastic Load Balancing and Auto Scaling. The app depends on synchronous replication (very low latency connectivity) at the database layer. The application needs to remain fully available even if one application AZ goes off-line, and Auto Scaling cannot launch new instances in the remaining Availability Zones. How can the current architecture be enhanced to ensure this? A. Deploy in two regions using Weighted Round Robin (WRR), with Auto Scaling minimums set for 50 percent peak load per Region. B. Deploy in two regions using Weighted Round Robin (WRR), with Auto Scaling minimums set for 100 percent peak load per region. C. Deploy in three Availability Zones, with Auto Scaling minimum set to handle 50 percent peak load per zone. D. Deploy in three Availability Zones, with Auto Scaling minimum set to handle 33 percent peak load per zone.

C. Deploy in three Availability Zones, with Auto Scaling minimum set to handle 50 percent peak load per zone.

When automatic failover occurs, Amazon RDS will emit a DB Instance event to inform you that automatic failover occurred. You can use the _____ to return information about events related to your DB Instance. A. FetchFailure B. DescribeFailure C. DescribeEvents D. FetchEvents

C. DescribeEvents
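A sketch of that call, assuming a hypothetical DB instance identifier; it returns events (including failovers) for the instance over the last 24 hours:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")  # hypothetical region

events = rds.describe_events(
    SourceIdentifier="mydbinstance",  # hypothetical instance identifier
    SourceType="db-instance",
    Duration=1440,                    # look back 24 hours (minutes)
)
for event in events["Events"]:
    print(event["Date"], event["Message"])
```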

What is the maximum write throughput I can provision per table for a single DynamoDB table? A. 5,000 us east, 1,000 all other regions B. 100,000 us east, 10, 000 all other regions C. Designed to scale without limits, but if you go beyond 40,000 us east/10,000 all other regions you have to contact AWS first. D. There is no limit

C. Designed to scale without limits, but if you go beyond 40,000 us east/10,000 all other regions you have to contact AWS first.

What is the maximum write throughput I can provision for a single DynamoDB table? A. 1,000 write capacity units B. 100,000 write capacity units C. DynamoDB is designed to scale without limits, but if you go beyond 10,000 you have to contact AWS first. D. 10,000 write capacity units

C. DynamoDB is designed to scale without limits, but if you go beyond 10,000 you have to contact AWS first. (40,000 in Virginia!)

Which of the services below do you get root access to? A. Elasticache & Elastic MapReduce B. RDS & DynamoDB C. EC2 & Elastic MapReduce D. Elasticache & DynamoDB

C. EC2 & Elastic MapReduce

Which AWS instance address has the following characteristic: "If you stop an instance, its Elastic IP address is unmapped, and you must remap it when you restart the instance."? A. None of these B. EC2-VPC Addresses C. EC2-Classic Addresses

C. EC2-Classic Addresses

By default, when an EBS volume is attached to a Windows instance, it may show up as any drive letter on the instance. You can change the settings of the _____ Service to set the drive letters of the EBS volumes per your specifications. A. EBSConfig Service B. AMIConfig Service C. Ec2Config Service D. Ec2-AMIConfig Service

C. Ec2Config Service

Please select the Amazon EC2 resource which cannot be tagged. A. Images (AMIs, kernels, RAM disks) B. Amazon EBS volumes C. Elastic IP addresses D. VPCs

C. Elastic IP addresses

Which of the following features ensures even distribution of traffic to Amazon EC2 instances in multiple Availability Zones registered with a load balancer? A. Elastic Load Balancing request routing B. An Amazon Route 53 weighted routing policy C. Elastic Load Balancing cross-zone load balancing D. An Amazon Route 53 latency routing policy

C. Elastic Load Balancing cross-zone load balancing
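A sketch of enabling it on a Classic ELB, assuming a hypothetical load balancer name:

```python
import boto3

elb = boto3.client("elb", region_name="us-east-1")  # Classic ELB client

elb.modify_load_balancer_attributes(
    LoadBalancerName="my-classic-elb",  # hypothetical name
    LoadBalancerAttributes={"CrossZoneLoadBalancing": {"Enabled": True}},
)
```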

Your website is serving on-demand training videos to your workforce. Videos are uploaded monthly in high resolution MP4 format. Your workforce is distributed globally, often on the move, and using company-provided tablets that require the HTTP Live Streaming (HLS) protocol to watch a video. Your company has no video transcoding expertise, and if it is required you may need to pay for a consultant. How do you implement the most cost-efficient architecture without compromising high availability and quality of video delivery? A. A video transcoding pipeline running on EC2 using SQS to distribute tasks and Auto Scaling to adjust the number of nodes depending on the length of the queue. EBS volumes to host videos and EBS snapshots to incrementally backup original files after a few days. CloudFront to serve HLS transcoded videos from EC2. B. Elastic Transcoder to transcode original high-resolution MP4 videos to HLS. EBS volumes to host videos and EBS snapshots to incrementally backup original files after a few days. CloudFront to serve HLS transcoded videos from EC2. C. Elastic Transcoder to transcode original high-resolution MP4 videos to HLS. S3 to host videos with Lifecycle Management to archive original files to Glacier after a few days. CloudFront to serve HLS transcoded videos from S3. D. A video transcoding pipeline running on EC2 using SQS to distribute tasks and Auto Scaling to adjust the number of nodes depending on the length of the queue. S3 to host videos with Lifecycle Management to archive all files to Glacier after a few days. CloudFront to serve HLS transcoded videos from Glacier.

C. Elastic Transcoder to transcode original high-resolution MP4 videos to HLS. S3 to host videos with Lifecycle Management to archive original files to Glacier after a few days. CloudFront to serve HLS transcoded videos from S3.

In Amazon CloudWatch, which metric should you check to ensure that your DB Instance has enough free storage space? A. FreeStorage B. FreeStorageVolume C. FreeStorageSpace D. FreeStorageAllocation

C. FreeStorageSpace
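A sketch of alarming on that metric, with hypothetical alarm and instance names; the alarm fires when average free storage drops below about 5 GB:

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="rds-low-free-storage",  # hypothetical alarm name
    Namespace="AWS/RDS",
    MetricName="FreeStorageSpace",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "mydbinstance"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=1,
    Threshold=5 * 1024 ** 3,           # bytes (~5 GB)
    ComparisonOperator="LessThanThreshold",
)
```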

You are a solutions architect working for a large digital media company. Your company is migrating their production estate to AWS and you are in the process of setting up access to the AWS console using Identity Access Management (IAM). You have created 5 users for your system administrators. What further steps do you need to take to enable your system administrators to get access to the AWS console? A. Generate an Access Key ID & Secret Access Key, and give these to your system administrators. B. Enable multi-factor authentication on their accounts and define a password policy. C. Generate a password for each user created and give these passwords to your system administrators. D. Give the system administrators the secret access key and access key id, and tell them to use these credentials to log in to the AWS console.

C. Generate a password for each user created and give these passwords to your system administrators.
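A sketch of creating console passwords (login profiles) for the users, with hypothetical user names and a placeholder password; access keys alone do not grant console access:

```python
import boto3

iam = boto3.client("iam")

for user in ["sysadmin1", "sysadmin2"]:    # hypothetical user names
    iam.create_login_profile(
        UserName=user,
        Password="TempP@ssw0rd-ChangeMe",  # placeholder; rotated on first login
        PasswordResetRequired=True,
    )
```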

You have uploaded a file to S3. What HTTP code would indicate that the upload was successful? A. HTTP 404 B. HTTP 501 C. HTTP 200 D. HTTP 307

C. HTTP 200

Select the incorrect statement. A. In Amazon EC2, private IP address is only returned to Amazon EC2 when the instance is stopped or terminated B. In Amazon VPC, an instance retains its private IP address when the instance is stopped. C. In Amazon VPC, an instance does NOT retain its private IP address when the instance is stopped. D. In Amazon EC2, the private IP address is associated exclusively with the instance for its lifetime

C. In Amazon VPC, an instance does NOT retain its private IP address when the instance is stopped.

Read Replicas require a transactional storage engine and are only supported for the _____ storage engine. A. OracleISAM B. MSSQLDB C. InnoDB D. MyISAM

C. InnoDB

What does the "Server Side Encryption" option on Amazon S3 provide? A. It provides an encrypted virtual disk in the Cloud. B. It doesn't exist for Amazon S3, but only for Amazon EC2. C. It encrypts the files that you send to Amazon S3, on the server side. D. It allows to upload files using an SSL endpoint, for a secure transfer.

C. It encrypts the files that you send to Amazon S3, on the server side.

You are running a successful multitier web application on AWS and your marketing department has asked you to add a reporting tier to the application. The reporting tier will aggregate and publish status reports every 30 minutes from user-generated information that is being stored in your web application's database. You are currently running a Multi-AZ RDS MySQL instance for the database tier. You also have implemented ElastiCache as a database caching layer between the application tier and database tier. Please select the answer that will allow you to successfully implement the reporting tier with as little impact as possible to your database. A. Continually send transaction logs from your master database to an S3 bucket and generate the reports off the S3 bucket using S3 byte range requests. B. Generate the reports by querying the synchronously replicated standby RDS MySQL instance maintained through Multi-AZ. C. Launch an RDS Read Replica connected to your Multi-AZ master database and generate reports by querying the Read Replica. D. Generate the reports by querying the ElastiCache database caching tier.

C. Launch an RDS Read Replica connected to your Multi-AZ master database and generate reports by querying the Read Replica.
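A sketch of creating the replica, with hypothetical identifiers; the replica is fed by asynchronous replication, so reporting queries never touch the Multi-AZ master:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")  # hypothetical region

rds.create_db_instance_read_replica(
    DBInstanceIdentifier="mydb-reporting-replica",  # hypothetical replica name
    SourceDBInstanceIdentifier="mydb-master",       # hypothetical master name
    DBInstanceClass="db.m4.large",
)
```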

If you want to launch Amazon Elastic Compute Cloud (EC2) instances and assign each instance a predetermined private IP address you should: A. Launch the instance from a private Amazon Machine Image (AMI). B. Assign a group of sequential Elastic IP addresses to the instances. C. Launch the instances in the Amazon Virtual Private Cloud (VPC). D. Launch the instances in a Placement Group. E. Use standard EC2 instances since each instance gets a private Domain Name Service (DNS) already.

C. Launch the instances in the Amazon Virtual Private Cloud (VPC).

Which of the following approaches provides the lowest cost for Amazon Elastic Block Store snapshots while giving you the ability to fully restore data? A. Maintain two snapshots: the original snapshot and the latest incremental snapshot. B. Maintain a volume snapshot; subsequent snapshots will overwrite one another. C. Maintain a single snapshot; the latest snapshot is both incremental and complete. D. Maintain the most current snapshot, archive the original and incremental to Amazon Glacier.

C. Maintain a single snapshot; the latest snapshot is both incremental and complete.

In the Launch Db Instance Wizard, where can I select the backup and maintenance options? A. DB Instance Details B. Review C. Management Options D. Engine Selection

C. Management Options

In reviewing the Auto Scaling events for your application you notice that your application is scaling up and down multiple times in the same hour. What design choice could you make to optimize for cost while preserving elasticity? Choose 2 answers A. Modify the Auto Scaling policy to use scheduled scaling actions B. Modify the Auto Scaling group termination policy to terminate the oldest instance first. C. Modify the Auto Scaling group cool-down timers. D. Modify the Amazon CloudWatch alarm period that triggers your Auto Scaling scale down policy. E. Modify the Auto Scaling group termination policy to terminate the newest instance first.

C. Modify the Auto Scaling group cool-down timers. D. Modify the Amazon CloudWatch alarm period that triggers your Auto Scaling scale down policy.
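A sketch of lengthening the group's cooldown, with a hypothetical group name; a longer cooldown gives each scaling action time to take effect before another can fire:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",  # hypothetical group name
    DefaultCooldown=600,             # seconds between scaling activities
)
```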

A customer is leveraging Amazon Simple Storage Service in eu-west-1 to store static content for a web-based property. The customer is storing objects using the Standard Storage class. Where are the customer's objects replicated? A. A single facility in eu-west-1 and a single facility in eu-central-1 B. A single facility in eu-west-1 and a single facility in us-east-1 C. Multiple facilities in eu-west-1 D. A single facility in eu-west-1

C. Multiple facilities in eu-west-1

Security groups act like a firewall at the instance level, whereas _____ are an additional layer of security that act at the subnet level. A. DB Security Groups B. VPC Security Groups C. Network ACLs

C. Network ACLs

What function of an AWS VPC is stateless? A. Security Groups B. Elastic Load Balancers C. Network Access Control Lists D. EC2

C. Network Access Control Lists
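A sketch that shows the statelessness in practice, with a hypothetical NACL ID: allowing inbound HTTP also requires an explicit outbound rule for the return traffic on ephemeral ports, which a stateful security group would not need.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # hypothetical region

nacl_id = "acl-0123456789abcdef0"  # hypothetical NACL ID

# Inbound rule: allow HTTP from anywhere.
ec2.create_network_acl_entry(
    NetworkAclId=nacl_id, RuleNumber=100, Protocol="6",  # TCP
    RuleAction="allow", Egress=False,
    CidrBlock="0.0.0.0/0", PortRange={"From": 80, "To": 80},
)
# Outbound rule: return traffic must be allowed explicitly (ephemeral ports).
ec2.create_network_acl_entry(
    NetworkAclId=nacl_id, RuleNumber=100, Protocol="6",
    RuleAction="allow", Egress=True,
    CidrBlock="0.0.0.0/0", PortRange={"From": 1024, "To": 65535},
)
```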

Is decreasing the storage size of a DB Instance permitted? A. Depends on the RDBMS used B. Yes C. No

C. No

Is there a method or command in the IAM system to allow or deny access to a specific instance? A. Only for VPC based instances B. Yes C. No

C. No

What is the charge for the data transfer incurred in replicating data between your primary and standby? A. Same as the standard data transfer charge B. Double the standard data transfer charge C. No charge. It is free D. Half of the standard data transfer charge

C. No charge. It is free

Do the system resources on the Micro instance meet the recommended configuration for Oracle? A. Yes completely B. Yes but only for certain situations C. Not in any circumstance

C. Not in any circumstance

Your company has an on-premises multi-tier PHP web application, which recently experienced downtime due to a large burst in web traffic due to a company announcement. Over the coming days, you are expecting similar announcements to drive similar unpredictable bursts, and are looking to find ways to quickly improve your infrastructure's ability to handle unexpected increases in traffic. The application currently consists of 2 tiers: a web tier, which consists of a load balancer and several Linux Apache web servers, and a database tier, which hosts a Linux server hosting a MySQL database. Which scenario below will provide full site functionality, while helping to improve the ability of your application in the short time-frame required? A. Failover environment: Create an S3 bucket and configure it for website hosting. Migrate your DNS to Route53 using zone file import, and leverage Route53 DNS failover to failover to the S3 hosted website. B. Hybrid environment: Create an AMI, which can be used to launch web servers in EC2. Create an Auto Scaling group, which uses the AMI to scale the web tier based on incoming traffic. Leverage Elastic Load Balancing to balance traffic between on-premises web servers and those hosted in AWS. C. Offload traffic from on-premises environment: Setup a CloudFront distribution, and configure CloudFront to cache objects from a custom origin. Choose to customize your object cache behavior, and select a TTL that objects should exist in cache. D. Migrate to AWS: Use VM Import/Export to quickly convert an on-premises web server to an AMI. Create an Auto Scaling group, which uses the imported AMI to scale the web tier based on incoming traffic. Create an RDS read replica and setup replication between the RDS instance and on-premises MySQL server to migrate the database.

C. Offload traffic from on-premises environment: Setup a CloudFront distribution, and configure CloudFront to cache objects from a custom origin. Choose to customize your object cache behavior, and select a TTL that objects should exist in cache.

What is the minimum time interval for the data that Amazon CloudWatch receives and aggregates? A. One second B. Five seconds C. One minute D. Three minutes E. Five minutes

C. One minute

You are a solutions architect working for a large oil and gas company. Your company runs their production environment on AWS and has a custom VPC. The VPC contains 3 subnets, 1 of which is public and the other 2 are private. Inside the public subnet is a fleet of EC2 instances which are the result of an autoscaling group. All EC2 instances are in the same security group. Your company has created a new custom application which connects to mobile devices using a custom port. This application has been rolled out to production and you need to open this port globally to the internet. What steps should you take to do this, and how quickly will the change occur? A. Open the port on the existing network Access Control List. Your EC2 instances will be able to communicate on this port after a reboot. B. Open the port on the existing network Access Control List. Your EC2 instances will be able to communicate over this port immediately. C. Open the port on the existing security group. Your EC2 instances will be able to communicate over this port immediately. D. Open the port on the existing security group. Your EC2 instances will be able to communicate over this port as soon as the relevant Time To Live (TTL) expires.

C. Open the port on the existing security group. Your EC2 instances will be able to communicate over this port immediately.

With which AWS orchestration service can you implement Chef recipes? A. CloudFormation B. Elastic Beanstalk C. OpsWorks D. Lambda

C. OpsWorks

In the Amazon RDS Oracle DB engine, the Database Diagnostic Pack and the Database Tuning Pack are only available with _____. A. Oracle Standard Edition B. Oracle Express Edition C. Oracle Enterprise Edition D. None of these

C. Oracle Enterprise Edition

The Trusted Advisor service provides insight regarding which four categories of an AWS account? A. Security, fault tolerance, high availability, and connectivity B. Security, access control, high availability, and performance C. Performance, cost optimization, security, and fault tolerance D. Performance, cost optimization, access control, and connectivity

C. Performance, cost optimization, security, and fault tolerance

You are designing Internet connectivity for your VPC. The Web servers must be available on the Internet. The application must have a highly available architecture. Which alternatives should you consider? (Choose two.) A. Configure a NAT instance in your VPC. Create a default route via the NAT instance and associate it with all subnets. Configure a DNS A record that points to the NAT instance public IP address. B. Configure a CloudFront distribution and configure the origin to point to the private IP addresses of your Web servers. Configure a Route53 CNAME record to your CloudFront distribution. C. Place all your web servers behind an ELB. Configure a Route53 CNAME to point to the ELB DNS name. D. Assign EIPs to all web servers. Configure a Route53 record set with all EIPs with health checks and DNS failover. E. Configure ELB with an EIP. Place all your Web servers behind ELB. Configure a Route53 A record that points to the EIP.

C. Place all your web servers behind an ELB. Configure a Route53 CNAME to point to the ELB DNS name. D. Assign EIPs to all web servers. Configure a Route53 record set with all EIPs with health checks and DNS failover.

All Amazon EC2 instances are assigned two IP addresses at launch. Which one can only be reached from within the Amazon EC2 network? A. Multiple IP address B. Public IP address C. Private IP address D. Elastic IP Address

C. Private IP address

You work for a market analysis firm who are designing a new environment. They will ingest large amounts of market data via Kinesis and then analyze this data using Elastic MapReduce. The data is then imported into a high performance NoSQL Cassandra database which will run on EC2 and then be accessed by traders from around the world. The database volume itself will sit on 2 EBS volumes that will be grouped into a RAID 0 volume. They are expecting very high demand during peak times, with an IOPS performance level of approximately 15,000. Which EBS volume should you recommend? A. Magnetic B. General Purpose SSD C. Provisioned IOPS (PIOPS) D. Turbo IOPS (TIOPS)

C. Provisioned IOPS (PIOPS)

Out of the striping options available for the EBS volumes, which one has the following disadvantage: 'Doubles the amount of I/O required from the instance to EBS compared to RAID 0, because you're mirroring all writes to a pair of volumes, limiting how much you can stripe.'? A. Raid 5 B. Raid 6 C. Raid 1 D. Raid 2

C. Raid 1

Out of the striping options available for the EBS volumes, which one has the following disadvantage: 'Doubles the amount of I/O required from the instance to EBS compared to RAID 0, because you're mirroring all writes to a pair of volumes, limiting how much you can stripe.'? A. Raid 5 B. Raid 6 C. Raid 1+0 (Raid 10) D. Raid 1 E. Raid 2

C. Raid 1+0 (Raid 10)

Amazon S3 buckets in all Regions provide which of the following? A. Read-after-write consistency for PUTS of new objects AND Strongly consistent for POST & DELETES B. Read-after-write consistency for POST of new objects AND Eventually consistent for overwrite PUTS & DELETES C. Read-after-write consistency for PUTS of new objects AND Eventually consistent for overwrite PUTS & DELETES D. Read-after-write consistency for POST of new objects AND Strongly consistent for POST & DELETES

C. Read-after-write consistency for PUTS of new objects AND Eventually consistent for overwrite PUTS & DELETES

What does the ec2-revoke command do with respect to the Amazon EC2 security groups? A. Removes one or more security groups from a rule. B. Removes one or more security groups from an Amazon EC2 instance. C. Removes one or more rules from a security group. D. Removes a security group from an account.

C. Removes one or more rules from a security group.

What does the following command do with respect to the Amazon EC2 security groups? ec2-revoke RevokeSecurityGroupIngress A. Removes one or more security groups from a rule. B. Removes one or more security groups from an Amazon EC2 instance. C. Removes one or more rules from a security group. D. Removes a security group from your account.

C. Removes one or more rules from a security group.

It is advised that you watch the Amazon CloudWatch _____ metric carefully and recreate the Read Replica should it fall behind due to replication errors. A. WriteLag B. ReadReplica C. ReplicaLag D. SingleReplica

C. ReplicaLag

Can Amazon S3 uploads resume on failure or do they need to restart? A. Restart from beginning B. You can resume them, if you flag the "resume on failure" option before uploading. C. Resume on failure D. Depends on the file size

C. Resume on failure

What are characteristics of Amazon S3? (Choose two.) A. S3 allows you to store objects of virtually unlimited size. B. S3 offers Provisioned IOPS. C. S3 allows you to store unlimited amounts of data. D. S3 should be used to host a relational database. E. Objects are directly accessible via a URL.

C. S3 allows you to store unlimited amounts of data. E. Objects are directly accessible via a URL.

An Auto-Scaling group spans 3 AZs and currently has 4 running EC2 instances. When Auto Scaling needs to terminate an EC2 instance by default, AutoScaling will: (Choose two.) A. Allow at least five minutes for Windows/Linux shutdown scripts to complete, before terminating the instance. B. Terminate the instance with the least active network connections. If multiple instances meet this criterion, one will be randomly selected. C. Send an SNS notification, if configured to do so. D. Terminate an instance in the AZ which currently has 2 running EC2 instances. E. Randomly select one of the 3 AZs, and then terminate an instance in that AZ.

C. Send an SNS notification, if configured to do so. D. Terminate an instance in the AZ which currently has 2 running EC2 instances.

You have been asked to identify a service on AWS that is a durable key value store. Which of the services below meets this definition? A. Mobile Hub B. Kinesis C. Simple Storage Service (S3) D. Elastic File System (EFS)

C. Simple Storage Service (S3)

What does Amazon SWF stand for? A. Simple Wireless Forms B. Simple Web Form C. Simple Work Flow D. Simple Web Flow

C. Simple Work Flow

Your web application front end consists of multiple EC2 instances behind an Elastic Load Balancer. You configured ELB to perform health checks on these EC2 instances. If an instance fails to pass health checks, which statement will be true? A. The instance is replaced automatically by the ELB. B. The instance gets terminated automatically by the ELB. C. The ELB stops sending traffic to the instance that failed its health check. D. The instance gets quarantined by the ELB for root cause analysis.

C. The ELB stops sending traffic to the instance that failed its health check.

A large real-estate brokerage is exploring the option of adding a cost-effective, location-based alert to their existing mobile application. The application backend infrastructure currently runs on AWS. Users who opt in to this service will receive alerts on their mobile device regarding real-estate offers in proximity to their location. For the alerts to be relevant, delivery time needs to be in the low minute count; the existing mobile app has 5 million users across the US. Which one of the following architectural suggestions would you make to the customer? A. The mobile application will submit its location to a web service endpoint utilizing Elastic Load Balancing and EC2 instances; DynamoDB will be used to store and retrieve relevant offers. EC2 instances will communicate with mobile carriers/device providers to push alerts back to the mobile application. B. Use AWS Direct Connect or VPN to establish connectivity with mobile carriers. EC2 instances will receive the mobile applications' location through the carrier connection; RDS will be used to store and retrieve relevant offers. EC2 instances will communicate with mobile carriers to push alerts back to the mobile application. C. The mobile application will send device location using SQS. EC2 instances will retrieve the relevant offers from DynamoDB. AWS Mobile Push will be used to send offers to the mobile application. D. The mobile application will send device location using AWS Mobile Push. EC2 instances will retrieve the relevant offers from DynamoDB. EC2 instances will communicate with mobile carriers/device providers to push alerts back to the mobile application.

C. The mobile application will send device location using SQS. EC2 instances will retrieve the relevant offers from DynamoDB. AWS Mobile Push will be used to send offers to the mobile application.

Which technique can be used to integrate AWS IAM (Identity and Access Management) with an on-premise LDAP (Lightweight Directory Access Protocol) directory service? A. Use an IAM policy that references the LDAP account identifiers and the AWS credentials. B. Use SAML (Security Assertion Markup Language) to enable single sign-on between AWS and LDAP. C. Use AWS Security Token Service from an identity broker to issue short-lived AWS credentials. D. Use IAM roles to automatically rotate the IAM credentials when LDAP credentials are updated. E. Use the LDAP credentials to restrict a group of users from launching specific EC2 instance types.

C. Use AWS Security Token Service from an identity broker to issue short-lived AWS credentials.

A US-based company is expanding their web presence into Europe. The company wants to extend their AWS infrastructure from Northern Virginia (us-east-1) into the Dublin (eu-west-1) region. Which of the following options would enable an equivalent experience for users on both continents? A. Use a public-facing load balancer per region to load-balance web traffic, and enable HTTP health checks. B. Use a public-facing load balancer per region to load-balance web traffic, and enable sticky sessions. C. Use Amazon Route 53, and apply a geolocation routing policy to distribute traffic across both regions. D. Use Amazon Route 53, and apply a weighted routing policy to distribute traffic across both regions.

C. Use Amazon Route 53, and apply a geolocation routing policy to distribute traffic across both regions.
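A sketch of the two geolocation records, with hypothetical zone ID and endpoint names: European users resolve to the eu-west-1 endpoint and everyone else falls through to the default record.

```python
import boto3

route53 = boto3.client("route53")

def geo_record(set_id, geo, value):
    return {"Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com.", "Type": "CNAME", "TTL": 60,
                "SetIdentifier": set_id, "GeoLocation": geo,
                "ResourceRecords": [{"Value": value}]}}

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABC",  # hypothetical hosted zone
    ChangeBatch={"Changes": [
        geo_record("europe", {"ContinentCode": "EU"}, "elb-eu.example.com"),
        geo_record("default", {"CountryCode": "*"}, "elb-us.example.com"),
    ]},
)
```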

A newspaper organization has an on-premises application which allows the public to search its back catalogue and retrieve individual newspaper pages via a website written in Java. They have scanned the old newspapers into JPEGs (approx 17TB) and used Optical Character Recognition (OCR) to populate a commercial search product. The hosting platform and software are now end of life and the organization wants to migrate its archive to AWS and produce a cost-efficient architecture that is still designed for availability and durability. Which is the most appropriate? A. Use S3 with reduced redundancy to store and serve the scanned files, install the commercial search application on EC2 instances and configure with auto-scaling and an Elastic Load Balancer. B. Model the environment using CloudFormation, use an EC2 instance running an Apache webserver and an open source search application, stripe multiple standard EBS volumes together to store the JPEGs and search index. C. Use S3 with standard redundancy to store and serve the scanned files, use CloudSearch for query processing, and use Elastic Beanstalk to host the website across multiple availability zones. D. Use a single-AZ RDS MySQL instance to store the search index and the JPEG images; use an EC2 instance to serve the website and translate user queries into SQL. E. Use a CloudFront download distribution to serve the JPEGs to the end users and install the current commercial search product, along with a Java Container for the website, on EC2 instances and use Route 53 with DNS round-robin.

C. Use S3 with standard redundancy to store and serve the scanned files, use CloudSearch for query processing, and use Elastic Beanstalk to host the website across multiple availability zones.

You are designing a social media site and are considering how to mitigate distributed denial-of-service (DDoS) attacks. Which of the below are viable mitigation techniques? (Choose three.) A. Add multiple elastic network interfaces (ENIs) to each EC2 instance to increase the network bandwidth. B. Use dedicated instances to ensure that each instance has the maximum performance possible. C. Use an Amazon CloudFront distribution for both static and dynamic content. D. Use an Elastic Load Balancer with Auto Scaling groups at the web, app, and Amazon Relational Database Service (RDS) tiers. E. Add Amazon CloudWatch alarms to look for high network-in and CPU utilization. F. Create processes and capabilities to quickly add and remove rules to the instance OS firewall.

C. Use an Amazon CloudFront distribution for both static and dynamic content. D. Use an Elastic Load Balancer with Auto Scaling groups at the web, app, and Amazon Relational Database Service (RDS) tiers. E. Add Amazon CloudWatch alarms to look for high network-in and CPU utilization.

Your department creates regular analytics reports from your company's log files. All log data is collected in Amazon S3 and processed by daily Amazon Elastic MapReduce (EMR) jobs that generate daily PDF reports and aggregated tables in CSV format for an Amazon Redshift data warehouse. Your CFO requests that you optimize the cost structure for this system. Which of the following alternatives will lower costs without compromising average performance of the system or data integrity for the raw data? A. Use reduced redundancy storage (RRS) for all data in S3. Use a combination of Spot Instances and Reserved Instances for Amazon EMR jobs. Use Reserved Instances for Amazon Redshift. B. Use reduced redundancy storage (RRS) for PDF and .csv data in S3. Add Spot Instances to EMR jobs. Use Spot Instances for Amazon Redshift. C. Use reduced redundancy storage (RRS) for PDF and .csv data in Amazon S3. Add Spot Instances to Amazon EMR jobs. Use Reserved Instances for Amazon Redshift. D. Use reduced redundancy storage (RRS) for all data in Amazon S3. Add Spot Instances to Amazon EMR jobs. Use Reserved Instances for Amazon Redshift.

C. Use reduced redundancy storage (RRS) for PDF and .csv data in Amazon S3. Add Spot Instances to Amazon EMR jobs. Use Reserved Instances for Amazon Redshift.

You require the ability to analyze a large amount of data, which is stored on Amazon S3 using Amazon Elastic MapReduce. You are using the cc2.8xlarge instance type, whose CPUs are mostly idle during processing. Which of the below would be the most cost-efficient way to reduce the runtime of the job? A. Create more, smaller files on Amazon S3. B. Add additional cc2.8xlarge instances by introducing a task group. C. Use smaller instances that have higher aggregate I/O performance. D. Create fewer, larger files on Amazon S3.

C. Use smaller instances that have higher aggregate I/O performance.

Your application provides data transformation services. Files containing data to be transformed are first uploaded to Amazon S3 and then transformed by a fleet of Spot EC2 instances. Files submitted by your premium customers must be transformed with the highest priority. How should you implement such a system? A. Use a DynamoDB table with an attribute defining the priority level. Transformation instances will scan the table for tasks, sorting the results by priority level. B. Use Route 53 latency-based routing to send high priority tasks to the closest transformation instances. C. Use two SQS queues, one for high priority messages, the other for default priority. Transformation instances first poll the high priority queue; if there is no message, they poll the default priority queue. D. Use a single SQS queue. Each message contains the priority level. Transformation instances poll high-priority messages first.

C. Use two SQS queues, one for high priority messages, the other for default priority. Transformation instances first poll the high priority queue; if there is no message, they poll the default priority queue.
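A sketch of the worker's polling loop, with hypothetical queue URLs; the premium queue is always drained before the default queue is consulted:

```python
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")  # hypothetical region
HIGH_Q = "https://queue.amazonaws.com/123456789012/transform-high"      # hypothetical URLs
DEFAULT_Q = "https://queue.amazonaws.com/123456789012/transform-default"

def next_task():
    # Poll the high priority queue first; fall back to the default queue
    # only when no premium work is waiting.
    for queue_url in (HIGH_Q, DEFAULT_Q):
        resp = sqs.receive_message(QueueUrl=queue_url,
                                   MaxNumberOfMessages=1, WaitTimeSeconds=2)
        for msg in resp.get("Messages", []):
            return queue_url, msg
    return None, None
```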

Your company has recently extended its datacenter into a VPC on AWS to add burst computing capacity as needed. Members of your Network Operations Center need to be able to go to the AWS Management Console and administer Amazon EC2 instances as necessary. You don't want to create new IAM users for each NOC member and make those users sign in again to the AWS Management Console. Which option below will meet the needs for your NOC members? A. Use OAuth 2.0 to retrieve temporary AWS security credentials to enable your NOC members to sign in to the AWS Management Console. B. Use Web Identity Federation to retrieve AWS temporary security credentials to enable your NOC members to sign in to the AWS Management Console. C. Use your on-premises SAML 2.0-compliant identity provider (IDP) to grant the NOC members federated access to the AWS Management Console via the AWS single sign-on (SSO) endpoint. D. Use your on-premises SAML 2.0-compliant identity provider (IDP) to retrieve temporary security credentials to enable NOC members to sign in to the AWS Management Console.

C. Use your on-premises SAML 2.0-compliant identity provider (IDP) to grant the NOC members federated access to the AWS Management Console via the AWS single sign-on (SSO) endpoint.

You run an automobile reselling company that has a popular online store on AWS. The application sits behind an Auto Scaling group and requires new instances of the Auto Scaling group to identify their public and private IP addresses. How can you achieve this? A. By using ipconfig for Windows or ifconfig for Linux. B. By using a CloudWatch metric. C. Using a Curl or Get Command to get the latest meta-data from http://169.254.169.254/latest/meta-data/ D. Using a Curl or Get Command to get the latest user-data from http://169.254.169.254/latest/user-data/

C. Using a Curl or Get Command to get the latest meta-data from http://169.254.169.254/latest/meta-data/
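A sketch of the same lookup from Python instead of curl, assuming IMDSv1-style unauthenticated access to the metadata service (which only answers from inside the instance):

```python
import urllib.request

BASE = "http://169.254.169.254/latest/meta-data/"

def metadata(path):
    with urllib.request.urlopen(BASE + path) as resp:
        return resp.read().decode()

public_ip = metadata("public-ipv4")   # public IP of this instance
private_ip = metadata("local-ipv4")   # private IP of this instance
print(public_ip, private_ip)
```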

Can I control if and when MySQL based RDS Instance is upgraded to new supported versions? A. No B. Only in VPC C. Yes

C. Yes

Can I delete a snapshot of the root device of an EBS volume used by a registered AMI? A. Only via API B. Only via Console C. Yes D. No

C. Yes

Can I initiate a "forced failover" for my MySQL Multi-AZ DB Instance deployment? A. Only in certain regions B. Only in VPC C. Yes D. No

C. Yes

Do the Amazon EBS volumes persist independently from the running life of an Amazon EC2 instance? A. No B. Only if instructed to when created C. Yes

C. Yes

Does DynamoDB support in-place atomic updates? A. It is not defined B. No C. Yes D. It does support in-place non-atomic updates

C. Yes

Will I be alerted when automatic failover occurs? A. Only if SNS configured B. No C. Yes D. Only if Cloudwatch configured

C. Yes

Is there a limit to how many groups a user can be in? A. Yes for all users except root B. Yes unless special permission granted C. Yes for all users D. No

C. Yes for all users

Are you able to integrate a multi-factor token service with the AWS Platform? A. No, you cannot integrate multi-factor token devices with the AWS platform. B. Yes, you can integrate private multi-factor token devices to authenticate users to the AWS platform. C. Yes, using the AWS multi-factor token devices to authenticate users on the AWS platform.

C. Yes, using the AWS multi-factor token devices to authenticate users on the AWS platform.

Select the correct statement: A. You don't need to specify the resource identifier while stopping a resource B. You can terminate, stop, or delete a resource based solely on its tags C. You can't terminate, stop, or delete a resource based solely on its tags D. You don't need to specify the resource identifier while terminating a resource

C. You can't terminate, stop, or delete a resource based solely on its tags

SQL Server __________ store logins and passwords in the master database. A. can be configured to but by default does not B. doesn't C. does

C. does

While creating snapshots using the command line tools, which command should I be using? A. ec2-deploy-snapshot B. ec2-fresh-snapshot C. ec2-create-snapshot D. ec2-new-snapshot

C. ec2-create-snapshot
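
The legacy ec2-api-tools command corresponds to the CreateSnapshot API; a minimal boto3 sketch with a hypothetical volume ID:

```python
import boto3

ec2 = boto3.client("ec2")
snap = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",   # hypothetical volume ID
    Description="nightly backup",
)
print("started:", snap["SnapshotId"])   # snapshots to S3 are incremental
```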

Changes to the backup window take effect ______. A. from the next billing cycle B. after 30 minutes C. immediately D. after 24 hours

C. immediately

The one-time payment for Reserved Instances is __________ refundable if the reservation is cancelled. A. always B. in some circumstances C. never

C. never. The one-time payment is non-refundable if the reservation is cancelled.

Every user you create in the IAM system starts with ______. A. partial permissions B. full permissions C. no permissions

C. no permissions

Amazon RDS creates an SSL certificate and installs the certificate on the DB Instance when Amazon RDS provisions the instance. These certificates are signed by a certificate authority. The _____ is stored at https://rds.amazonaws.com/doc/rds-ssl-ca-cert.pem. A. private key B. foreign key C. public key D. protected key

C. public key

Fill in the blanks : _____ let you categorize your EC2 resources in different ways, for example, by purpose, owner, or environment. A. wildcards B. pointers C. tags D. special filters

C. tags

To help you manage your Amazon EC2 instances, images, and other Amazon EC2 resources, you can assign your own metadata to each resource in the form of_____. A. special filters B. functions C. tags D. wildcards

C. tags

*You are running all your AWS resources in the US-East region, and you are not leveraging a second region using AWS. However, you want to keep your infrastructure as code so that you can fail over to a different region if a disaster occurs. Which AWS service will you choose to provision the resources in a second region that look identical to your resources in the US-East region?* * Amazon EC2, VPC, and RDS * Elastic Beanstalk * OpsWorks * CloudFormation

CloudFormation * Using CloudFormation, you can keep the infrastructure as code: create a CloudFormation template that mimics the setup in the existing region, then deploy that template in a different region to create identical resources.
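
A minimal sketch of deploying the same template into a second region with boto3; the template file and stack name are hypothetical, and region-specific values such as AMI IDs would need to be parameterized:

```python
import boto3

with open("infra.yaml") as f:      # hypothetical template file
    template_body = f.read()

for region in ("us-east-1", "us-west-2"):
    cfn = boto3.client("cloudformation", region_name=region)
    cfn.create_stack(
        StackName="app-stack",             # hypothetical stack name
        TemplateBody=template_body,
        Capabilities=["CAPABILITY_IAM"],   # needed if the template creates IAM resources
    )
```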

*Recently you had a big outage on your website because of a DDoS attack, and you lost a big chunk of revenue since your application was down for some time. You don't want to take any chances and would like to secure your website against any future DDoS attacks. Which AWS services can help you achieve your goal?* (Choose two.) * CloudFront * Web Application Firewall * Config * Trusted Advisor

CloudFront, Web Application Firewall * CloudFront can be integrated with WAF, which can protect against a DDoS attack. Config helps you monitor the change in state of existing AWS resources, which has nothing to do with DDoS attacks. Trusted Advisor is an online resource to help you reduce cost, increase performance, and improve security by optimizing your AWS environment.

*You are running a three-tier web application on AWS. Currently you don't have a reporting system in place, and your application owner is asking you to add a reporting tier to the application. You are concerned that if you add a reporting layer on top of the existing database, performance is going to be degraded. Today you are using a multi-AZ RDS MySQL to host the database. What can you do to add the reporting layer while making sure there is no performance impact on the existing application?* * Use the Database Migration Service and move the database to a different region and run reporting from there. * Create a read replica of the RDS MySQL and run reporting from the read replica. * Export the data from RDS MySQL to S3 and run the reporting from S3. * Use the standby database for running the reporting.

Create a read replica of the RDS MySQL and run reporting from the read replica. * RDS MySQL provides the ability to create up to 15 read replicas to offload the read-only workload. This can be used for reporting purposes as well. -------------------- When the capability comes built into RDS, why use the Database Migration Service? Exporting the data to S3 is going to be painful, and are you going to export the data every day? The standby database is not open, so you can't run reporting from it.

*You are building a system to distribute training videos to employees. Using CloudFront, what method could be used to serve content that is stored in S3 but not publicly accessible from S3 directly?* * Create an origin access identity (OAI) for CloudFront and grant access to the objects in your S3 bucket to that OAI * Add the CloudFront account security group called "amazon-cf/amazon-cf-sg" to the appropriate S3 bucket policy * Create an Identity and Access Management (IAM) user for CloudFront and grant access to the objects in your S3 bucket to that IAM user * Create an S3 bucket policy that lists the CloudFront distribution ID as the principal and the target bucket as the Amazon resource name (ARN)

Create an origin access identity (OAI) for CloudFront and grant access to the objects in your S3 bucket to that OAI
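
A minimal sketch of the bucket policy that grants read access to an OAI; the bucket name and OAI ID are hypothetical placeholders:

```python
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        # Hypothetical OAI ID; CloudFront reports the real one when you create the OAI.
        "Principal": {"AWS": "arn:aws:iam::cloudfront:user/"
                             "CloudFront Origin Access Identity E1EXAMPLE"},
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::training-videos/*",   # hypothetical bucket
    }],
}
boto3.client("s3").put_bucket_policy(Bucket="training-videos",
                                     Policy=json.dumps(policy))
```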

If you are using Amazon RDS Provisioned IOPS storage with MySQL and Oracle database engines, you can scale the throughput of your database instance by specifying the IOPS rate from _____ . A. 1,000 to 100,000 B. 100 to 1,000 C. 10,000 to 100,000 D. 1,000 to 10,000

D. 1,000 to 10,000

You must increase storage size in increments of at least _____ % A. 40 B. 20 C. 50 D. 10

D. 10

A Provisioned IOPS volume must be at least __________ GB in size: A. 1 B. 50 C. 20 D. 10

D. 10 (Note: 4 GiB is the actual minimum per current AWS documentation; this question reflects an older limit.)

What is the maximum key length of a tag? A. 512 Unicode characters B. 64 Unicode characters C. 256 Unicode characters D. 128 Unicode characters

D. 128 Unicode characters

A Provisioned IOPS SSD volume must be at least _____ GB in size. A. 1 B. 6 C. 20 D. 4

D. 4

A company wants to implement their website in a virtual private cloud (VPC). The web tier will use an Auto Scaling group across multiple Availability Zones (AZs). The database will use Multi-AZ RDS MySQL and should not be publicly accessible. What is the minimum number of subnets that need to be configured in the VPC? A. 1 B. 2 C. 3 D. 4

D. 4

You have been asked to create a VPC for your company. The VPC must support both Internet-facing web applications (i.e., they need to be publicly accessible) and internal private applications (i.e., they are not publicly accessible and can be accessed only over VPN). The internal private applications must be inside a private subnet. Both the Internet-facing and private applications must be able to leverage at least three Availability Zones for high availability. At a minimum, how many subnets must you create within your VPC to achieve this? A. 5 B. 3 C. 4 D. 6

D. 6

You have been tasked with creating a VPC network topology for your company. The VPC network must support both Internet-facing applications and internally-facing applications accessed only over VPN. Both Internet-facing and internally-facing applications must be able to leverage at least three AZs for high availability. At a minimum, how many subnets must you create within your VPC to accommodate these requirements? A. 2 B. 3 C. 4 D. 6

D. 6

Within the IAM service a GROUP is regarded as a: A. A collection of AWS accounts B. It's the group of EC2 machines that gain the permissions specified in the GROUP. C. There's no GROUP in IAM, but only USERS and RESOURCES. D. A collection of users.

D. A collection of users.

A customer wants to leverage Amazon Simple Storage Service (S3) and Amazon Glacier as part of their backup and archive infrastructure. The customer plans to use third-party software to support this integration. Which approach will limit the access of the third party software to only the Amazon S3 bucket named "company-backup"? A. A custom bucket policy limited to the Amazon S3 API in the Amazon Glacier archive "company-backup" B. A custom bucket policy limited to the Amazon S3 API in "company-backup" C. A custom IAM user policy limited to the Amazon S3 API for the Amazon Glacier archive "company-backup". D. A custom IAM user policy limited to the Amazon S3 API in "company-backup".

D. A custom IAM user policy limited to the Amazon S3 API in "company-backup".

You have started a new role as a solutions architect for an architectural firm that designs large skyscrapers in the Middle East. Your company hosts large volumes of data and has about 250 TB of data on internal servers. They have decided to store this data on S3 due to the redundancy offered by it. The company currently has a telecoms line of 2 Mbps connecting their head office to the internet. What method should they use to import this data on to S3 in the fastest manner possible? A. Upload it directly to S3 B. Purchase an AWS Direct Connect connection and transfer the data over it once it is installed. C. AWS Data Pipeline D. AWS Import/Export

D. AWS Import/Export

In Identity and Access Management, when you first create a new user, certain security credentials are automatically generated. Which of the below are valid security credentials? A. Access Key ID, Authorized Key B. Private Key, Secret Access Key C. Private Key, Authorized Key D. Access Key ID, Secret Access Key

D. Access Key ID, Secret Access Key

While launching an RDS DB instance, on which page can I select the Availability Zone? A. Review B. DB Instance Details C. Management Options D. Additional Configuration

D. Additional Configuration

If you're unable to connect via SSH to your EC2 instance, which of the following should you check and possibly correct to restore connectivity? A. Adjust Security Group to permit egress traffic over TCP port 443 from your IP. B. Configure the IAM role to permit changes to security group settings. C. Modify the instance security group to allow ingress of ICMP packets from your IP. D. Adjust the instance's Security Group to permit ingress traffic over port 22 from your IP. E. Apply the most recently released Operating System security patches.

D. Adjust the instance's Security Group to permit ingress traffic over port 22 from your IP.

IAM provides several policy templates you can use to automatically assign permissions to the groups you create. The _____ policy template gives the Admins group permission to access all account resources, except your AWS account information. A. Read Only Access B. Power User Access C. AWS CloudFormation Read Only Access D. Administrator Access

D. Administrator Access

Through which of the following interfaces is AWS Identity and Access Management available? 1) AWS Management Console 2) Command line interface (CLI) 3) IAM Query API 4) Existing libraries A. Only through Command line interface (CLI) B. 1, 2 and 3 C. 1 and 3 D. All of the above

D. All of the above

Through which of the following interfaces is AWS Identity and Access Management available? A. AWS Management Console B. Command line interface (CLI) C. IAM Query API D. All of the above

D. All of the above

You are configuring your company's application to use Auto Scaling and need to move user state information. Which of the following AWS services provides a shared data store with durability and low latency? A. AWS ElastiCache Memcached B. Amazon Simple Storage Service C. Amazon EC2 instance storage D. Amazon DynamoDB

D. Amazon DynamoDB
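
A minimal sketch of a shared session store on DynamoDB; the "sessions" table and its "session_id" partition key are hypothetical:

```python
import boto3

table = boto3.resource("dynamodb").Table("sessions")   # hypothetical table

def save_session(session_id, data):
    # Durable, low-latency store shared by every instance in the group.
    table.put_item(Item={"session_id": session_id, **data})

def load_session(session_id):
    return table.get_item(Key={"session_id": session_id}).get("Item")
```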

Which Amazon storage option is best for database-style applications that frequently encounter many random reads and writes across the dataset? A. None of these B. Amazon Instance Storage C. Any of these D. Amazon EBS

D. Amazon EBS

A client application requires operating system privileges on a relational database server. What is an appropriate configuration for a highly available database architecture? A. A standalone Amazon EC2 instance B. Amazon RDS in a Multi-AZ configuration C. Amazon EC2 instances in a replication configuration utilizing a single Availability Zone D. Amazon EC2 instances in a replication configuration utilizing two different Availability Zones

D. Amazon EC2 instances in a replication configuration utilizing two different Availability Zones

Without _____, you must either create multiple AWS accounts, each with its own billing and subscriptions, or your employees must share the security credentials of a single AWS account. A. Amazon RDS B. Amazon Glacier C. Amazon EMR D. Amazon IAM

D. Amazon IAM

When you resize the Amazon RDS DB instance, Amazon RDS will perform the upgrade during the next maintenance window. If you would rather perform the change now, specify the _____ option. A. ApplyNow B. ApplySoon C. ApplyThis D. ApplyImmediately

D. ApplyImmediately
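
A minimal boto3 sketch of resizing outside the maintenance window; the instance identifier and target class are hypothetical:

```python
import boto3

boto3.client("rds").modify_db_instance(
    DBInstanceIdentifier="mydb",     # hypothetical instance
    DBInstanceClass="db.m5.large",   # hypothetical target size
    ApplyImmediately=True,           # apply now instead of the next maintenance window
)
```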

You are designing a multi-platform web application for AWS. The application will run on EC2 instances and will be accessed from PCs, tablets, and smartphones. Supported platforms are Windows, macOS, iOS, and Android. Separate sticky-session and SSL certificate setups are required for the different platform types. Which of the following describes the most cost-effective and performance-efficient architecture setup? A. Set up a hybrid architecture to handle session state and SSL certificates on-premises, with separate EC2 instance groups running web applications for the different platform types in a VPC. B. Set up one ELB for all platforms to distribute load among multiple instances under it. Each EC2 instance implements all functionality for a particular platform. C. Set up two ELBs. The first ELB handles SSL certificates for all platforms, and the second ELB handles session stickiness for all platforms. For each ELB, run separate EC2 instance groups to handle the web application for each platform. D. Assign multiple ELBs to an EC2 instance or group of EC2 instances running the common components of the web application, one ELB for each platform type. Session stickiness and SSL termination are done at the ELBs.

D. Assign multiple ELBs to an EC2 instance or group of EC2 instances running the common components of the web application, one ELB for each platform type. Session stickiness and SSL termination are done at the ELBs.

You have decided to change the instance type for instances running in your application tier that is using Auto Scaling. In which area below would you change the instance type definition? A. Auto Scaling policy B. Auto Scaling group C. Auto Scaling tags D. Auto Scaling launch configuration

D. Auto Scaling launch configuration

How can the domain's zone apex, for example "myzoneapexdomain.com", be pointed towards an Elastic Load Balancer? A. By using an AAAA record B. By using an A record C. By using an Amazon Route 53 CNAME record D. By using an Amazon Route 53 Alias record

D. By using an Amazon Route 53 Alias record

A web company is looking to implement an intrusion detection and prevention system into their deployed VPC. This platform should have the ability to scale to thousands of instances running inside the VPC. How should they architect their solution to achieve these goals? A. Configure an instance with monitoring software and the elastic network interface (ENI) set to promiscuous mode packet sniffing to see all traffic across the VPC. B. Create a second VPC and route all traffic from the primary application VPC through the second VPC where the scalable virtualized IDS/IPS platform resides. C. Configure servers running in the VPC using the host-based 'route' commands to send all traffic through the platform to a scalable virtualized IDS/IPS. D. Configure each host with an agent that collects all network traffic and sends that traffic to the IDS/IPS platform for inspection.

D. Configure each host with an agent that collects all network traffic and sends that traffic to the IDS/IPS platform for inspection.

Amazon SWF is designed to help users do what? A. Design graphical user interface interactions B. Manage user identification and authorization C. Store Web content D. Coordinate synchronous and asynchronous tasks which are distributed and fault tolerant.

D. Coordinate synchronous and asynchronous tasks which are distributed and fault tolerant.

After creating a new IAM user which of the following must be done before they can successfully make API calls? A. Add a password to the user. B. Enable Multi-Factor Authentication for the user. C. Assign a Password Policy to the user. D. Create a set of Access Keys for the user.

D. Create a set of Access Keys for the user.
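
A minimal sketch of issuing API credentials to a new user; the user name is hypothetical, and the secret is retrievable only at creation time:

```python
import boto3

resp = boto3.client("iam").create_access_key(UserName="new-user")  # hypothetical user
key = resp["AccessKey"]
print(key["AccessKeyId"])
# Store the secret securely; it cannot be fetched again later.
print(key["SecretAccessKey"])
```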

A customer is hosting their company website on a cluster of web servers that are behind a public-facing load balancer. The customer also uses Amazon Route 53 to manage their public DNS. How should the customer configure the DNS zone apex record to point to the load balancer? A. Create an A record pointing to the IP address of the load balancer B. Create a CNAME record pointing to the load balancer DNS name. C. Create a CNAME record aliased to the load balancer DNS name. D. Create an A record aliased to the load balancer DNS name

D. Create an A record aliased to the load balancer DNS name. A CNAME cannot be used at the DNS zone apex; Route 53 alias records exist for exactly this case.
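
A minimal sketch of creating the alias A record with boto3; every ID and name here is a hypothetical placeholder (each ELB has its own canonical hosted zone ID):

```python
import boto3

boto3.client("route53").change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",                # hypothetical: your hosted zone
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "example.com.",          # the zone apex
            "Type": "A",
            "AliasTarget": {
                "HostedZoneId": "Z2EXAMPLE", # hypothetical: the ELB's canonical zone ID
                "DNSName": "my-elb-123456.us-east-1.elb.amazonaws.com.",
                "EvaluateTargetHealth": False,
            },
        },
    }]},
)
```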

A benefits enrollment company is hosting a 3-tier web application running in a VPC on AWS, which includes a NAT (Network Address Translation) instance in the public web tier. There is enough provisioned capacity for the expected workload for the new fiscal year benefits enrollment period, plus some extra overhead. Enrollment proceeds nicely for two days, and then the web tier becomes unresponsive. Upon investigation using CloudWatch and other monitoring tools, it is discovered that there is an extremely large and unanticipated amount of inbound traffic coming from a set of 15 specific IP addresses over port 80 from a country where the benefits company has no customers. The web tier instances are so overloaded that benefit enrollment administrators cannot even SSH into them. Which activity would be useful in defending against this attack? A. Create a custom route table associated with the web tier and block the attacking IP addresses from the IGW (Internet Gateway) B. Change the EIP (Elastic IP Address) of the NAT instance in the web tier subnet and update the Main Route Table with the new EIP C. Create 15 Security Group rules to block the attacking IP addresses over port 80 D. Create an inbound NACL (Network Access Control List) associated with the web tier subnet with deny rules to block the attacking IP addresses

D. Create an inbound NACL (Network Access Control List) associated with the web tier subnet with deny rules to block the attacking IP addresses

When an EC2 instance that is backed by an S3-based AMI is terminated, what happens to the data on the root volume? A. Data is automatically saved as an EBS snapshot. B. Data is automatically saved as an EBS volume. C. Data is unavailable until the instance is restarted. D. Data is automatically deleted.

D. Data is automatically deleted.

A customer is running a multi-tier web application farm in a virtual private cloud (VPC) that is not connected to their corporate network. They are connecting to the VPC over the Internet to manage all of their Amazon EC2 instances running in both the public and private subnets. They have only authorized the bastion-security-group with Microsoft Remote Desktop Protocol (RDP) access to the application instance security groups, but the company wants to further limit administrative access to all of the instances in the VPC. Which of the following Bastion deployment scenarios will meet this requirement? A. Deploy a Windows Bastion host on the corporate network that has RDP access to all instances in the VPC. B. Deploy a Windows Bastion host with an Elastic IP address in the public subnet and allow SSH access to the bastion from anywhere. C. Deploy a Windows Bastion host with an Elastic IP address in the private subnet, and restrict RDP access to the bastion from only the corporate public IP addresses. D. Deploy a Windows Bastion host with an auto-assigned Public IP address in the public subnet, and allow RDP access to the bastion from only the corporate public IP addresses.

D. Deploy a Windows Bastion host with an auto-assigned Public IP address in the public subnet, and allow RDP access to the bastion from only the corporate public IP addresses.

You manually launch a NAT AMI in a public subnet. The network is properly configured. Security groups and network access control lists are properly configured. Instances in a private subnet can access the NAT. The NAT can access the Internet. However, private instances cannot access the Internet. What additional step is required to allow access from the private instances? A. Enable Source/Destination Check on the private instances. B. Enable Source/Destination Check on the NAT instance. C. Disable Source/Destination Check on the private instances. D. Disable Source/Destination Check on the NAT instance.

D. Disable Source/Destination Check on the NAT instance.
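
A minimal boto3 sketch of that step; the instance ID is a hypothetical placeholder:

```python
import boto3

boto3.client("ec2").modify_instance_attribute(
    InstanceId="i-0123456789abcdef0",    # hypothetical NAT instance
    SourceDestCheck={"Value": False},    # NAT forwards traffic it didn't originate
)
```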

What does Amazon EBS stand for? A. Elastic Block Storage. B. Elastic Business Server. C. Elastic Blade Server. D. Elastic Block Store.

D. Elastic Block Store.

What's an ECU? A. Extended Cluster User. B. None of these. C. Elastic Computer Usage. D. Elastic Compute Unit

D. Elastic Compute Unit

What does Amazon ELB stand for? A. Elastic Linux Box B. Encrypted Linux Box C. Encrypted Load Balancing D. Elastic Load Balancer

D. Elastic Load Balancer

You are migrating a legacy client-server application to AWS. The application responds to a specific DNS domain (e.g. www.example.com) and has a 2-tier architecture, with multiple application servers and a database server. Remote clients use TCP to connect to the application servers. The application servers need to know the IP address of the clients in order to function properly and are currently taking that information from the TCP socket. A Multi-AZ RDS MySQL instance will be used for the database. During the migration you can change the application code, but you have to file a change request. How would you implement the architecture on AWS in order to maximize scalability and high availability? A. File a change request to implement Alias Resource support in the application. Use Route 53 Alias Resource Record to distribute load on two application servers in different AZs. B. File a change request to implement Latency Based Routing support in the application. Use Route 53 with Latency Based Routing enabled to distribute load on two application servers in different AZs. C. File a change request to implement Cross-Zone support in the application. Use an ELB with a TCP Listener and Cross-Zone Load Balancing enabled, two application servers in different AZs. D. File a change request to implement Proxy Protocol support in the application. Use an ELB with a TCP Listener and Proxy Protocol enabled to distribute load on two application servers in different AZs.

D. File a change request to implement Proxy Protocol support in the application. Use an ELB with a TCP Listener and Proxy Protocol enabled to distribute load on two application servers in different AZs.

While signing REST/Query requests, for additional security, you should transmit your requests using Secure Sockets Layer (SSL) by using _____ . A. HTTP B. Internet Protocol Security (IPsec) C. TLS (Transport Layer Security) D. HTTPS

D. HTTPS

What happens to the I/O operations while you take a database snapshot? A. I/O operations to the database are suspended for an hour while the backup is in progress. B. I/O operations to the database are sent to a Replica (if available) for a few minutes while the backup is in progress. C. I/O operations will be functioning normally D. I/O operations to the database are suspended for a few minutes while the backup is in progress.

D. I/O operations to the database are suspended for a few minutes while the backup is in progress.

What is a Security Group? A. None of these. B. A list of users that can access Amazon EC2 instances. C. An Access Control List (ACL) for AWS resources. D. It acts as a virtual firewall that controls the traffic for one or more instances.

D. It acts as a virtual firewall that controls the traffic for one or more instances.

A customer implemented AWS Storage Gateway with a gateway-cached volume at their main office. An event takes the link between the main and branch office offline. Which methods will enable the branch office to access their data? (Choose three.) A. Use a HTTPS GET to the Amazon S3 bucket where the files are located. B. Restore by implementing a lifecycle policy on the Amazon S3 bucket. C. Make an Amazon Glacier Restore API call to load the files into another Amazon S3 bucket within four to six hours. D. Launch a new AWS Storage Gateway instance AMI in Amazon EC2, and restore from a gateway snapshot. E. Create an Amazon EBS volume from a gateway snapshot, and mount it to an Amazon EC2 instance. F. Launch an AWS Storage Gateway virtual iSCSI device at the branch office, and restore from a gateway snapshot.

D. Launch a new AWS Storage Gateway instance AMI in Amazon EC2, and restore from a gateway snapshot. E. Create an Amazon EBS volume from a gateway snapshot, and mount it to an Amazon EC2 instance. F. Launch an AWS Storage Gateway virtual iSCSI device at the branch office, and restore from a gateway snapshot.

In the context of MySQL, version numbers are organized as MySQL version = X.Y.Z. What does X denote here? A. release level B. minor version C. version number D. major version

D. major version. X is always the major version; Y is sometimes major, sometimes minor; Z is the minor version.

Per the AWS Acceptable Use Policy, penetration testing of EC2 instances: A. May be performed by AWS, and will be performed by AWS upon customer request. B. May be performed by AWS, and is periodically performed by AWS. C. Are expressly prohibited under all circumstances. D. May be performed by the customer on their own instances with prior authorization from AWS. E. May be performed by the customer on their own instances, only if performed from EC2 instances

D. May be performed by the customer on their own instances with prior authorization from AWS.

You work for a toy company that has a busy online store. As Christmas approaches, you find that your store is getting more and more traffic. You ensure that the web tier of your store is behind an Auto Scaling group; however, you notice that the web tier is frequently scaling, sometimes multiple times in an hour, only to scale back after peak usage. You need to prevent this so that Auto Scaling does not scale as rapidly, just to scale back again. What option would help you to achieve this? A. Configure Auto Scaling to terminate your oldest instances first, then adjust your CloudWatch alarm. B. Configure Auto Scaling to terminate your newest instances first, then adjust your CloudWatch alarm. C. Change your Auto Scaling so that it only scales at scheduled times. D. Modify the Auto Scaling group cool-down timers & modify the Amazon CloudWatch alarm period that triggers your Auto Scaling scale-down policy.

D. Modify the Auto Scaling group cool-down timers & modify the Amazon CloudWatch alarm period that triggers your Auto Scaling scale-down policy.

Is creating a Read Replica of another Read Replica supported? A. Only in certain regions B. Only with MSSQL based RDS C. Only for Oracle RDS types D. No

D. No

When running my DB Instance as a Multi-AZ deployment, can I use the standby for read or write operations? A. Yes B. Only with MSSQL based RDS C. Only for Oracle RDS instances D. No

D. No. The standby replica exists only for failover purposes; it cannot serve reads or writes.

Will my standby RDS instance be in the same Availability Zone as my primary? A. Only for Oracle RDS types B. Only if configured at launch C. Yes D. No

D. No

Is creating a Read Replica of another Read Replica supported? A. Only in VPC B. Yes C. Only in certain regions D. No

D. No (Note: this question is outdated; MySQL and MariaDB now support creating a Read Replica of an existing Read Replica, though the second tier has higher replication lag.)

What does Amazon S3 stand for? A. Simple Storage Solution. B. Storage Storage Storage (triple redundancy Storage). C. Storage Server Solution. D. Simple Storage Service.

D. Simple Storage Service.

When you view the block device mapping for your instance, you can see only the EBS volumes, not the instance store volumes. A. Depends on the instance type B. FALSE C. Depends on whether you use API call D. TRUE

D. TRUE

Your web application front end consists of multiple EC2 instances behind an Elastic Load Balancer. You configured ELB to perform health checks on these EC2 instances. If an instance fails to pass health checks, which statement will be true? A. The instance gets terminated automatically by the ELB B. The instance gets quarantined by the ELB for root cause analysis. C. The instance is replaced automatically by the ELB D. The ELB stops sending traffic to the instance that failed its health check.

D. The ELB stops sending traffic to the instance that failed its health check.

Amazon EC2 provides a repository of public data sets that can be seamlessly integrated into AWS cloud-based applications. What is the monthly charge for using the public data sets? A. A one-time charge of $10 for all the datasets. B. $1 per dataset per month C. $10 per month for all the datasets D. There is no charge for using the public data sets

D. There is no charge for using the public data sets

You have an application running in us-west-2 that requires six EC2 instances running at all times. With three AZs available in that region (us-west-2a, us-west-2b, and us-west-2c), which of the following deployments provides 100 percent fault tolerance if any single AZ in us-west-2 becomes unavailable? Choose 2 answers A. us-west-2a with two EC2 instances, us-west-2b with two EC2 instances, and us-west-2c with two EC2 instances B. us-west-2a with three EC2 instances, us-west-2b with three EC2 instances, and us-west-2c with no EC2 instances C. us-west-2a with four EC2 instances, us-west-2b with two EC2 instances, and us-west-2c with two EC2 instances D. us-west-2a with six EC2 instances, us-west-2b with six EC2 instances, and us-west-2c with no EC2 instances E. us-west-2a with three EC2 instances, us-west-2b with three EC2 instances, and us-west-2c with three EC2 instances

D. us-west-2a with six EC2 instances, us-west-2b with six EC2 instances, and us-west-2c with no EC2 instances E. us-west-2a with three EC2 instances, us-west-2b with three EC2 instances, and us-west-2c with three EC2 instances

A company is building a voting system for a popular TV show; viewers will watch the performances, then visit the show's website to vote for their favorite performer. It is expected that in a short period of time after the show has finished, the site will receive millions of visitors. The visitors will first log in to the site using their Amazon.com credentials and then submit their vote. After the voting is completed, the page will display the vote totals. The company needs to build the site such that it can handle the rapid influx of traffic while maintaining good performance, but also wants to keep costs to a minimum. Which of the design patterns below should they use? A. Use CloudFront and an Elastic Load Balancer in front of an auto-scaled set of web servers; the web servers will first call the Login With Amazon service to authenticate the user, then process the user's vote and store the result in a Multi-AZ Relational Database Service instance. B. Use CloudFront and the static website hosting feature of S3 with the JavaScript SDK to call the Login With Amazon service to authenticate the user, and use IAM Roles to gain permissions to a DynamoDB table to store the user's vote. C. Use CloudFront and an Elastic Load Balancer in front of an auto-scaled set of web servers; the web servers will first call the Login With Amazon service to authenticate the user, then process the user's vote and store the result in a DynamoDB table, using IAM Roles for EC2 instances to gain permissions to the DynamoDB table. D. Use CloudFront and an Elastic Load Balancer in front of an auto-scaled set of web servers; the web servers will first call the Login With Amazon service to authenticate the user, then process the user's vote and store the result in an SQS queue, using IAM Roles for EC2 instances to gain permissions to the SQS queue. A set of application servers will then retrieve the items from the queue and store the result in a DynamoDB table.

D. Use CloudFront and an Elastic Load Balancer in front of an auto-scaled set of web servers; the web servers will first call the Login With Amazon service to authenticate the user, then process the user's vote and store the result in an SQS queue, using IAM Roles for EC2 instances to gain permissions to the SQS queue. A set of application servers will then retrieve the items from the queue and store the result in a DynamoDB table.

You're running an application on-premises due to its dependency on non-x86 hardware and want to use AWS for data backup. Your backup application is only able to write to POSIX-compatible block-based storage. You have 140 TB of data and would like to mount it as a single folder on your file server. Users must be able to access portions of this data while the backups are taking place. What backup solution would be most appropriate for this use case? A. Use Storage Gateway and configure it to use Gateway Cached volumes. B. Configure your backup software to use S3 as the target for your data backups. C. Configure your backup software to use Glacier as the target for your data backups. D. Use Storage Gateway and configure it to use Gateway Stored volumes.

D. Use Storage Gateway and configure it to use Gateway Stored volumes.

What are the two permission types used by AWS? A. Resource-based and Product-based B. Product-based and Service-based C. Service-based D. User-based and Resource-based

D. User-based and Resource-based

To view information about an Amazon EBS volume, open the Amazon EC2 console, go to EC2, click _____ in the Navigation pane. A. EBS B. Describe C. Details D. Volumes

D. Volumes

Which of the following is not a service of the security category of the AWS trusted advisor service? A. Security Groups - Specific Ports Unrestricted B. MFA on Root Account C. IAM Use D. Vulnerability scans on existing VPCs.

D. Vulnerability scans on existing VPCs.

A photo-sharing service stores pictures in Amazon Simple Storage Service (S3) and allows application sign-in using an OpenID Connect-compatible identity provider. Which AWS Security Token Service approach to temporary access should you use for the Amazon S3 operations? A. SAML-based Identity Federation B. Cross-Account Access C. AWS Identity and Access Management roles D. Web Identity Federation

D. Web Identity Federation
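
A minimal sketch of the STS exchange; the role ARN and token are hypothetical (the token comes from the OpenID Connect identity provider):

```python
import boto3

resp = boto3.client("sts").assume_role_with_web_identity(
    RoleArn="arn:aws:iam::123456789012:role/photo-app-user",  # hypothetical role
    RoleSessionName="photo-app",
    WebIdentityToken="eyJ...",   # hypothetical OIDC token from the IdP
)
creds = resp["Credentials"]
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)   # temporary credentials scoped to the role's S3 permissions
```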

Does Amazon Route 53 support NS Records? A. Yes, it supports Name Service records. B. No C. It supports only MX records. D. Yes, it supports Name Server records.

D. Yes, it supports Name Server records.

A _____ is a storage device that moves data in sequences of bytes or bits (blocks). Hint: These devices support random access and generally use buffered I/O. A. block map B. storage block C. mapping device D. block device

D. block device

To retrieve instance metadata or user data you will need to use the following IP address: A. http://127.0.0.1 B. http://192.168.0.254 C. http://10.0.0.1 D. http://169.254.169.254

D. http://169.254.169.254

Fill in the blanks: The base URI for all requests for instance metadata is _____ A. http://254.169.169.254/latest/ B. http://169.169.254.254/latest/ C. http://127.0.0.1/latest/ D. http://169.254.169.254/latest/

D. http://169.254.169.254/latest/

In the basic monitoring package for EC2, Amazon CloudWatch provides the following metrics: A. web server visible metrics such as number of failed transaction requests B. operating system visible metrics such as memory utilization C. database visible metrics such as number of connections D. hypervisor visible metrics such as CPU utilization

D. hypervisor visible metrics such as CPU utilization, disk I/O, and network I/O

A _____ is the concept of allowing (or disallowing) an entity such as a user, group, or role some type of access to one or more resources. A. user B. AWS Account C. resource D. permission

D. permission

When you run a DB Instance as a Multi-AZ deployment, the _____ serves database writes and reads A. secondary B. backup C. stand by D. primary

D. primary

You can use _____ and _____ to help secure the instances in your VPC. A. security groups and multi-factor authentication B. security groups and 2-Factor authentication C. security groups and biometric authentication D. security groups and network ACLs

D. security groups and network ACLs

*Which of the following AWS services is a non-relational database?* * Redshift * Elasticache * RDS * DynamoDB

DynamoDB

Amazon's Redshift uses which block size for its columnar storage? A. 2KB B. 8KB C. 16KB D. 32KB E. 1024KB / 1MB

E. 1024KB / 1MB

You need to add a route to your routing table in order to allow connections to the internet from your subnet. What route should you add? A. Destination: 192.168.1.258/0 --> Target: your Internet gateway B. Destination: 0.0.0.0/33 --> Target: your virtual private gateway C. Destination: 0.0.0.0/0 --> Target: 0.0.0.0/24 D. Destination: 10.0.0.0/32 --> Target: your virtual private gateway E. Destination: 0.0.0.0/0 --> Target: your Internet gateway

E. Destination: 0.0.0.0/0 --> Target: your Internet gateway
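
A minimal boto3 sketch of adding that default route; the route table and gateway IDs are hypothetical:

```python
import boto3

boto3.client("ec2").create_route(
    RouteTableId="rtb-0123456789abcdef0",   # hypothetical route table
    DestinationCidrBlock="0.0.0.0/0",       # all non-local traffic
    GatewayId="igw-0123456789abcdef0",      # hypothetical Internet gateway
)
```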

Which of the following is not supported by AWS Import/Export? A. Import to Amazon S3 B. Export from Amazon S3 C. Import to Amazon EBS D. Import to Amazon Glacier E. Export from Amazon Glacier

E. Export from Amazon Glacier

Which of the following is not supported by AWS Import/Export? A. Import to Amazon S3 B. Export from Amazon S3 C. Import to Amazon EBS D. Import to Amazon Glacier E. Export to Amazon Glacier

E. Export to Amazon Glacier

How can you secure data at rest on an EBS volume? A. Attach the volume to an instance using EC2's SSL interface. B. Write the data randomly instead of sequentially. C. Encrypt the volume using the S3 server-side encryption service. D. Create an IAM policy that restricts read and write access to the volume. E. Use an encrypted file system on top of the EBS volume.

E. Use an encrypted file system on top of the EBS volume.

*If I wanted to run a database on an EC2 instance, which of the following storage options would Amazon recommend?* * S3 * EBS * Glacier * RDS

EBS

*Which of the following statements are true for EBS volumes?* (Choose two.) * EBS replicates within its Availability Zone to protect your applications from component failure. * EBS replicates across different availability zones to protect your applications from component failure. * EBS replicates across different regions to protect your applications from component failure. * Amazon EBS volumes provide 99.999 percent availability

EBS replicates within its Availability Zone to protect your applications from component failure. Amazon EBS volumes provide 99.999 percent availability. * EBS cannot replicate to a different AZ or region.

*To protect S3 data from both accidental deletion and accidental overwriting, you should:* * Enable S3 versioning on the bucket * Access S3 data using only signed URLs * Disable S3 delete using an IAM bucket policy * Enable S3 reduced redundancy storage * Enable multifactor authentication (MFA) protected access

Enable S3 versioning on the bucket * Signed URLs won't help, even if you disable the ability to delete.
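
A minimal sketch of enabling versioning so overwrites and deletes become recoverable; the bucket name is hypothetical:

```python
import boto3

boto3.client("s3").put_bucket_versioning(
    Bucket="my-bucket",   # hypothetical bucket
    VersioningConfiguration={"Status": "Enabled"},
)
# Deletes now add a delete marker, and overwrites keep the prior version.
```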

*If you want your application to check RDS for an error, have it look for an ______ node in the response from the Amazon RDS API.* * Abort * Incorrect * Error * Exit

Error

True or False: Common points of failure like generators and cooling equipment are shared across Availability Zones.

False

*When you add a rule to an RDS DB security group, you must specify a port number or protocol.* * True * False

False * Technically a destination port number is needed; however, with a DB security group, the RDS instance's port number is applied to the RDS DB security group automatically.

*What happens to the I/O operations of a single-AZ RDS instance during a database snapshot or backup?* * I/O operations to the database are sent to a Secondary instance of a Multi-AZ installation (for the duration of the snapshot.) * I/O operations will function normally. * Nothing. * I/O may be briefly suspended while the backup process initializes (typically under a few seconds), and you may experience a brief period of elevated latency.

I/O may be briefly suspended while the backup process initializes (typically under a few seconds), and you may experience a brief period of elevated latency.

*Under what circumstances would I choose provisioned IOPS over standard storage when creating an RDS instance?* * If you use online transaction processing in your production environment. * If your business was trying to save money. * If this was a test DB. * If you have workloads that are not sensitive to latency/lag.

If you use online transaction processing in your production environment

*In RDS, changes to the backup window take effect ________.* * The next day * You cannot back up in RDS. * After 30 minutes * Immediately

Immediately

*Amazon Glacier is designed for which of the following?* (Choose two.) * Active database storage * Infrequently accessed data * Data archives * Frequently accessed data * Cached session data

Infrequently accessed data, Data archives * Amazon Glacier is designed for archival storage of infrequently accessed data.

What Amazon Elastic Compute Cloud feature can you query from within the instance to access instance properties?

Instance metadata

*How do you protect access to and the use of the AWS account's root user credentials?* (Choose two.) * Never use the root user * Use Multi-Factor Authentication (MFA) along with the root user * Use the root user only for important operations * Lock the root user

Never use the root user. Use Multi-Factor Authentication (MFA) along with the root user * It is critical to keep the root user's credentials protected, and to this end, AWS recommends attaching MFA to the root user and locking the credentials with the MFA in a physically secured location. IAM allows you to create and manage other non-root users and their permissions, as well as establish access levels to the resources.

Can I move a Reserved Instance from one Region to another? A. No B. Only if they are moving into GovCloud C. Yes D. Only if they are moving to US East from another region

No

If an Amazon EBS volume is the root device of an instance, can I detach it without stopping the instance?

No

*An instance is launched into the public subnet of a VPC. Which of the following must be done for it to be accessible from the Internet?* * Attach an elastic IP to the instance. * Nothing. The instance is accessible from the Internet. * Launch a NAT gateway and route all traffic to it. * Make an entry in the route table, passing all traffic going outside the VPC to the NAT instance.

Nothing. The instance is accessible from the Internet. * Since the instance is created in the public subnet and an Internet gateway is already attached to the public subnet, you don't have to do anything explicitly.

*Which AWS service is ideal for Business Intelligence Tools/Data Warehousing?* * Elastic Beanstalk * ElastiCache * DynamoDB * Redshift

Redshift

*Which of the following is most suitable for OLAP?* * Redshift * DynamoDB * RDS * ElastiCache

Redshift * Redshift would be the most suitable for online analytics processing.

*Which of the following is not a feature of DynamoDB?* * The ability to store relational based data * The ability to perform operations by using a user-defined primary key * The primary key can either be a single-attribute or a composite * Data reads that are either eventually consistent or strongly consistent

The ability to store relational based data * DynamoDB is the AWS managed NoSQL database service. New features are constantly being added to it, but it is not an RDBMS service and therefore it will never have the ability to store relational data. All of the other options listed are valid features of DynamoDB.

*Which of the following will occur when an EC2 instance in a VPC with an associated elastic IP is stopped and started?* (Choose two.) * The elastic IP will be dissociated from the instance. * All data on instance-store devices will be lost. * All data on Elastic Block Store (EBS) devices will be lost. * The Elastic Network Interface (ENI) is detached. * The underlying host for the instance is changed.

The elastic IP will be dissociated from the instance. The Elastic Network Interface (ENI) is detached. * If you have any data in the instance store, that will be also be lost, but you should not choose this option since the question is regarding elastic IP.

*What data transfer charge is incurred when replicating data from your primary RDS instance to your secondary RDS instance?* * There is no charge associated with this action. * The charge is half of the standard data transfer charge. * The charge is the same as the standard data transfer charge. * The charge is double the standard data transfer charge

There is no charge associated with this action.

*To execute code in AWS Lambda, what is the size the EC2 instance you need to provision in the back end?* * For code running less than one minute, use a T2 Micro. * For code running between one minute and three minutes, use M2. * For code running between three minutes and five minutes, use M2 large. * There is no need to provision an EC2 instance on the back end.

There is no need to provision an EC2 instance on the back end. * There is no need to provision EC2 servers since Lambda is serverless.
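
A minimal sketch of a Lambda handler illustrating the point: you supply only code, and AWS runs it without any instance you provision or manage:

```python
def handler(event, context):
    # "event" carries the invocation payload; "context" carries runtime info.
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"hello, {name}"}
```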

*When creating an RDS instance, you can select the Availability Zone into which you deploy it.* * False * True

True

*What AWS service can you use to manage multiple accounts?* * Use QuickSight * Use Organizations * Use IAM * Use roles

Use Organizations * QuickSight is used for visualization. * IAM can be leveraged within accounts, and roles are also within accounts.

*If you want to provision your infrastructure in a different region, what is the quickest way to mimic your current infrastructure in a different region?* * Use a CloudFormation template * Make a blueprint of the current infrastructure and provision the same manually in the other region * Use CodeDeploy to deploy the code to the new region * Use the VPC Wizard to lay down your infrastructure in a different region

Use a CloudFormation template * Creating a blueprint and working backward from there is going to be too much effort. Why would you do that when CloudFormation can do it for you? CodeDeploy is used for deploying code, and the VPC Wizard is used to create VPCs.

*You just deployed a three-tier architecture in AWS. The web tier is a public subnet, and the application and database tiers are in a private subnet. You need to download some OS updates for the application. You want a permanent solution for this, which at the same time should be highly available. What is the best way to achieve this?* * Use an Internet gateway * Use a NAT gateway * Use a NAT instance * Use a VPC endpoint

Use a NAT gateway * A NAT gateway provides high availability. * A, C and D are incorrect. A NAT instance doesn't provide high availability. If you want to get high availability from a NAT instance you need multiple NAT instances, which adds to the cost. An Internet gateway is already attached to a public subnet; if you attached an Internet gateway to a private subnet, it no longer remains a private subnet. Finally, a VPC endpoint provides private connectivity from an AWS service to Amazon VPC.
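
A minimal boto3 sketch of wiring a NAT gateway to a private subnet's route table; all resource IDs are hypothetical:

```python
import boto3

ec2 = boto3.client("ec2")

eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0pub1234567890abc",     # hypothetical public subnet
    AllocationId=eip["AllocationId"],
)
nat_id = nat["NatGateway"]["NatGatewayId"]

# Wait until the gateway is available, then route private traffic through it.
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])
ec2.create_route(
    RouteTableId="rtb-0priv1234567890ab",    # hypothetical private route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_id,
)
```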

*What is an important criterion when planning your network topology in AWS?* * Use both IPv4 and IPv6 addresses. * Use nonoverlapping IP addresses. * You should have the same IP address that you have on-premise. * Reserve as many EIP addresses as you can since IPv4 IP addresses are limited.

Use nonoverlapping IP addresses. * Using IPv4 or IPv6 depends on what you are trying to do. You can't keep the same IP addresses you have on-premises: when you integrate the on-premises application with the cloud, you would end up with overlapping IP addresses, and your application in the cloud wouldn't be able to talk to the on-premises application. You should allocate only the number of EIPs you need; if you allocate an EIP and don't use it, you incur a charge for it.

*Which set of RDS database engines is currently available?* * MariaDB, SQL Server, MySQL, Cassandra * Oracle, SQL Server, MySQL, PostgreSQL * PostgreSQL, MariaDB, MongoDB, Aurora * Aurora, MySQL, SQL Server, Cassandra

Oracle, SQL Server, MySQL, PostgreSQL

In order to optimize performance for a compute cluster that requires low inter-node latency, what feature should you use?

Placement Groups

Which of the following DynamoDB features are chargeable, when using a single region? (Choose 2) * Incoming Data Transfer * Read and Write Capacity * The number of tables created * Storage of Data

Read and Write Capacity, Storage of Data * There will always be a charge for provisioning read and write capacity and the storage of data within DynamoDB, therefore these two answers are correct. There is no charge for the transfer of data into DynamoDB, providing you stay within a single region (if you cross regions, you will be charged at both ends of the transfer). There is no charge for the actual number of tables you can create in DynamoDB, providing the RCU and WCU are set to 0; however, in practice you cannot set these to anything less than 1, so there will always be a nominal fee associated with each table.

*Your web application front end consists of multiple EC2 instances behind an elastic load balancer. You configured the elastic load balancer to perform health checks on these EC2 instances. If an instance fails to pass health checks, which statement will be true?* * The instance is replaced automatically by the elastic load balancer. * The instance gets terminated automatically by the elastic load balancer. * The ELB stops sending traffic to the instance that failed its health check. * The instance gets quarantined by the elastic load balancer for root-cause analysis.

The ELB stops sending traffic to the instance that failed its health check.

