AWS Solutions Architect

Ace your homework and exams now with Quizwiz!

How long can messages live in an SQS queue? A. 12 hours B. 10 days C. 14 days D. 1 year

C. 14 days

Does S3 provide read-after-write consistency for new objects? Choose the correct answer from the options below A. Yes, for all regions B. No, not for any region C. Yes, but only for certain regions and for new objects D. Yes, but only for certain regions, not the us-standard region

A

Your organization has been using a HSM (Hardware Security Module) for secure key storage. It is only used for generating keys for your EC2 instances. Unfortunately, the HSM has been zeroized after someone attempted to log in as the administrator three times using an invalid password. This means that the encryption keys on it have been wiped. You did not have a copy of the keys stored anywhere else. How can you obtain a new copy of the keys that you had stored on HSM? Choose the correct answer from the options below A. You cannot; the keys are lost if you did not have a copy. B. Contact AWS Support; your incident will be routed to the team that supports AWS CloudHSM and a copy of the keys will be sent to you after verification C. Restore a snapshot of the HSM D. You can still connect via CLI; use the command 'get-client-configuration' and you can get a copy of the keys

A

You have defined the following network ACL for your subnet: Rule 100 - ALL TRAFFIC - ALL PROTOCOLS - ALL PORTS - Source: 0.0.0.0/0 - ALLOW; Rule 101 - Custom TCP Rule - TCP (6) - PORT 3000 - Source: 54.12.34.34/32 - DENY; Rule * - ALL TRAFFIC - ALL PROTOCOLS - ALL PORTS - Source: 0.0.0.0/0 - DENY. What will be the outcome when a workstation with IP 54.12.34.34 tries to access your subnet? A. The request will be allowed B. The request will be denied C. The request will be allowed initially and then denied D. The request will be denied initially and then allowed

A. One of the parts of a network ACL rule is the rule number. Rules are evaluated starting with the lowest-numbered rule, and as soon as a rule matches the traffic, it is applied regardless of any higher-numbered rule that may contradict it. Since rule 100 is evaluated first and allows all traffic, all traffic will be allowed no matter what rules come after it. Hence, all options except A are incorrect. For more information on network ACLs, please refer to the below URL: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_ACLs.html
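
The first-match evaluation described above can be sketched in a few lines of Python (a simulation for illustration only, not an AWS API; the `evaluate` helper name is made up):

```python
# Sketch: simulating how a network ACL evaluates rules in ascending
# rule-number order, stopping at the first match. Rule numbers, CIDRs,
# and actions mirror the question above.
import ipaddress

rules = [
    {"number": 100, "protocol": "all", "source": "0.0.0.0/0",      "action": "ALLOW"},
    {"number": 101, "protocol": "tcp", "source": "54.12.34.34/32", "action": "DENY"},
]

def evaluate(source_ip, protocol):
    """Return the action of the first matching rule, or the implicit '*' DENY."""
    for rule in sorted(rules, key=lambda r: r["number"]):
        proto_match = rule["protocol"] in ("all", protocol)
        cidr_match = ipaddress.ip_address(source_ip) in ipaddress.ip_network(rule["source"])
        if proto_match and cidr_match:
            return rule["action"]  # evaluation stops here
    return "DENY"  # the catch-all '*' rule

print(evaluate("54.12.34.34", "tcp"))  # ALLOW: rule 100 matches before rule 101
```

Because rule 100 matches everything, rule 101 is never reached; it would only take effect if it had a lower rule number than the blanket ALLOW.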

How many types of block devices does Amazon EC2 support? Choose one answer from the options below A. 2 B. 3 C. 4 D. 1

A. A block device is a storage device that moves data in sequences of bytes or bits (blocks). These devices support random access and generally use buffered I/O. Examples include hard disks, CD-ROM drives, and flash drives. A block device can be physically attached to a computer or accessed remotely as if it were physically attached. Amazon EC2 supports two types of block devices: instance store volumes (virtual devices whose underlying hardware is physically attached to the host computer for the instance) and EBS volumes (remote storage devices).

Which feature in AWS acts as a firewall that controls the traffic allowed to reach one or more instances? A. Security group B. ACL C. IAM D. Private IP Addresses

A. A security group acts as a virtual firewall that controls the traffic for one or more instances. When you launch an instance, you associate one or more security groups with the instance. You add rules to each security group that allow traffic to or from its associated instances. For example, a security group for EC2 instances that must accept SSH connections would include an inbound rule for TCP on port 22.

You are working for a startup company that is building an application that receives large amounts of data. Unfortunately, current funding has left the start-up short on cash, cannot afford to purchase thousands of dollars of storage hardware, and has opted to use AWS. Which services would you implement in order to store a virtually unlimited amount of data without any effort to scale when demand unexpectedly increases? Choose the correct answer from the options below A. Amazon S3, because it provides unlimited amounts of storage data, scales automatically, is highly available, and durable B. Amazon Glacier, to keep costs low for storage and scale infinitely C. Amazon Import/Export, because Amazon assists in migrating large amounts of data to Amazon S3 D. Amazon EC2, because EBS volumes can scale to hold any amount of data and, when used with Auto Scaling, can be designed for fault tolerance and high availability

A. The best option is S3, because it can host a large amount of data and is the best storage option AWS provides for this case. The answer could be Glacier if the question were only asking for the cheapest way to store a large amount of data, but the trick is the requirement to scale when "demand unexpectedly increases". Glacier retrievals take 3 to 5 hours, so it cannot handle an unexpected increase in demand; S3 is therefore the best choice here.

Which of the following are Invalid VPC peering configurations? Choose 3 answers from the options below A. Overlapping CIDR blocks B. Transitive Peering C. Edge to Edge routing via a gateway D. One to one relationship between 2 VPC's

A B C

API Access Keys are required to make programmatic call to AWS from which of the following? Choose the 3 correct answers from the options below A. AWS Tools for Windows PowerShell B. Managing AWS resources through the AWS console C. Direct HTTP call using the API D. AWS CLI

A C D By default, when you create an access key, its status is Active, which means the user can use the access key for AWS CLI, Tools for Windows PowerShell, and API calls. Each user can have two active access keys, which is useful when you must rotate the user's access keys. You can disable a user's access key, which means it can't be used for API calls. You might do this while you're rotating keys or to revoke API access for a user

As an AWS administrator you are trying to convince a team to use RDS Read Replicas. What are two benefits of using read replicas? Choose the 2 correct answers from the options below A. Creates elasticity in RDS B. Allows both reads and writes C. Improves performance of the primary database by taking workload from it D. Automatic failover in the case of Availability Zone service failures

A, C. By creating an RDS read replica, you can scale out reads for your application, increasing its elasticity. A read replica can also be used to reduce the load on the primary database. Read replicas do not accept write operations, so option B is wrong, and Multi-AZ (not read replicas) provides failover, so option D is wrong.

To maintain compliance with HIPAA laws, all data being backed up or stored on Amazon S3 needs to be encrypted at rest. Assuming S3 is being used to store the healthcare-related data, which are the best methods for encrypting your data? Choose 2 answers. A. Enable SSE on an S3 bucket to make use of AES-256 encryption B. Store the data in encrypted EBS snapshots C. Encrypt the data locally using your own encryption keys, then copy the data to Amazon S3 over HTTPS endpoints D. Store the data on EBS volumes with encryption enabled instead of using Amazon S3

A, C Data protection refers to protecting data while in-transit (as it travels to and from Amazon S3) and at rest (while it is stored on disks in Amazon S3 data centers). You can protect data in transit by using SSL or by using client-side encryption. You have the following options of protecting data at rest in Amazon S3. Use Server-Side Encryption - You request Amazon S3 to encrypt your object before saving it on disks in its data centers and decrypt it when you download the objects. Use Client-Side Encryption - You can encrypt data client-side and upload the encrypted data to Amazon S3. In this case, you manage the encryption process, the encryption keys, and related tools.

What are the languages currently supported by AWS Lambda? Choose 3 answers from the options given below A. Node.js B. Angular JS C. Java D. Python

A, C, D. AWS Lambda supports code written in Node.js (JavaScript), Python, Java (Java 8 compatible), and C# (using the .NET Core runtime).

In AWS Security Groups what are the 2 types of rules you can define? Select 2 options. A. Inbound B. Transitional C. Bi-Directional D. Outbound

A, D. A security group acts as a virtual firewall that controls the traffic for one or more instances. When you launch an instance, you associate one or more security groups with the instance. You add rules to each security group that allow traffic to or from its associated instances. You can modify the rules for a security group at any time; the new rules are automatically applied to all instances that are associated with the security group. Rules can be defined for inbound and outbound traffic.
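
As a concrete illustration, a security group with inbound and outbound rules might be declared in a CloudFormation template along these lines (a sketch; the resource name and source CIDR are placeholders):

```json
{
  "Resources": {
    "WebServerSecurityGroup": {
      "Type": "AWS::EC2::SecurityGroup",
      "Properties": {
        "GroupDescription": "Inbound SSH and HTTP; all outbound",
        "SecurityGroupIngress": [
          { "IpProtocol": "tcp", "FromPort": 22, "ToPort": 22, "CidrIp": "203.0.113.0/24" },
          { "IpProtocol": "tcp", "FromPort": 80, "ToPort": 80, "CidrIp": "0.0.0.0/0" }
        ],
        "SecurityGroupEgress": [
          { "IpProtocol": "-1", "CidrIp": "0.0.0.0/0" }
        ]
      }
    }
  }
}
```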

Which of the following benefits does adding Multi-AZ deployment in RDS provide? Choose 2 answers from the options given below A. MultiAZ deployed database can tolerate an Availability Zone failure B. Decrease latencies if app servers accessing database are in multiple Availability zones C. Make database access times faster for all app servers D. Make database more available during maintenance tasks

A, D. Some of the advantages of Multi-AZ RDS deployments are given below. If an Availability Zone failure or DB instance failure occurs, your availability impact is limited to the time automatic failover takes to complete. The availability benefits of Multi-AZ deployments also extend to planned maintenance and backups: in the case of system upgrades like OS patching or DB instance scaling, these operations are applied first on the standby, prior to the automatic failover, so again the only availability impact is the time required for failover to complete. If a storage volume on your primary fails in a Multi-AZ deployment, Amazon RDS automatically initiates a failover to the up-to-date standby.

You have a requirement to create a subnet in an AWS VPC which will host around 20 hosts. This subnet will be used to host web servers. Which of the below could be the possible CIDR block allocated for the subnet? A. 10.0.1.0/27 B. 10.0.1.0/28 C. 10.0.1.0/29 D. 10.0.1.0/30

A. 10.0.1.0/27. A /27 gives 32 addresses; AWS reserves 5 addresses in each subnet, leaving 27 usable hosts, which fits the requirement of 20. Option B is invalid because a /28 gives only 16 addresses (11 usable hosts). Options C and D are invalid because the allowed block size for a subnet is between a /16 netmask and a /28 netmask, so /29 and /30 cannot be used.
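
The host arithmetic can be checked with Python's standard ipaddress module (the `usable_hosts` helper is illustrative; the constant reflects the 5 addresses AWS reserves in every subnet):

```python
# Sketch: usable hosts per subnet size, subtracting the 5 addresses AWS
# reserves (network, VPC router, DNS, future use, broadcast).
import ipaddress

AWS_RESERVED = 5

def usable_hosts(cidr):
    return ipaddress.ip_network(cidr).num_addresses - AWS_RESERVED

print(usable_hosts("10.0.1.0/27"))  # 27 -> fits 20 hosts
print(usable_hosts("10.0.1.0/28"))  # 11 -> too small
```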

In a VPC, you have launched two web servers and attached to an internet facing ELB. Both your web servers and ELB are located in the public subnet. Yet, you are still not able to access your web application via the ELB's DNS through the internet. What could be done to resolve this issue? A. Attach an Internet gateway to the VPC and route it to the subnet B. Add an elastic IP address to the instance C. Use Amazon Elastic Load Balancer to serve requests to your instances located in the internal subnet D. Recreate the instances again

A. Attach an Internet gateway to the VPC and route it to the subnet. You need to ensure that the VPC has an Internet gateway attached and that the route table for the subnet is properly configured. Option B is invalid because even the ELB itself is not accessible from the Internet. Option C is invalid because neither the instances nor the ELB are reachable via the Internet if no Internet gateway is attached to the VPC. Option D is invalid because recreating the instances will have no impact on the issue.

Which of the following container technologies are currently supported by the AWS ECS service? Choose 2 answers. A. Kubernetes B. Docker C. Mesosphere D. Canonical LXD

A. Kubernetes B. Docker

What are the 2 main components of AutoScaling? Select 2 options. A. Launch Configuration B. Cloudtrail C. Cloudwatch D. AutoScaling Groups

A. Launch Configuration D. AutoScaling Groups. Groups - Your EC2 instances are organized into groups so that they can be treated as a logical unit for the purposes of scaling and management. When you create a group, you can specify its minimum, maximum, and desired number of EC2 instances. Launch configurations - Your group uses a launch configuration as a template for its EC2 instances. When you create a launch configuration, you can specify information such as the AMI ID, instance type, key pair, security groups, and block device mapping for your instances.

You have just provisioned a fleet of EC2 instances and realized that none of them have a public IP address. What settings would need to be changed for the next fleet of instances to be created with public IP addresses? A. Modify the auto-assign public IP setting on the subnet. B. Modify the auto-assign public IP setting on the instance type. C. Modify the auto-assign public IP setting on the route table. D. Modify the auto-assign public IP setting on the VPC.

A. Modify the auto-assign public IP setting on the subnet. This setting is done at the subnet level and if marked as true, all instances launched in that subnet will get a public IP address by default.

You want to retrieve the public IP address assigned to a running instance via the instance metadata. Which of the below URLs is valid for retrieving this data? A. http://169.254.169.254/latest/meta-data/public-ipv4 B. http://254.169.254.169/latest/meta-data/public-ipv4 C. http://254.169.254.169/meta-data/latest/public-ipv4 D. http://169.254.169.254/meta-data/latest/public-ipv4

A. http://169.254.169.254/latest/meta-data/public-ipv4
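
On an instance you would typically fetch this URL with curl. As a sketch, the URL can be assembled from the fixed link-local base address (the `metadata_url` helper name is made up for illustration; no network call is made here):

```python
# Sketch: building an IMDSv1-style instance metadata URL. The base
# address 169.254.169.254 is the fixed link-local metadata endpoint.
METADATA_BASE = "http://169.254.169.254/latest/meta-data/"

def metadata_url(key):
    return METADATA_BASE + key

print(metadata_url("public-ipv4"))
# http://169.254.169.254/latest/meta-data/public-ipv4
```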

The common use for IAM is to manage what? Select 3 options. A. Security Groups B. API Keys C. Multi-Factor Authentication D. Roles

B, C, D. You can use IAM to manage API keys and MFA, along with roles.

Which service does AWS provide for a petabyte-scale data warehouse? A. Amazon DynamoDB B. Amazon Redshift C. Amazon Kinesis D. Amazon Simple Queue Service

B. Amazon Redshift. Amazon Redshift is a fast, fully managed, petabyte-scale data warehouse that makes it simple and cost-effective to analyze all your data using your existing business intelligence tools. Start small for $0.25 per hour with no commitments and scale to petabytes for $1,000 per terabyte per year, less than a tenth the cost of traditional solutions. Option A is wrong because DynamoDB is a NoSQL solution. Option C is wrong because Kinesis is used for processing streams, not for storage. Option D is wrong because SQS is a de-coupling solution.

Which of the following services provides edge locations that can be used to cache frequently accessed pages of a web application? A. SQS B. CloudFront C. Subnets D. EC2

B. CloudFront

When designing a health check for your web application, which is hosted behind an Elastic Load Balancer, which of the following health checks is ideal to implement? A. A TCP health check B. A UDP health check C. An HTTP health check D. A combination of TCP and UDP health checks

C. An HTTP health check. Options B and D are invalid because UDP health checks are not possible. Option A is only partially valid: a simple TCP health check would not detect the scenario where the instance itself is healthy but the web server process has crashed. Instead, you should assess whether the web server can return an HTTP 200 response for some simple request.
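
The difference matters because an HTTP check validates the application layer, not just the TCP connection. A minimal sketch of such a check, using Python's standard library as a stand-in for both the load balancer's probe and the web server on the instance:

```python
# Sketch: an HTTP health check of the kind an ELB performs. A local
# http.server stands in for the web server process on an EC2 instance;
# only an HTTP 200 response counts as healthy.
import http.server
import threading
import urllib.request

class OkHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):
        pass  # keep the demo quiet

def http_health_check(url, timeout=2):
    """Healthy only if the server answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False  # connection refused or timed out: unhealthy

server = http.server.HTTPServer(("127.0.0.1", 0), OkHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

healthy = http_health_check(f"http://127.0.0.1:{port}/")
server.shutdown()
server.server_close()
print(healthy)  # True while the web server process is up
```

Once the server is stopped (the crashed-process scenario), the same check returns False, which is exactly the failure a bare TCP check can miss when only the process behind the port has died.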

You are building an automated transcription service in which Amazon EC2 worker instances process an uploaded audio file and generate a text file. You must store both of these files in the same durable storage until the text file is retrieved. You do not know what the storage capacity requirements are. Which storage option is both cost-efficient and scalable? A. Multiple Amazon EBS volume with snapshots B. A single Amazon Glacier vault C. A single Amazon S3 bucket D. Multiple instance stores

C. A single Amazon S3 bucket. The AWS Simple Storage Service is the best option for this scenario. The AWS documentation provides the following information on S3: Amazon S3 is object storage built to store and retrieve any amount of data from anywhere - web sites and mobile apps, corporate applications, and data from IoT sensors or devices. It is designed to deliver 99.999999999% durability, and stores data for millions of applications used by market leaders in every industry.

What is the best way to move an EBS volume currently attached to an EC2 instance from one availability zone to another ? A. Detach the volume and attach to an EC2 instance in another AZ. B. Create a new volume in the other AZ and specify the current volume as the source. C. Create a snapshot of the volume and then create a volume from the snapshot in the other AZ D. Create a new volume in the AZ and do a disk copy of contents from one volume to another.

C. Create a snapshot of the volume and then create a volume from the snapshot in the other AZ. In order for a volume to be available in another Availability Zone, you first create a snapshot of the volume. Then, when creating a new volume from that snapshot, you can specify the new Availability Zone. Option A is invalid because an instance and its volume have to be in the same AZ for the volume to be attached. Option B is invalid because there is no way to specify a volume as a source. Option D is invalid because a disk copy would be a tedious manual process.

In AWS, what is used for encrypting and decrypting login information to EC2 instances? A. Templates B. AMIs C. Key pairs D. None of the above

C. Key pairs

You have created your own VPC and subnet in AWS. You have launched an instance in that subnet. You have noticed that the instance is not receiving a DNS name. Which of the below options could be a valid reason for this issue? A. The CIDR block for the VPC is invalid B. The CIDR block for the subnet is invalid C. The VPC configuration needs to be changed. D. The subnet configuration needs to be changed.

C. The VPC configuration needs to be changed. If the DNS hostnames option of the VPC is not set to 'Yes', then instances launched in the subnet will not get DNS names. You can change this option by selecting your VPC and clicking 'Edit DNS Hostnames'. Options A and B are invalid because if the CIDR blocks were invalid, the VPC or subnet would not have been created. Option D is invalid because the subnet configuration has no effect on DNS hostnames.

What is the best definition of an SQS message? Choose an answer from the options below A. A mobile push notification B. A set of instructions stored in an SQS queue that can be up to 512KB in size C. A notification sent via SNS D. A set of instructions stored in an SQS queue that can be up to 256KB in size

D. The maximum size of an SQS message, as given in the AWS documentation, is 256 KB.
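
A client-side sketch of that limit (the `fits_in_sqs` helper is illustrative, not part of the SQS API; 256 KB = 262,144 bytes):

```python
# Sketch: checking a payload against the SQS maximum message size of
# 256 KB (262,144 bytes) before attempting to send it.
SQS_MAX_BYTES = 256 * 1024  # 262144

def fits_in_sqs(message: str) -> bool:
    return len(message.encode("utf-8")) <= SQS_MAX_BYTES

print(fits_in_sqs("hello"))        # True
print(fits_in_sqs("x" * 300_000))  # False: payloads this large belong in S3
```

Payloads larger than the limit are typically stored in S3, with the queue message carrying only a pointer to the object.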

Which of the following are best practices for monitoring your EC2 Instances A. Create and implement a monitoring plan that collects monitoring data from all of the parts in your AWS solution B. Automate monitoring tasks as much as possible C. Check the log files on your EC2 instances D. All of the above

D. Use the following best practices for monitoring to help you with your Amazon EC2 monitoring tasks. Make monitoring a priority to head off small problems before they become big ones. Create and implement a monitoring plan that collects monitoring data from all of the parts in your AWS solution so that you can more easily debug a multi-point failure if one occurs. Your monitoring plan should address, at a minimum, the following questions: What are your goals for monitoring? Which resources will you monitor? How often will you monitor these resources? What monitoring tools will you use? Who will perform the monitoring tasks? Who should be notified when something goes wrong? Automate monitoring tasks as much as possible. Check the log files on your EC2 instances.

A company has assigned two web server instances in a VPC subnet to an Elastic Load Balancer (ELB). However, the instances and the ELB are not reachable via URL to the Elastic Load Balancer (ELB). How can you resolve the issue so that your web server instances can start serving the web app data to the public Internet? Choose the correct answer from the options given below A. Attach an Internet gateway to the VPC and route it to the subnet B. Add an elastic IP address to the instance C. Use Amazon Elastic Load Balancer to serve requests to your instances located in the internal subnet D. None of the above

A. If an Internet gateway is not attached to the VPC, which is a prerequisite for the instances to be accessible from the Internet, then the instances will not be reachable. If you assign instances from a private subnet to an ELB, the ELB automatically becomes an internal ELB and AWS assigns it the scheme "internal". If your subnet is public, the ELB automatically becomes an external ELB with the scheme "internet-facing". You can add an Internet gateway to the VPC and an IGW route to the subnet to make it available over the Internet; in that case AWS will still show the ELB scheme as internal, but it will allow Internet traffic to reach the instances.

What is the durability of S3 RRS? A. 99.99% B. 99.95% C. 99.995% D. 99.999999999%

A. RRS provides only 99.99% durability, so there is a chance that data can be lost. You need to ensure you have the right steps in place to replace lost objects.

Which of the following is mandatory when defining a cloudformation template? A. Resources B. Parameters C. Outputs D. Mappings

A. Resources

Which of the following best describes the main feature of an Elastic Load Balancer (ELB) in AWS? A. To evenly distribute traffic among multiple EC2 instances in separate Availability Zones. B. To evenly distribute traffic among multiple EC2 instances in a single Availability Zone. C. To evenly distribute traffic among multiple EC2 instances in multiple regions. D. To evenly distribute traffic among multiple EC2 instances in multiple countries.

A. To evenly distribute traffic among multiple EC2 instances in separate Availability Zones. Elastic Load Balancing automatically distributes incoming application traffic across multiple Amazon EC2 instances. It enables you to achieve fault tolerance in your applications, seamlessly providing the required amount of load balancing capacity needed to route application traffic. The ELB is best used for EC2 instances across multiple AZs. You cannot use an ELB to distribute traffic across regions.

In IAM, what is the representation of a person or service ? A. User B. Group C. Team D. Role

A. User

When you disable automated backups for AWS RDS, what are you compromising on? Choose one answer from the options given below A. Nothing, you are actually saving resources on AWS B. You are disabling the point-in-time recovery. C. Nothing really, you can still take manual backups. D. You cannot disable automated backups in RDS.

B. Amazon RDS creates a storage volume snapshot of your DB instance, backing up the entire DB instance and not just individual databases. You can set the backup retention period when you create a DB instance. If you don't set it, Amazon RDS uses a default retention period of one day. You can modify the backup retention period; valid values are 0 (for no backup retention) to a maximum of 35 days. AWS also specifically mentions the risk of disabling automated backups: you lose the ability to perform point-in-time recovery.

You are working in the media industry and have created a web application where users can upload photos they create to your website. This web application must be able to call the S3 API in order to function. Where should you store your API credentials while maintaining the maximum level of security? A. Save the API credentials to your php files. B. Don't save your API credentials. Instead create a role in IAM and assign this role to an EC2 instance when you first create it. C. Save your API credentials in a public Github repository. D. Pass API credentials to the instance using instance userdata.

B. Don't save your API credentials. Instead create a role in IAM and assign this role to an EC2 instance when you first create it. Always use IAM Roles for accessing AWS resources from EC2 Instances The AWS Documentation mentions the following IAM roles are designed so that your applications can securely make API requests from your instances, without requiring you to manage the security credentials that the applications use. Instead of creating and distributing your AWS credentials, you can delegate permission to make API requests using IAM roles
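
For reference, the trust policy that lets EC2 instances assume such a role is a small JSON document like the following (this uses the standard EC2 service principal; the role's permission policies are defined separately):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

With the role attached, the application retrieves temporary credentials automatically from the instance metadata, so no long-lived keys ever touch the code or the userdata.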

A customer is leveraging Amazon Simple Storage Service in eu-west-1 to store static content for a web-based property. The customer is storing objects using the Standard Storage class. Where are the customers objects replicated? A. A single facility in eu-west-1 and a single facility in eu-central-1 B. A single facility in eu-west-1 and a single facility in us-east-1 C. Multiple facilities in eu-west-1 D. A single facility in eu-west-1

C

What service from AWS can help manage the budgets for all resources in AWS? Choose one answer from the options below A. Cost Explorer B. Cost Allocation Tags C. AWS Budgets D. Payment History

C A budget is a way to plan your usage and your costs (also known as spend data), and to track how close your usage and costs are to exceeding your budgeted amount. Budgets use data from Cost Explorer to provide you with a quick way to see your usage-to-date and current estimated charges from AWS, and to see how much your predicted usage accrues in charges by the end of the month. Budgets also compare the current estimated usage and charges to the amount that you indicated that you want to use or spend, and lets you see how much of your budget has been used. AWS updates your budget status several times a day. Budgets track your unblended costs, subscriptions, and refunds. You can create budgets for different types of usage and different types of cost. For example, you can create a budget to see how many EC2 hours you have used, or how many GB you have stored in an S3 bucket. You can also create a budget to see how much you are spending on a particular service, or how often you call a particular API operation. Budgets use the same data filters as Cost Explorer. To create your budget, you can perform the below steps Step 1) Go to your billing section, go to Budgets and create a new Budget Step 2) In the next screen, you can then mention the budget amount and what services to link the budget to.

Which of the below resources cannot be tagged in AWS A. Images B. EBS Volumes C. VPC endpoint D. VPC

C. Tags enable you to categorize your AWS resources in different ways, for example, by purpose, owner, or environment. This is useful when you have many resources of the same type: you can quickly identify a specific resource based on the tags you've assigned to it. Each tag consists of a key and an optional value, both of which you define. However, you cannot tag a VPC endpoint.

Where does CloudTrail store all of the logs that it creates? Choose one answer from the options given below. A. A separate EC2 instance with EBS storage B. An RDS instance C. A DynamoDB instance D. Amazon S3

D. When you enable CloudTrail, you need to provide an S3 bucket where all the logs can be written.

Which of the below features allows you to take backups of your EBS volumes? Choose one answer from the options given below. A. Volumes B. State Manager C. Placement Groups D. Snapshots

D You can easily create a snapshot from a volume while the instance is running and the volume is in use. You can do this from the EC2 dashboard.

Which of the following verbs are supported with the API Gateway A. GET B. POST C. PUT D. All of the above

D. All of the above

What type of monitoring for EBS volumes is available automatically in 5 minute periods at no charge? A. Basic B. Primary C. Detailed D. Local

A

What are the two layers of security provided by AWS in the VPC? A. Security Groups and NACLs B. NACLs and DHCP Options C. Route Tables and Internet gateway D. None of the above

A. Security Groups and NACLs

In the event of an unplanned outage of your primary DB, AWS RDS automatically switches over to the secondary. In such a case which record in Route 53 is changed? Select one answer from the options given below A. DNAME B. CNAME C. TXT D. MX

B

In order to establish a successful site-to-site VPN connection from your on-premise network to the VPC (Virtual Private Cloud), which of the following needs to be configured outside of the VPC? Choose the correct answer from the options below A. The main route table to route traffic through a NAT instance B. A public IP address on the customer gateway for the on-premise network C. A dedicated NAT instance in a public subnet D. An Elastic IP address to the Virtual Private Gateway

B. On the customer side, the gateway needs a public IP address that the VPN connection can be established to.

Where does AWS Elastic Beanstalk store the application files and server log files? Choose one answer from the options given below A. On the local server within Elastic Beanstalk B. AWS S3 C. AWS Cloudtrail D. AWS DynamoDB

B AWS Elastic Beanstalk stores your application files and, optionally, server log files in Amazon S3. If you are using the AWS Management Console, the AWS Toolkit for Visual Studio, or AWS Toolkit for Eclipse, an Amazon S3 bucket will be created in your account for you and the files you upload will be automatically copied from your local client to Amazon S3. Optionally, you may configure Elastic Beanstalk to copy your server log files every hour to Amazon S3. You do this by editing the environment configuration settings

How much temporary storage is allocated to a Lambda function per invocation? A. 256 MB B. 512 MB C. 2 GiB D. 16 GiB

B. 512 MB

What are two primary requirements of a NAT instance? Choose the correct answer from the options below: A. A NAT instance must be provisioned into a private subnet, and it must be part of the private subnet's route table. B. A NAT instance must be provisioned into a public subnet, and it must be part of the private subnet's route table. C. A NAT instance must be provisioned into a private subnet, and does not require a public IP address. D. A NAT instance must be provisioned into a public subnet, and must be combined with a bastion host.

B. A NAT instance must be provisioned into a public subnet, and it must be part of the private subnet's route table. As the AWS documentation shows, the NAT instance is placed in the public subnet, and the private subnet's route table must have a route pointing to it.

For which of the following databases does Amazon RDS provide high availability and failover support using Amazon's failover technology for DB instances in Multi-AZ deployments? Select 3 options. A. SQL Server B. MySQL C. Oracle D. MariaDB

B. MySQL C. Oracle D. MariaDB

What are some of the common causes why you cannot connect to a DB instance on AWS? Select 3 options. A. There is a read replica being created, hence you cannot connect B. The DB is still being created C. The local firewall is stopping the communication traffic D. The security groups for the DB are not properly configured.

B. The DB is still being created C. The local firewall is stopping the communication traffic D. The security groups for the DB are not properly configured.

You are trying to configure Cross-Region Replication for your S3 bucket, but the Cross-Region Replication option is disabled and cannot be selected. Which of the below could be the possible reason for this? A. The feature is not available in that region B. You need to enable versioning on the bucket C. The source region is currently down D. The destination region is currently down

B. You need to enable versioning on the bucket. Requirements for cross-region replication: The source and destination buckets must be versioning-enabled. The source and destination buckets must be in different AWS regions. You can replicate objects from a source bucket to only one destination bucket. Amazon S3 must have permission to replicate objects from that source bucket to the destination bucket on your behalf. If the source bucket owner also owns the object, the bucket owner has full permissions to replicate the object. If not, the source bucket owner must have permission for the Amazon S3 actions s3:GetObjectVersion and s3:GetObjectVersionACL to read the object and object ACL. If you are setting up cross-region replication in a cross-account scenario (where the source and destination buckets are owned by different AWS accounts), the source bucket owner must have permission to replicate objects in the destination bucket. The destination bucket owner needs to grant these permissions via a bucket policy. Option A is invalid because the feature is available in all regions. Option C is invalid because if the source region were down, you would not be able to access S3 in that region at all. Option D is invalid because you have not yet reached the configuration stage where you select the destination bucket.
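
The replication configuration itself is a small JSON document. Below is a minimal sketch of what it might look like when applied via the S3 API (for example with boto3's put_bucket_replication); the role ARN and bucket names are hypothetical placeholders, and the validation helper simply mirrors the requirements listed above.

```python
# Sketch of a cross-region replication configuration in the shape the
# S3 API accepts. The role ARN, rule ID, and bucket names are made-up
# placeholders for illustration only.
replication_configuration = {
    # IAM role that grants S3 permission to replicate on your behalf
    "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
    "Rules": [
        {
            "ID": "backup-rule",
            "Prefix": "",            # empty prefix = replicate all objects
            "Status": "Enabled",
            "Destination": {
                # must be a versioning-enabled bucket in a different region
                "Bucket": "arn:aws:s3:::company-backup-replica"
            },
        }
    ],
}

def validate_replication_config(config):
    """Basic sanity checks mirroring the requirements listed above."""
    assert "Role" in config, "S3 needs an IAM role to replicate on your behalf"
    for rule in config["Rules"]:
        assert rule["Status"] in ("Enabled", "Disabled")
        assert rule["Destination"]["Bucket"].startswith("arn:aws:s3:::")
    return True
```

Remember that versioning must already be enabled on both buckets before a configuration like this is accepted.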

Which AWS service is used as a global content delivery network (CDN) service in AWS? A. Amazon SES B. Amazon CloudTrail C. Amazon CloudFront D. Amazon S3

C Amazon CloudFront is a web service that gives businesses and web application developers an easy and cost-effective way to distribute content with low latency and high data transfer speeds. Like other AWS services, Amazon CloudFront is a self-service, pay-per-use offering, requiring no long-term commitments or minimum fees. With CloudFront, your files are delivered to end users using a global network of edge locations.

Which of the services from AWS helps to migrate databases to AWS easily? A. AWS Snowball B. AWS Direct Connect C. AWS Database Migration Service (DMS) D. None of the above

C. AWS Database Migration Service (DMS) AWS Database Migration Service helps you migrate databases to AWS easily and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. The AWS Database Migration Service can migrate your data to and from most widely used commercial and open-source databases.

What is the service used by AWS to segregate control over the various AWS services? A. AWS RDS B. AWS Integrity Management C. AWS Identity and Access Management D. Amazon EMR

C. AWS Identity and Access Management

You work for a market analysis firm that is designing a new environment. They will ingest large amounts of market data via Kinesis and then analyze this data using Elastic Map Reduce. The data is then imported into a high-performance NoSQL Cassandra database which will run on EC2 and then be accessed by traders from around the world. The database volume itself will sit on 2 EBS volumes that will be grouped into a RAID 0 volume. They are expecting very high demand during peak times, with an IOPS performance level of approximately 15,000. Which EBS volume should you recommend? A. Magnetic B. General Purpose SSD C. Provisioned IOPS (PIOPS) D. Turbo IOPS (TIOPS)

C. Provisioned IOPS (PIOPS) When you are looking at hosting I/O-intensive applications such as databases, always look to using Provisioned IOPS as the preferred storage option.

A customer wants to leverage Amazon Simple Storage Service (S3) and Amazon Glacier as part of their backup and archive infrastructure. The customer plans to use third-party software to support this integration. Which approach will limit the access of the third party software to only the Amazon S3 bucket named "company-backup"? A. A custom bucket policy limited to the Amazon S3 API in the Amazon Glacier archive "company-backup" B. A custom bucket policy limited to the Amazon S3 API in "company-backup" C. A custom IAM user policy limited to the Amazon S3 API for the Amazon Glacier archive "company-backup". D. A custom IAM user policy limited to the Amazon S3 API in "company-backup".

D You can use IAM user policies and attach them to users/groups that need specific access to S3 buckets.

For DynamoDB, what are the scenarios in which you would want to enable cross-region replication? A. Live data migration B. Easier traffic management C. Disaster recovery D. All of the above

D. All of the above

Which of the below elements can you manage in the IAM dashboard? Choose 3 answers from the options given below A. Users B. Encryption Keys C. Cost Allocation Reports D. Policies

A, B, D

In VPCs with private and public subnets, database servers should ideally be launched into: A. The public subnet B. The private subnet C. Either of them D. Not recommended, they should ideally be launched outside VPC

B Normally database servers should not be exposed to the internet and should reside in private subnets. The web servers will be part of the public subnet and exposed to the end users.

You've been tasked with building out a duplicate environment in another region for disaster recovery purposes. Part of your environment relies on EC2 instances with preconfigured software. What steps would you take to configure the instances in another region? Choose the correct answer from the options below A. Create an AMI of the EC2 instance B. Create an AMI of the EC2 instance and copy the AMI to the desired region C. Make the EC2 instance shareable among other regions through IAM permissions D. None of the above

B You can copy an Amazon Machine Image (AMI) within or across an AWS region using the AWS Management Console, the AWS command line tools or SDKs, or the Amazon EC2 API, all of which support the CopyImage action. You can copy both Amazon EBS-backed AMIs and instance store-backed AMIs. You can copy AMIs with encrypted snapshots and encrypted AMIs.

You are a security architect working for a large antivirus company. The production environment has recently been moved to AWS and is in a public subnet. You are able to view the production environment over HTTP however when your customers try to update their virus definition files over a custom port, that port is blocked. You log in to the console and you allow traffic in over the custom port. How long will this take to take effect? A. Straight away but to the new instances only. B. Immediately. C. After a few minutes this should take effect. D. Straight away to the new instances, but old instances must be stopped and restarted before the new rules apply.

B. Immediately. When you make a change to Security Groups or Network ACLs, it is applied immediately.

You are creating a number of EBS volumes for your EC2 instances and are concerned about backups of the EBS volumes. Which of the below is a way to back up the EBS volumes? A. Configure AWS Storage Gateway with EBS volumes as the data source and store the backups on-premise through the Storage Gateway B. Write a cron job that uses the AWS CLI to take a snapshot of production EBS volumes. C. Use a lifecycle policy to back up EBS volumes stored on Amazon S3 for durability D. Write a cron job on the server that compresses the data and then copies it to Glacier

B. Write a cron job that uses the AWS CLI to take a snapshot of production EBS volumes. A point-in-time snapshot of an EBS volume can be used as a baseline for new volumes or for data backup. If you make periodic snapshots of a volume, the snapshots are incremental: only the blocks on the device that have changed since your last snapshot are saved in the new snapshot. Even though snapshots are saved incrementally, the snapshot deletion process is designed so that you need to retain only the most recent snapshot in order to restore the entire volume. You can create a snapshot via the CLI command create-snapshot. Option A is incorrect because the Storage Gateway is normally used to back up your on-premise data. Option C is incorrect because lifecycle policies are used for S3 storage. Option D is incorrect because compression adds another maintenance task, and storing the data in Glacier is not an ideal option for this purpose.

What is the name of the VPC that is automatically created for your AWS account when you use it for the first time? A. Primary VPC B. First VPC C. Default VPC D. Initial VPC

C. Default VPC

Your company has petabytes of data that it wants to move from their on-premise location to AWS. Which of the following can be used to fulfil this requirement? A. AWS VPN B. AWS Migration C. AWS VPC D. AWS Snowball

D. AWS Snowball Snowball is a petabyte-scale data transport solution that uses secure appliances to transfer large amounts of data into and out of the AWS cloud. Using Snowball addresses common challenges with large-scale data transfers including high network costs, long transfer times, and security concerns. Transferring data with Snowball is simple, fast, secure, and can be as little as one-fifth the cost of high-speed Internet.

A company has EC2 instances running in AWS. The EC2 instances are running via an Auto Scaling solution. A lot of application requests or work items are being lost because of the load on the servers. The Auto Scaling solution is launching new instances to take the load, but there are still some application requests which are being lost. Which of the following is likely to provide the most cost-effective solution to avoid losing recently submitted requests? Choose the correct answer from the options given below A. Use an SQS queue to decouple the application components B. Keep one extra EC2 instance always powered on in case a spike occurs C. Use larger instances for your application D. Pre-warm your Elastic Load Balancer

A Amazon Simple Queue Service (SQS) is a fully managed message queuing service for reliably communicating among distributed software components and microservices, at any scale. Building applications from individual components that each perform a discrete function improves scalability and reliability, and is a best-practice design for modern applications.

You have been told that you need to set up a bastion host by your manager in the cheapest, most secure way, and that you should be the only person that can access it via SSH. Which of the following setups would satisfy your manager's request? Choose the correct answer from the options below A. A small EC2 instance and a security group which only allows access on port 22 via your IP address B. A large EC2 instance and a security group which only allows access on port 22 via your IP address C. A large EC2 instance and a security group which only allows access on port 22 D. A small EC2 instance and a security group which only allows access on port 22

A The bastion host should have a security group that only allows access from a particular IP address for maximum security. Since the request is for the cheapest infrastructure, you should use a small instance.

Besides regions and their included availability zones, which of the following is another "regional" data center location used for content distribution? Choose the correct answer from the options below A. Edge Location B. Front Location C. Backend Location D. Cloud Location

A Using a network of edge locations around the world, Amazon CloudFront caches copies of your static content close to viewers, lowering latency when they download your objects and giving you the high, sustained data transfer rates needed to deliver large popular objects to end users at scale.

When reviewing the Auto Scaling events, it is noticed that an application is scaling up and down multiple times within the hour. What design change could you make to optimize cost while preserving elasticity? Choose the correct answer from the options below A. Change the scale down CloudWatch metric to a higher threshold B. Increase the instance type in the launch configuration C. Increase the base number of Auto Scaling instances for the Auto Scaling group D. Add provisioned IOPS to the instances

A If the scale-down threshold is set too aggressively, instances are terminated soon after they launch, causing the group to scale up and down repeatedly. Tuning the CloudWatch threshold for the scale-down policy reduces this thrashing while optimizing cost and preserving elasticity.

A customer is looking for a hybrid cloud solution and learns about AWS Storage Gateway. What is the main use case of AWS Storage Gateway? A. It allows you to integrate on-premises IT environments with Cloud Storage. B. A direct encrypted connection to Amazon S3. C. It's a backup solution that provides an on-premises Cloud storage. D. It provides an encrypted SSL endpoint for backups in the Cloud.

A Option B is wrong because Storage Gateway is not simply an encrypted connection to S3. Option C is wrong because it is more than an on-premises backup solution; you can use S3 itself as a backup solution. Option D is wrong because an SSL endpoint can already be achieved via S3. The AWS Storage Gateway's software appliance is available for download as a virtual machine (VM) image that you install on a host in your datacenter. Once you've installed your gateway and associated it with your AWS account through the activation process, you can use the AWS Management Console to create either gateway-cached volumes, gateway-stored volumes, or a gateway-virtual tape library (VTL), which can be mounted as iSCSI devices by your on-premises applications. There are primarily 2 types of volumes: 1) Gateway-cached volumes allow you to utilize Amazon S3 for your primary data, while retaining some portion of it locally in a cache for frequently accessed data. 2) Gateway-stored volumes store your primary data locally, while asynchronously backing up that data to AWS.

Which events would cause Amazon RDS to initiate a failover to the standby replica? Select 3 options. A. Loss of availability in primary Availability Zone B. Loss of network connectivity to primary C. Storage failure on secondary D. Storage failure on primary

A, B, D Amazon RDS detects and automatically recovers from the most common failure scenarios for Multi-AZ deployments so that you can resume database operations as quickly as possible without administrative intervention. Amazon RDS automatically performs a failover in the event of any of the following: loss of availability in the primary Availability Zone, loss of network connectivity to the primary, compute unit failure on the primary, or storage failure on the primary. Note: When operations such as DB instance scaling or system upgrades like OS patching are initiated for Multi-AZ deployments, for enhanced availability they are applied first on the standby prior to an automatic failover. As a result, your availability impact is limited only to the time required for automatic failover to complete. Note that Amazon RDS Multi-AZ deployments do not fail over automatically in response to database operations such as long-running queries, deadlocks, or database corruption errors.
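
The failover triggers above can be captured in a tiny decision table. The event names below are my own labels for the conditions in the explanation, not AWS API values.

```python
# Events that trigger an automatic Multi-AZ failover, per the list above.
FAILOVER_EVENTS = {
    "loss_of_primary_az",
    "loss_of_network_to_primary",
    "compute_unit_failure_on_primary",
    "storage_failure_on_primary",
}

# Events that do NOT trigger an automatic failover, also per the text above.
NON_FAILOVER_EVENTS = {
    "storage_failure_on_secondary",
    "long_running_query",
    "deadlock",
    "database_corruption_error",
}

def initiates_failover(event):
    """Return True if RDS would fail over to the standby for this event."""
    return event in FAILOVER_EVENTS
```

This makes the exam trap explicit: a storage failure on the secondary (option C) does not cause a failover, because the primary is still healthy.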

Which of the following will occur when an EC2 instance in a VPC with an associated Elastic IP is stopped and started? Select 2 options. A. The underlying host for the instance can be changed B. The ENI (Elastic Network Interface) is detached C. All data on instance-store devices will be lost D. The Elastic IP will be dissociated from the instance

A, C Find more details here: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Stop_Start.html EC2 instances are available with EBS-backed storage and instance store-backed storage. In fact, most EC2 instance types are now EBS-backed only, so we need to consider both options while answering the question. Find more details here: https://aws.amazon.com/ec2/instance-types/ If you have an EBS-backed instance, the underlying host is changed when the instance is stopped and started. And if you have instance store volumes, the data on the instance store devices will be lost.

You are designing various CloudFormation templates, each template to be used for a different purpose. What determines the cost of using the CloudFormation templates? A. CloudFormation does not have a cost itself. B. You are charged based on the size of the template. C. You are charged based on the time it takes to launch the template. D. It has a basic charge of $1.10

A. CloudFormation does not have a cost itself. The AWS documentation states this clearly: you only get charged for the underlying resources created using CloudFormation templates. Because of this, all other options are automatically invalid.

Which of the following is a valid bucket name? A. demo B. Example C. .example D. demo.

A. demo The following are the restrictions when naming buckets in S3. Bucket names must be at least 3 and no more than 63 characters long. Bucket names must be a series of one or more labels. Adjacent labels are separated by a single period (.). Bucket names can contain lowercase letters, numbers, and hyphens. Each label must start and end with a lowercase letter or a number. Bucket names must not be formatted as an IP address (e.g., 192.168.5.4). When using virtual hosted-style buckets with SSL, the SSL wildcard certificate only matches buckets that do not contain periods. To work around this, use HTTP or write your own certificate verification logic. We recommend that you do not use periods (".") in bucket names. Option B is invalid because it has an uppercase character. Option C is invalid because the bucket name cannot start with a period (.). Option D is invalid because the bucket name cannot end with a period (.).
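
The naming rules above can be expressed as a small validation function. This is an illustrative sketch of the listed restrictions, not the official AWS validator.

```python
import re

def is_valid_bucket_name(name):
    """Check an S3 bucket name against the rules listed above.

    Rules: 3-63 characters; one or more labels separated by single
    periods; labels contain lowercase letters, digits, and hyphens and
    must start and end with a lowercase letter or digit; the whole name
    must not be formatted as an IP address.
    """
    if not 3 <= len(name) <= 63:
        return False
    # must not look like an IP address, e.g. 192.168.5.4
    if re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", name):
        return False
    label = r"[a-z0-9]([a-z0-9-]*[a-z0-9])?"
    return re.fullmatch(rf"{label}(\.{label})*", name) is not None
```

Running the four answer options through this check, only "demo" passes, matching the answer above.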

An existing application stores sensitive information on a non-boot Amazon EBS data volume attached to an Amazon Elastic Compute Cloud instance. Which of the following approaches would protect the sensitive data on an Amazon EBS volume? A. Upload your customer keys to AWS CloudHSM. Associate the Amazon EBS volume with AWS CloudHSM. Remount the Amazon EBS volume. B. Create and mount a new, encrypted Amazon EBS volume. Move the data to the new volume. Delete the old Amazon EBS volume. C. Unmount the EBS volume. Toggle the encryption attribute to True. Re-mount the Amazon EBS volume. D. Snapshot the current Amazon EBS volume. Restore the snapshot to a new, encrypted Amazon EBS volume. Mount the Amazon EBS volume

B The only workable option listed here is to create and mount a new encrypted volume. Option A is wrong because you cannot encrypt a volume once it is created; you would need to use some local encryption mechanism if you wanted to encrypt the data on the existing volume. Option C is wrong because even if you unmount the volume, you cannot encrypt it; encryption has to be enabled during volume creation. Option D is wrong because a snapshot of an unencrypted volume is also unencrypted. You cannot create an encrypted snapshot of an unencrypted volume or change an existing volume from unencrypted to encrypted directly; you have to create a new encrypted volume and transfer the data to it. The other option is to encrypt a volume's data by means of snapshot copying: 1. Create a snapshot of your unencrypted EBS volume. This snapshot is also unencrypted. 2. Copy the snapshot while applying encryption parameters. The resulting target snapshot is encrypted. 3. Restore the encrypted snapshot to a new volume, which is also encrypted. But that option is not listed.

What is one of the major advantages of having a VPN in AWS? A. You don't have to worry about security, this is managed by AWS. B. You can connect your cloud resources to on-premise data centers using VPN connections C. You can provision unlimited number of S3 resources. D. None of the above

B One of the major advantages is that you can connect your on-premise data center to AWS via a VPN connection. You can create an IPsec, hardware VPN connection between your VPC and your remote network. On the AWS side of the VPN connection, a virtual private gateway provides two VPN endpoints for automatic failover. You configure your customer gateway, which is the physical device or software application on the remote side of the VPN connection.

Which of the following services allow the administrator access to the underlying operating system? Choose the 2 correct answers from the options below A. Amazon RDS B. Amazon EMR C. Amazon EC2 D. DynamoDB

B, C Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers. For more information on EC2, please refer to the below link: https://aws.amazon.com/ec2/ Your security credentials identify you to services in AWS and grant you unlimited use of your AWS resources, such as your Amazon EC2 resources: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/UsingIAM.html Amazon EMR provides a managed Hadoop framework that makes it easy, fast, and cost-effective to process vast amounts of data across dynamically scalable Amazon EC2 instances. You can also run other popular distributed frameworks such as Apache Spark, HBase, Presto, and Flink in Amazon EMR, and interact with data in other AWS data stores such as Amazon S3 and Amazon DynamoDB. For more information on EMR, please refer to the below link: https://aws.amazon.com/emr/ Since both EC2 and EMR run on EC2 instances that you control, they allow administrator access to the underlying operating system; RDS and DynamoDB are managed services that do not.

After migrating an application architecture from on-premise to AWS, for which of the following AWS services that your application uses will you not be responsible for the ongoing maintenance of packages? Choose the 2 correct answers from the options below. A. Elastic Beanstalk B. RDS C. DynamoDB D. EC2

B, C Both RDS and DynamoDB are managed solutions provided by AWS. Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while managing time-consuming database administration tasks, freeing you up to focus on your applications and business. For more information on RDS, please refer to the below link: https://aws.amazon.com/rds/ Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale. It is a fully managed cloud database and supports both document and key-value store models.

What are three attributes of DynamoDB? Choose the 3 correct answers from the options below A. Used for data warehousing B. A NoSQL database platform C. Uses key-value store D. Fully-managed

B, C, D Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale. It is a fully managed cloud database and supports both document and key-value store models. Its flexible data model and reliable performance make it a great fit for mobile, web, gaming, ad tech, IoT, and many other applications. AWS Redshift can be used for data warehousing.

Which of the following are ways that users can interface with AWS? Select 2 options. A. AWS Cloudfront B. AWS CLI C. AWS Console D. AWS Cloudwatch

B. AWS CLI C. AWS Console

You are planning to use MySQL RDS in AWS. You have a requirement to ensure that you are able to recover from a database crash. Which of the below is not a recommended practice for fulfilling this requirement? A. Ensure that automated backups are enabled for the RDS instance B. Ensure that you use the MyISAM storage engine for MySQL C. Ensure that the database does not grow too large D. Ensure that file sizes for the RDS instance are well under 16 TB.

B. Ensure that you use the MyISAM storage engine for MySQL. InnoDB is the recommended storage engine for MySQL on Amazon RDS; crash recovery and the automated backup and restore features are reliably supported only for InnoDB, so using MyISAM is not a recommended practice here.

Currently you're helping design and architect a highly available application. After building the initial environment, you've found that part of your application does not work correctly until port 443 is added to the security group. After adding port 443 to the appropriate security group, how much time will it take before the changes are applied and the application begins working correctly? Choose the correct answer from the options below A. Generally, it takes 2-5 minutes in order for the rules to propagate B. Immediately after a reboot of the EC2 instances belong to that security group C. Changes apply instantly to the security group, and the application should be able to respond to 443 requests D. It will take 60 seconds for the rules to apply to all availability zones within the region

C

How are Network access rules evaluated? Choose the correct answer from the options below A. Rules are evaluated by rule number, from highest to lowest, and executed immediately when a matching allow/ deny rule is found. B. All rules are evaluated before any traffic is allowed or denied. C. Rules are evaluated by rule number, from lowest to highest, and executed immediately when a matching allow/ deny rule is found. D. Rules are evaluated by rule number, from lowest to highest, and executed after all rules are checked for conflicting allow/ deny rules.

C
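
The behaviour in option C can be sketched as a short simulation: rules are applied in ascending rule-number order, the first match wins, and the implicit '*' rule denies anything left over. The rule schema below is my own simplification for illustration.

```python
import ipaddress

def evaluate_nacl(rules, source_ip, port):
    """Evaluate network ACL rules the way AWS does: sort by rule number
    ascending and apply the first rule that matches, ignoring any
    higher-numbered rules. The trailing '*' rule is modeled as the
    implicit catch-all deny at the end.
    """
    addr = ipaddress.ip_address(source_ip)
    for rule in sorted(rules, key=lambda r: r["number"]):
        in_cidr = addr in ipaddress.ip_network(rule["cidr"])
        port_ok = rule["ports"] == "ALL" or port in rule["ports"]
        if in_cidr and port_ok:
            return rule["action"]       # first match wins
    return "DENY"                       # implicit '*' catch-all

# Rules modeled after the NACL example earlier in the document:
rules = [
    {"number": 100, "cidr": "0.0.0.0/0",      "ports": "ALL",  "action": "ALLOW"},
    {"number": 101, "cidr": "54.12.34.34/32", "ports": {3000}, "action": "DENY"},
]
```

With these rules, traffic from 54.12.34.34 on port 3000 is allowed, because rule 100 matches first and rule 101 is never consulted.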

All Amazon EC2 instances are assigned two IP addresses at launch. Which of these can only be reached from within the Amazon EC2 network? A. Multiple IP address B. Public IP address C. Private IP address D. Elastic IP Address

C A private IP address is an IP address that's not reachable over the Internet. You can use private IP addresses for communication between instances in the same network (EC2-Classic or a VPC). When an instance is launched, a private IP address is allocated for the instance using DHCP. Each instance is also given an internal DNS hostname that resolves to the private IP address of the instance; for example, ip-10-251-50-12.ec2.internal. You can use the internal DNS hostname for communication between instances in the same network, but the DNS hostname can't be resolved outside the network that the instance is in.
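
The hostname convention mentioned above can be illustrated with a small helper that derives the private IP embedded in an internal DNS name. It only parses the string; it performs no actual DNS lookup.

```python
def private_ip_from_internal_dns(hostname):
    """Derive the private IP encoded in an EC2 internal DNS hostname,
    e.g. 'ip-10-251-50-12.ec2.internal' -> '10.251.50.12'.

    This mirrors the naming convention described above: the first
    label is 'ip-' followed by the private IP with dots replaced by
    hyphens.
    """
    label = hostname.split(".")[0]          # 'ip-10-251-50-12'
    if not label.startswith("ip-"):
        raise ValueError("not an EC2 internal DNS hostname: " + hostname)
    return label[len("ip-"):].replace("-", ".")
```

For example, the hostname from the explanation above maps back to the private IP 10.251.50.12.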

How does using ElastiCache help to improve database performance? Choose the correct answer from the options below A. It can store petabytes of data B. It provides faster internet speeds C. It can store high-taxing queries D. It uses read replicas

C Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory data store or cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory data stores, instead of relying entirely on slower disk-based databases.

Which of the mentioned AWS services uses the concept of shards, a uniquely identified group of data records in a stream? A. CloudFront B. SQS C. Kinesis D. SES

C. Kinesis In Amazon Kinesis, a shard is a uniquely identified group of data records in a stream. A stream is composed of one or more shards, each of which provides a fixed unit of capacity. Each shard can support up to 5 transactions per second for reads, up to a maximum total data read rate of 2 MB per second, and up to 1,000 records per second for writes, up to a maximum total data write rate of 1 MB per second (including partition keys).
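
Those per-shard limits imply a simple sizing rule: the shard count must cover the largest of the read-throughput, write-throughput, and write-record requirements. A hedged sketch of that calculation, using only the limits stated above:

```python
import math

def shards_required(read_mb_per_s, write_mb_per_s, write_records_per_s):
    """Estimate the number of Kinesis shards needed, using the per-shard
    limits stated above: 2 MB/s read, 1 MB/s write, 1,000 records/s write.
    This is an illustrative back-of-the-envelope calculation only.
    """
    return max(
        math.ceil(read_mb_per_s / 2),       # read throughput limit
        math.ceil(write_mb_per_s / 1),      # write throughput limit
        math.ceil(write_records_per_s / 1000),  # write record-count limit
        1,                                  # a stream has at least one shard
    )
```

For example, a workload reading 10 MB/s while writing 3 MB/s at 2,500 records/s is read-bound and needs 5 shards.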

Which of the below options best describes how EBS snapshots work? A. Each snapshot stores the entire volume in S3 B. Snapshots are not possible for EBS volumes C. Snapshots are incremental in nature and are stored in S3 D. Snapshots are stored in DynamoDB

C. Snapshots are incremental in nature and are stored in S3 You can back up the data on your EBS volumes to Amazon S3 by taking point-in-time snapshots. Snapshots are incremental backups, which means that only the blocks on the device that have changed after your most recent snapshot are saved. This minimizes the time required to create the snapshot and saves on storage costs. When you delete a snapshot, only the data unique to that snapshot is removed.
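
The incremental behaviour described above can be modeled in a few lines. This is a toy simulation of the idea, not how EBS is actually implemented: each snapshot stores only the blocks that changed, yet any single snapshot still knows the full volume state for restore.

```python
def take_snapshot(volume_blocks, previous_snapshot=None):
    """Model an incremental EBS snapshot: store only the blocks that
    changed since the previous snapshot, but keep a reference to the
    full volume state so any single snapshot can restore everything.
    """
    previous_state = previous_snapshot["state"] if previous_snapshot else {}
    changed = {
        block: data
        for block, data in volume_blocks.items()
        if previous_state.get(block) != data
    }
    return {"stored_blocks": changed, "state": dict(volume_blocks)}

# First snapshot stores every block; later ones store only the changes.
vol = {"b0": "aaa", "b1": "bbb", "b2": "ccc"}
snap1 = take_snapshot(vol)
vol["b1"] = "BBB"                      # modify one block
snap2 = take_snapshot(vol, snap1)
```

Even though snap2 stored only one changed block, restoring from snap2 alone recovers the entire volume, which is why only the most recent snapshot needs to be retained.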

You have 5 CloudFormation templates. Each template has been defined for a specific purpose. What determines the cost of using the CloudFormation templates? Choose the correct answer from the options below A. $1.10 per template per month B. The length of time it takes to build the architecture with CloudFormation C. It depends on the region the template is created in D. CloudFormation does not have a cost but you are charged for the underlying resources it builds

D

Which feature in AWS allows 2 VPCs to talk to each other? Choose one answer from the options given below A. VPC Connection B. VPN Connection C. Direct Connect D. VPC Peering

D A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IP addresses. Instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC peering connection between your own VPCs, or with a VPC in another AWS account within a single region. Note that peering is not transitive: if VPC A is peered with both VPC B and VPC C, VPC B cannot communicate with VPC C because there is no peering between them.

Amazon EC2 provides a repository of public data sets that can be seamlessly integrated into AWS cloud-based applications. What is the monthly charge for using the public data sets? A. A one-time charge of $1 for all the datasets. B. $1 per dataset per month C. $10 per month for all datasets D. There is no charge for using public data sets

D AWS hosts a variety of public datasets that anyone can access for free. Previously, large datasets such as the mapping of the Human Genome required hours or days to locate, download, customize, and analyze. Now, anyone can access these datasets via the AWS centralized data repository and analyze those using Amazon EC2 instances or Amazon EMR (Hosted Hadoop) clusters. By hosting this important data where it can be quickly and easily processed with elastic computing resources, AWS hopes to enable more innovation, more quickly.

What database service should you choose if you need petabyte-scale data warehousing? Choose the correct answer from the options below A. DynamoDB B. ElastiCache C. RDS D. Redshift

D Amazon Redshift is a fast, fully managed data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and your existing Business Intelligence (BI) tools. It allows you to run complex analytic queries against petabytes of structured data, using sophisticated query optimization, columnar storage on high-performance local disks, and massively parallel query execution

What is the base URI for all requests for instance metadata? Choose one answer from the options given below A. http://254.169.169.254/latest/ B. http://169.169.254.254/latest/ C. http://127.0.0.1/latest/ D. http://169.254.169.254/latest/

D Instance metadata is data about your instance that you can use to configure or manage the running instance. Because your instance metadata is available from your running instance, you do not need to use the Amazon EC2 console or the AWS CLI. This can be helpful when you're writing scripts to run from your instance. For example, you can access the local IP address of your instance from instance metadata to manage a connection to an external application: http://169.254.169.254/latest/meta-data/
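
A script running on an instance would build metadata URLs from that base URI. The helper below only constructs the URL; actually fetching it works solely from within a running EC2 instance, and 'local-ipv4' is one of the standard metadata categories.

```python
# Base URI for instance metadata, as stated in the answer above.
BASE = "http://169.254.169.254/latest/"

def metadata_url(path):
    """Build an instance-metadata URL from the base URI. On a real
    instance you would issue an HTTP GET against the returned URL;
    here we only construct it.
    """
    return BASE + "meta-data/" + path.lstrip("/")

local_ip_url = metadata_url("local-ipv4")
```

For example, fetching local_ip_url from within an instance returns that instance's private IP address.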

Which of the below are incremental backups of your EBS volumes? Choose one answer from the options given below. A. Volumes B. State Manager C. Placement Groups D. Snapshots

D. Snapshots

Which of the below elements can you manage in the Billing dashboard ? Select 2 options. A. Budgets B. Policies C. Credential Report D. Cost Explorer

A. Budgets D. Cost Explorer

Which of the below mentioned services are the building blocks for creating a basic high availability architecture in AWS? Select 2 options. A. EC2 B. SQS C. Elastic Load Balancer D. Cloudwatch

A. EC2 C. Elastic Load Balancer Hosting your applications on EC2 instances in multiple subnets, and hence multiple AZs, and placing them behind an ELB is the basic building block of a high availability architecture in AWS.

You want to get the reason for your EC2 Instance termination from the CLI. Which of the below commands is ideal for getting the reason? A. aws ec2 describe-instances B. aws ec2 describe-images C. aws ec2 get-console-screenshot D. aws ec2 describe-volume-status

A. aws ec2 describe-instances Execute the CLI command with the instance ID, as shown below: aws ec2 describe-instances --instance-id instance_id In the JSON response that's displayed, locate the StateReason element. An example is shown below; this helps in understanding why the instance was shut down. "StateReason": { "Message": "Client.UserInitiatedShutdown: User initiated shutdown", "Code": "Client.UserInitiatedShutdown" } Option B is invalid because this command describes one or more of the images (AMIs, AKIs, and ARIs) available to you. Option C is invalid because it retrieves a JPG-format screenshot of a running instance, which might not fully explain why the instance was terminated. Option D is invalid because this command describes the status of the specified volumes.
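The lookup can be scripted. Below is a minimal sketch that pulls the StateReason code out of a hypothetical, heavily trimmed describe-instances JSON response (a real response contains many more fields):

```python
import json

# Hypothetical trimmed describe-instances output, for illustration only.
sample_response = """
{
  "Reservations": [
    {"Instances": [
      {"InstanceId": "i-0abc",
       "StateReason": {"Code": "Client.UserInitiatedShutdown",
                       "Message": "Client.UserInitiatedShutdown: User initiated shutdown"}}
    ]}
  ]
}
"""

def state_reasons(response_json):
    """Collect (instance-id, StateReason code) pairs from describe-instances output."""
    data = json.loads(response_json)
    pairs = []
    for reservation in data["Reservations"]:
        for instance in reservation["Instances"]:
            pairs.append((instance["InstanceId"],
                          instance.get("StateReason", {}).get("Code")))
    return pairs

print(state_reasons(sample_response))
```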

You want to ensure that you keep a check on the active volumes, active snapshots, and Elastic IP addresses you use so that you don't go beyond the service limits. Which of the below services can help in this regard? A. AWS Cloudwatch B. AWS EC2 C. AWS Trusted Advisor D. AWS SNS

C. AWS Trusted Advisor Trusted Advisor is an online resource to help you reduce cost, increase performance, and improve security by optimizing your AWS environment. It provides real-time guidance to help you provision your resources following AWS best practices, and its Service Limits checks monitor usage of resources such as active volumes, active snapshots, and Elastic IP addresses against their limits. Option A is invalid because even though CloudWatch can monitor resources, it cannot check them against service limits. Option B is invalid because this is the Elastic Compute Cloud service itself. Option D is invalid because SNS can send notifications but cannot check service limits.

Which of the below instances is normally used for massively parallel computations? A. Spot Instances B. On-Demand Instances C. Dedicated Instances D. This is not possible in AWS

A. Spot Instances

When working with API Gateway in AWS, what type of endpoints are exposed? A. HTTP B. HTTPS C. JSON D. XML

B. HTTPS All of the endpoints created with API Gateway are HTTPS. Option A is incorrect because Amazon API Gateway does not support unencrypted (HTTP) endpoints. Options C and D are invalid because API Gateway exposes HTTPS endpoints only.

You are a systems administrator and you need to monitor the health of your production environment. You decide to do this using CloudWatch, however you notice that you cannot see the health of every important metric in the default dashboard. Which of the following metrics do you need to design a custom CloudWatch metric for, when monitoring the health of your EC2 instances? A. CPU Usage B. Memory usage C. Disk read operations D. Network in

B. Memory usage When you look at your CloudWatch metrics dashboard, you can see the metrics for CPU usage, disk read operations, and network in. You need to add a custom metric for memory usage.

Which of the following criteria must be met when attaching an EC2 instance to an existing Auto Scaling group? Select 3 options. A. The instance is in the running state. B. The AMI used to launch the instance must still exist. C. The instance is not a member of another Auto Scaling group. D. They should have the same private key

A. The instance is in the running state. B. The AMI used to launch the instance must still exist. C. The instance is not a member of another Auto Scaling group.

What can be used from AWS to import existing Virtual Machine images into AWS? A. VM Import/Export B. AWS Import/Export C. AWS Storage Gateway D. This is not possible in AWS

A. VM Import/Export

Is it true that EBS can always tolerate an Availability Zone failure? A. No, an EBS volume is stored in a single Availability Zone B. Yes, an EBS volume has multiple copies so it should be fine C. Depends on how it is set up D. Depends on the Region where the EBS volume is initiated

A An EBS volume is replicated across physical hardware within the same Availability Zone, so if the AZ fails, the EBS volume fails with it. That's why AWS recommends always keeping EBS volume snapshots, which are stored in S3, for high durability. When you create an EBS volume in an Availability Zone, it is automatically replicated within that zone to prevent data loss due to the failure of any single hardware component. Option B is wrong because although an EBS volume has multiple copies, they are within the same AZ, so the volume will not persist in case of an AZ failure. Option C is wrong because there is no setup available to replicate an EBS volume across Regions or AZs. Option D is wrong because EBS volumes behave the same regardless of Region.

One of your instances is reporting an unhealthy system status check. However, this is not something you should have to monitor and repair on your own. How might you automate the repair of the system status check failure in an AWS environment? Choose the correct answer from the options given below A. Create CloudWatch alarms that stop and start the instance based off of status check alarms B. Write a script that queries the EC2 API for each instance status check C. Write a script that periodically shuts down and starts instances based on certain stats. D. Implement a third party monitoring tool.

A Using Amazon CloudWatch alarm actions, you can create alarms that automatically stop, terminate, reboot, or recover your EC2 instances. You can use the stop or terminate actions to help you save money when you no longer need an instance to be running. You can use the reboot and recover actions to automatically reboot those instances or recover them onto new hardware if a system impairment occurs.

An EC2 instance retrieves a message from an SQS queue, begins processing the message, then crashes. What happens to the message? Choose the correct answer from the options below: A. When the message visibility timeout expires, the message becomes available for processing by other EC2 instances B. It will remain in the queue and still be assigned to the same EC2 instance if the instance comes back online within the visibility timeout. C. The message is deleted and becomes duplicated when the EC2 instance comes online.

A When a consumer receives and processes a message from a queue, the message remains in the queue. Amazon SQS doesn't automatically delete the message: Because it's a distributed system, there is no guarantee that the component will actually receive the message (the connection can break or a component can fail to receive the message). Thus, the consumer must delete the message from the queue after receiving and processing it. Q: How does Amazon SQS allow multiple readers to access the same message queue without losing messages or processing them multiple times? Every Amazon SQS queue has a configurable visibility timeout. A message is not visible to any other reader for a designated amount of time when it is read from a message queue. As long as the amount of time it takes to process the message is less than the visibility timeout, every message is processed and deleted. If the component processing of the message fails or becomes unavailable, the message again becomes visible to any component reading the message queue once the visibility timeout ends. This allows multiple components to read messages from the same message queue, each one working to process different messages.
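The visibility-timeout behavior described above can be modeled with a toy queue (an illustration only, not the real SQS API):

```python
class ToyQueue:
    """Toy model of SQS visibility-timeout behavior (illustration only,
    not the real SQS API)."""

    def __init__(self, visibility_timeout):
        self.visibility_timeout = visibility_timeout
        self.messages = {}          # message id -> body
        self.invisible_until = {}   # message id -> time it becomes visible again

    def send(self, msg_id, body):
        self.messages[msg_id] = body

    def receive(self, now):
        """Return a visible message and hide it for the visibility timeout."""
        for msg_id, body in self.messages.items():
            if self.invisible_until.get(msg_id, 0) <= now:
                self.invisible_until[msg_id] = now + self.visibility_timeout
                return msg_id, body
        return None

    def delete(self, msg_id):
        """The consumer must call this after successful processing."""
        self.messages.pop(msg_id, None)
        self.invisible_until.pop(msg_id, None)

q = ToyQueue(visibility_timeout=30)
q.send("m1", "job")
assert q.receive(now=0) == ("m1", "job")   # consumer A takes the message, then crashes
assert q.receive(now=10) is None           # still hidden: timeout has not expired
assert q.receive(now=31) == ("m1", "job")  # timeout expired: another consumer can take it
```

Note that the message only leaves the queue when `delete` is called after successful processing, which is exactly why a crashed consumer's message reappears.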

You are building a system to distribute confidential training videos to employees. Using CloudFront, what method would be used to serve content that is stored in S3, but not publicly accessible from S3 directly? Choose the correct answer from the options given below A. Create an Origin Access Identity (OAI) for CloudFront and grant access to the objects in your S3 bucket to that OAI B. Create an Identity and Access Management (IAM) user for CloudFront and grant access to the objects in your S3 bucket to that IAM user. C. Create a S3 bucket policy that lists the CloudFront distribution ID as the principal and the target bucket as the Amazon Resource Name (ARN) D. Add the CloudFront account security group

A You can optionally secure the content in your Amazon S3 bucket so users can access it through CloudFront but cannot access it directly by using Amazon S3 URLs. This prevents anyone from bypassing CloudFront and using the Amazon S3 URL to get content that you want to restrict access to. This step isn't required to use signed URLs, but we recommend it. To require that users access your content through CloudFront URLs, you perform the following tasks: Create a special CloudFront user called an origin access identity. Give the origin access identity permission to read the objects in your bucket. Remove permission for anyone else to use Amazon S3 URLs to read the objects.

As an IT administrator you have been requested to ensure you create a highly decoupled application in AWS. Which of the following helps you accomplish this goal? Choose the correct answer from the options below A. An SQS queue to allow a second EC2 instance to process a failed instance's job B. An Elastic Load Balancer to send web traffic to healthy EC2 instances C. IAM user credentials on EC2 instances to grant permissions to modify an SQS queue D. An Auto Scaling group to recover from EC2 instance failures

A Amazon Simple Queue Service (SQS) is a fully-managed message queuing service for reliably communicating among distributed software components and microservices - at any scale. Building applications from individual components that each perform a discrete function improves scalability and reliability, and is best practice design for modern applications. SQS is the best option for creating a decoupled application.

Which Amazon service can I use to define a virtual network that closely resembles a traditional data center? A. Amazon VPC B. Amazon ServiceBus C. Amazon EMR D. Amazon RDS

A Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the Amazon Web Services (AWS) cloud where you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways. You can easily customize the network configuration for your Amazon Virtual Private Cloud. For example, you can create a public-facing subnet for your webservers that has access to the Internet, and place your backend systems such as databases or application servers in a private-facing subnet with no Internet access. You can leverage multiple layers of security, including security groups and network access control lists, to help control access to Amazon EC2 instances in each subnet

Your company has moved a legacy application from an on-premises data center to the cloud. The legacy application requires a static IP address hard-coded into the backend, which prevents you from deploying the application with high availability and fault tolerance using the ELB. Which steps would you take to apply high availability and fault tolerance to this application? Select 2 options. A. Write a custom script that pings the health of the instance, and, if the instance stops responding, switches the elastic IP address to a standby instance B. Ensure that the instance it's using has an elastic IP address assigned to it C. Do not migrate the application to the cloud until it can be converted to work with the ELB and Auto Scaling D. Create an AMI of the instance and launch it using Auto Scaling which will deploy the instance again if it becomes unhealthy

A, B The best option is to configure an Elastic IP that can be switched between a primary and failover instance.

A company has resources hosted in AWS and on on-premises servers. You have been requested to create a decoupled architecture for applications which make use of both types of resources. Which of the below options are valid? Select 2 options. A. You can leverage SWF to utilize both on-premises servers and EC2 instances for your decoupled application B. SQS is not a valid option to help you use on-premises servers and EC2 instances in the same application, as it cannot be polled by on-premises servers C. You can leverage SQS to utilize both on-premises servers and EC2 instances for your decoupled application D. SWF is not a valid option to help you use on-premises servers and EC2 instances in the same application, as on-premises servers cannot be used as activity task workers

A, C You can use both SWF and SQS to coordinate with EC2 instances and on-premise servers. Amazon Simple Queue Service (SQS) is a fully-managed message queuing service for reliably communicating among distributed software components and microservices - at any scale. Building applications from individual components that each perform a discrete function improves scalability and reliability, and is best practice design for modern applications. For more information on SQS, please refer to the below link https://aws.amazon.com/sqs/ The Amazon Simple Workflow Service (Amazon SWF) makes it easy to build applications that coordinate work across distributed components. In Amazon SWF, a task represents a logical unit of work that is performed by a component of your application. Coordinating tasks across the application involves managing intertask dependencies, scheduling, and concurrency in accordance with the logical flow of the application. Amazon SWF gives you full control over implementing tasks and coordinating them without worrying about underlying complexities such as tracking their progress and maintaining their state.

You are creating a Provisioned IOPS volume in AWS. The size of the volume is 8 GiB. Which of the following are possible values that can be set for the IOPS of the volume? A. 400 B. 500 C. 600 D. 1000

A. 400 The maximum ratio of IOPS to volume size is 50:1, so if the volume size is 8 GiB, the maximum IOPS of the volume can be 400. If you go beyond this value, you will get an error.
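The 50:1 check is simple arithmetic; a sketch, using the ratio cited in the explanation above:

```python
# Assumption from the explanation above: Provisioned IOPS volumes allow
# at most 50 IOPS per GiB of volume size.
MAX_IOPS_PER_GIB = 50

def max_piops(size_gib):
    """Highest Provisioned IOPS that can be requested for a volume of this size."""
    return size_gib * MAX_IOPS_PER_GIB

assert max_piops(8) == 400  # an 8 GiB volume caps out at 400 IOPS (option A)
```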

You have some EC2 instances hosted in your AWS environment. You have a concern that not all of the EC2 instances are being utilized. Which of the below mentioned services can help you find underutilized resources in AWS? Select 2 options. A. AWS Cloudwatch B. SNS C. AWS Trusted Advisor D. Cloudtrail

A. AWS Cloudwatch C. AWS Trusted Advisor AWS Trusted Advisor can help you identify underutilized resources in AWS. For more information on AWS Trusted Advisor, please visit the below URL: https://aws.amazon.com/premiumsupport/trustedadvisor/ If you look at the CloudWatch graphs for the CPU utilization of your resources, you can see the trend over time and spot underutilized instances. For more information on AWS CloudWatch, please visit the below URL: https://aws.amazon.com/cloudwatch/

You have an EC2 instance located in a subnet in AWS. You have installed a web application on this instance. The security group attached to this instance is 0.0.0.0/0. The VPC has 10.0.0.0/16 attached to it. You can SSH into the instance from the internet, but you are not able to access the web server via the web browser. Which of the below steps would resolve the issue? A. Add an HTTP rule to the Security Group B. Remove the SSH rule from the security group C. Add the route 10.0.0.0/16 -> igw-a97272cc to the Route Table D. Add the route 0.0.0.0/0 -> local to the Route Table

A. Add an HTTP rule to the Security Group You need to add a rule to the security group that allows HTTP traffic to the server. Option B is invalid because then you will not be able to access the server via SSH. Options C and D are invalid because these are not ideal routes to add to the VPC.

You have created your own VPC and subnet in AWS. You have launched an instance in that subnet. You have attached an internet gateway to the VPC and seen that the instance has a public IP. The Route table is 10.0.0.0/16. The instance still cannot be reached from the Internet. Which of the below changes need to be made to the route table to ensure that the issue can be resolved. A. Add the following entry to the route table - 0.0.0.0/0 -> Internet Gateway B. Modify the above route table - 10.0.0.0/16 -> Internet Gateway C. Add the following entry to the route table - 10.0.0.0/16 -> Internet Gateway D. Add the following entry to the route table - 0.0.0.0/16 -> Internet Gateway

A. Add the following entry to the route table - 0.0.0.0/0 -> Internet Gateway The route table needs an entry for 0.0.0.0/0 pointing to the Internet gateway so that traffic from the internet can reach the instance. Hence, by default, all other options become invalid.
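VPC route selection uses longest-prefix matching; below is a rough simulation of the resulting route table (the igw-a97272cc gateway ID is reused from the earlier question for illustration):

```python
import ipaddress

# Toy route table: destination CIDR -> target; the most specific
# (longest-prefix) route wins, as in VPC routing.
routes = {
    "10.0.0.0/16": "local",
    "0.0.0.0/0": "igw-a97272cc",
}

def route_for(destination_ip):
    """Return the target of the most specific route covering the destination."""
    nets = sorted((ipaddress.ip_network(cidr) for cidr in routes),
                  key=lambda n: n.prefixlen, reverse=True)
    for net in nets:
        if ipaddress.ip_address(destination_ip) in net:
            return routes[str(net)]
    return None  # no matching route: traffic is dropped

assert route_for("10.0.1.5") == "local"            # intra-VPC traffic stays local
assert route_for("203.0.113.7") == "igw-a97272cc"  # internet-bound traffic hits the IGW
```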

You are not able to connect to an EC2 instance via SSH, and you have already verified that the instance has a public IP and the Internet gateway and route tables are in place, what should you check next? A. Adjust the security group to allow traffic to port 22 B. Adjust the security group to allow traffic to port 3389 C. Restart the instance since there might be some issue with the instance D. Create a new instance since there might be some issue with the instance

A. Adjust the security group to allow traffic to port 22 The reason you cannot connect to the instance may be that the SSH protocol has not been enabled in the security group. Go to your EC2 security groups, click on the required security group to make the changes, go to the Inbound tab, and ensure that the inbound rules have a rule for the SSH protocol.

Which is the service provided by AWS for collecting and processing large streams of data in real time? A. Amazon Kinesis B. AWS Data Pipeline C. Amazon AppStream D. Amazon Simple Queue Service

A. Amazon Kinesis Use Amazon Kinesis Streams to collect and process large streams of data records in real time. You'll create data-processing applications, known as Amazon Kinesis Streams applications. A typical Amazon Kinesis Streams application reads data from an Amazon Kinesis stream as data records. These applications can use the Amazon Kinesis Client Library, and they can run on Amazon EC2 instances. The processed records can be sent to dashboards, used to generate alerts, dynamically change pricing and advertising strategies, or send data to a variety of other AWS services.

You have a set of IIS Servers running on EC2 instances for a high traffic web site. You want to collect and process the log files generated from the IIS Servers. Which of the below services is ideal to run in this scenario? A. Amazon S3 for storing the log files and Amazon EMR for processing the log files B. Amazon S3 for storing the log files and EC2 Instances for processing the log files C. Amazon EC2 for storing and processing the log files D. Amazon DynamoDB to store the logs and EC2 for running custom log analysis scripts

A. Amazon S3 for storing the log files and Amazon EMR for processing the log files Amazon EMR is a managed cluster platform that simplifies running big data frameworks, such as Apache Hadoop and Apache Spark, on AWS to process and analyze vast amounts of data. By using these frameworks and related open-source projects, such as Apache Hive and Apache Pig, you can process data for analytics purposes and business intelligence workloads. Additionally, you can use Amazon EMR to transform and move large amounts of data into and out of other AWS data stores and databases, such as Amazon Simple Storage Service (Amazon S3) and Amazon DynamoDB. Options B and C, even though partially correct, would add overhead for EC2 instances to process the log files when you already have a ready-made service that can help in this regard. Option D is invalid because DynamoDB is not an ideal option for storing log files.

In order for an EC2 instance to be accessed from the internet, which of the following are required? Choose 3 answers from the options given below A. An Internet gateway attached to the VPC B. A private IP address attached to the instance C. A public IP address attached to the instance D. A route entry to the Internet gateway in the Route table

A. An Internet gateway attached to the VPC C. A public IP address attached to the instance D. A route entry to the Internet gateway in the Route table The key requirements for an instance to be accessible from the internet are: 1) An Internet gateway attached to the VPC 2) A public IP or Elastic IP address attached to the instance 3) A route entry to the Internet gateway in the route table Option B is invalid because a private IP address is only required for communication between instances in the VPC.

What is the service name in AWS that can display costs in a chart format? A. Cost Explorer B. Cost Allocation Tags C. AWS Budgets D. Payment History

A. Cost Explorer Cost Explorer is a free tool that you can use to view charts of your costs (also known as spend data) for up to the last 13 months, and forecast how much you are likely to spend for the next three months. You can use Cost Explorer to see patterns in how much you spend on AWS resources over time, identify areas that need further inquiry, and see trends that you can use to understand your costs. You can also specify time ranges for the data you want to see, and you can view time data by day or by month.

When it comes to API credentials, what is the best practice recommended by AWS? A. Create a role which has the necessary permissions and can be assumed by the EC2 instance. B. Use the API credentials from an EC2 instance. C. Use the API credentials from a bastion host. D. Use the API credentials from a NAT Instance.

A. Create a role which has the necessary permissions and can be assumed by the EC2 instance. IAM roles are designed so that your applications can securely make API requests from your instances, without requiring you to manage the security credentials that the applications use. Options B, C, and D are invalid because it is not secure to use API credentials from any EC2 instance; the credentials can be tampered with, and hence this is not the ideal secure way to make API calls.

In the shared responsibility model, what is the customer not responsible for? A. Edge locations B. Installation of custom firewall software C. Security Groups D. Applying an SSL Certificate to an ELB

A. Edge locations AWS has published the Shared Responsibility Model, and the physical infrastructure, including edge locations, is the responsibility of AWS.

Which of the following databases support the read replica feature? Select 3 options. A. MySQL B. MariaDB C. PostgreSQL D. Oracle

A. MySQL B. MariaDB C. PostgreSQL Read replicas are available in Amazon RDS for MySQL, MariaDB, and PostgreSQL. When you create a read replica, you specify an existing DB Instance as the source. Amazon RDS takes a snapshot of the source instance and creates a read-only instance from the snapshot. For MySQL, MariaDB and PostgreSQL, Amazon RDS uses those engines' native asynchronous replication to update the read replica whenever there is a change to the source DB instance. The read replica operates as a DB instance that allows only read-only connections; applications can connect to a read replica just as they would to any DB instance. Amazon RDS replicates all databases in the source DB instance.

What are the main benefits of AWS regions? Select 2 options. A. Regions allow you to design applications to conform to specific laws and regulations for specific parts of the world. B. All regions offer the same service at the same prices. C. Regions allow you to choose a location in any country in the world. D. Regions allow you to place AWS resources in the area of the world closest to your customers who access those resources.

A. Regions allow you to design applications to conform to specific laws and regulations for specific parts of the world. D. Regions allow you to place AWS resources in the area of the world closest to your customers who access those resources. AWS operates data centers across the world so that solutions can run as close to the customer as possible, and Regions allow applications to conform to the specific laws and regulations of specific parts of the world. AWS does not have data centers in every country, hence option C is invalid. Services and prices are specific to each region, hence option B is invalid.

Which of the following services provides an object store which can also be used to store files? A. S3 B. SQS C. SNS D. EC2

A. S3 Amazon Simple Storage Service is storage for the Internet. It is designed to make web-scale computing easier for developers. Amazon S3 has a simple web services interface that you can use to store and retrieve any amount of data, at any time, from anywhere on the web. It gives any developer access to the same highly scalable, reliable, fast, inexpensive data storage infrastructure that Amazon uses to run its own global network of web sites.

What are the different types of scale out options available in the AutoScaling service provided by AWS? Select 3 options. A. Scheduled Scaling B. Dynamic Scaling C. Manual Scaling D. Static Scaling

A. Scheduled Scaling B. Dynamic Scaling C. Manual Scaling

You currently have an EC2 instance hosting a web application. The number of users is expected to increase in the coming months and hence you need to add more elasticity to your setup. Which of the following methods can help add elasticity to your existing setup? Choose 2 answers from the options given below A. Setup your web app on more EC2 instances and set them behind an Elastic Load balancer B. Setup ElastiCache in front of the EC2 instance. C. Setup your web app on more EC2 instances and use Route53 to route requests accordingly. D. Setup DynamoDB behind your EC2 Instances

A. Setup your web app on more EC2 instances and set them behind an Elastic Load balancer C. Setup your web app on more EC2 instances and use Route53 to route requests accordingly. The Elastic Load Balancer is the ideal solution for adding elasticity to your application: you can register multiple EC2 instances behind an ELB, and all requests are then routed across those instances. The other alternative is to create a routing policy in Route 53 with the weighted routing policy. Weighted resource record sets let you associate multiple resources with a single DNS name, and the weighted routing policy enables Route 53 to route traffic to different resources in specified proportions (weights). To create a group of weighted resource record sets, two or more resource record sets are created that have the same combination of DNS name and type, and each resource record set is assigned a unique identifier and a relative weight. Option B is not valid because this will just cache reads and will not add the desired elasticity to your application. Option D is not valid because there is no mention of a persistence layer in the question that would require the use of DynamoDB.
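Weighted routing distributes traffic in proportion to the assigned weights. A toy illustration of the proportional selection (the record names and weights are made up):

```python
# Hypothetical Route 53 weighted record set: record name -> relative weight.
records = {"web-1": 1, "web-2": 1, "web-3": 2}

def pick(records, roll):
    """Map a number in [0, total weight) onto a record, proportionally to weight."""
    cumulative = 0
    for name, weight in records.items():
        cumulative += weight
        if roll < cumulative:
            return name
    raise ValueError("roll out of range")

# web-3 holds weight 2 out of a total of 4, so half the rolls land on it.
assert [pick(records, r) for r in range(4)] == ["web-1", "web-2", "web-3", "web-3"]
```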

What are bastion hosts? A. They are instances in the public subnet which are used as a jump server to resources within other subnets. B. They are instances in the private subnet which are used as a jump server to resources within other subnets. C. They are instances in the public subnet which are used to host web resources that can be accessed by users. D. They are instances in the private subnet which are used to host web resources that can be accessed by users.

A. They are instances in the public subnet which are used as a jump server to resources within other subnets. As the number of EC2 instances in your AWS environment grows, so too does the number of administrative access points to those instances. Depending on where your administrators connect to your instances from, you may consider enforcing stronger network-based access controls. A best practice in this area is to use a bastion. A bastion is a special purpose server instance that is designed to be the primary access point from the Internet and acts as a proxy to your other EC2 instances. The AWS documentation shows bastion hosts being set up in a public subnet. Option B is invalid because bastion hosts need to be in the public subnet. Options C and D are invalid because bastion hosts are not used to host web resources.

A company has an EC2 instance that is hosting a web solution which is mostly used for read-only purposes. The CPU utilization is constantly 100% on the EC2 instance. Which of the below solutions can help alleviate and provide a quick resolution to the problem? A. Use Cloudfront and place the EC2 instance as the origin B. Let the EC2 instance continue to run at 100%, since the AWS environment can handle the load. C. Use SNS to notify the IT admin when it reaches 100% so that they can disconnect some sessions to help alleviate the load D. Use SES to notify the IT admin when it reaches 100% so that they can disconnect some sessions to help alleviate the load

A. Use Cloudfront and place the EC2 instance as the origin CloudFront can be used to alleviate the load on web-based solutions by caching recent reads in its edge locations, reducing the burden on the EC2 instance. Amazon CloudFront is a global content delivery network (CDN) service that accelerates delivery of your websites, APIs, video content, or other web assets.

An image named photo.jpg has been uploaded to a bucket named examplebucket in the us-east-1 region. Which of the below is the right URL to access the image, if it were made public? Consider that S3 is used as a static website. A. http://examplebucket.s3-website-us-east-1.amazonaws.com/photo.jpg B. http://examplebucket.website-us-east-1.amazonaws.com/photo.jpg C. http://examplebucket.s3-us-east-1.amazonaws.com/photo.jpg D. http://examplebucket.amazonaws.s3-website-us-east-1./photo.jpg

A. http://examplebucket.s3-website-us-east-1.amazonaws.com/photo.jpg The URL for an S3 website follows the form <bucket-name>.s3-website-<AWS-region>.amazonaws.com, hence the right option is option A. When you configure a bucket for website hosting, the website is available via the region-specific website endpoint. Website endpoints are different from the endpoints where you send REST API requests. For more information about the differences between the endpoints, see Key Differences Between the Amazon Website and the REST API Endpoint. The two general forms of an Amazon S3 website endpoint are as follows: --> bucket-name.s3-website-region.amazonaws.com --> bucket-name.s3-website.region.amazonaws.com Which form is used for the endpoint depends on what Region the bucket is in. For example, if your bucket is named example-bucket and it resides in the US East (N. Virginia) region, the website is available at the endpoint http://example-bucket.s3-website-us-east-1.amazonaws.com/ For more information on the bucket and URL format for S3 buckets, please visit the below URLs: http://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteEndpoints.html http://docs.aws.amazon.com/AmazonS3/latest/dev/HostingWebsiteOnS3Setup.html
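The endpoint pattern can be expressed as a small helper (a sketch covering only the dash-style form; the dot-style form differs by region):

```python
def website_endpoint(bucket, region, key=""):
    """Build the dash-style, region-specific S3 static-website URL for an object."""
    return f"http://{bucket}.s3-website-{region}.amazonaws.com/{key}"

assert website_endpoint("examplebucket", "us-east-1", "photo.jpg") == \
    "http://examplebucket.s3-website-us-east-1.amazonaws.com/photo.jpg"
```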

In Amazon CloudWatch, what is the retention period for a one-minute datapoint? Choose the right answer from the options given below A. 10 days B. 15 days C. 1 month D. 1 year

B Amazon CloudWatch is a monitoring service for AWS cloud resources and the applications you run on AWS. You can use Amazon CloudWatch to collect and track metrics, collect and monitor log files, set alarms, and automatically react to changes in your AWS resources. Amazon CloudWatch can monitor AWS resources such as Amazon EC2 instances, Amazon DynamoDB tables, and Amazon RDS DB instances, as well as custom metrics generated by your applications and services, and any log files your applications generate. You can use Amazon CloudWatch to gain system-wide visibility into resource utilization, application performance, and operational health. CloudWatch Metrics supports the following three retention schedules: 1-minute datapoints are available for 15 days; 5-minute datapoints are available for 63 days; 1-hour datapoints are available for 455 days.

A company wants to host a selection of MongoDB instances. They are expecting a high load and want latency to be as low as possible. Which class of instances from the below list should they choose? A. T2 B. I2 C. T1 D. G2

B I2 instances are optimized to deliver tens of thousands of low-latency, random I/O operations per second (IOPS) to applications. They are well suited for the following scenarios: NoSQL databases (for example, Cassandra and MongoDB), clustered databases, and online transaction processing (OLTP) systems.

A company is running three production web server reserved EC2 instances with EBS-backed root volumes. These instances have a consistent CPU load of 80%. Traffic is being distributed to these instances by an Elastic Load Balancer. They also have production and development Multi-AZ RDS MySQL databases. What recommendation would you make to reduce cost in this environment without affecting availability of mission-critical systems? Choose the correct answer from the options given below A. Consider using on-demand instances instead of reserved EC2 instances B. Consider not using a Multi-AZ RDS deployment for the development database C. Consider using spot instances instead of reserved EC2 instances D. Consider removing the Elastic Load Balancer

B Multi-AZ deployments are a better fit for production environments than for development environments, so you can reduce costs by not using Multi-AZ for the development database. Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby (or to a read replica in the case of Amazon Aurora), so that you can resume database operations as soon as the failover is complete. Since the endpoint for your DB Instance remains the same after a failover, your application can resume database operation without the need for manual administrative intervention.

What is an AWS service which can help protect web applications from common security threats from the outside world? Choose one answer from the options below A. NAT B. WAF C. SQS D. SES

B Option A is wrong because a NAT is used to relay traffic from private subnets to the internet. Option C is wrong because SQS is a queuing service in AWS. Option D is wrong because SES is an email service in AWS. AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. AWS WAF gives you control over which traffic to allow or block to your web applications by defining customizable web security rules. You can use AWS WAF to create custom rules that block common attack patterns, such as SQL injection or cross-site scripting, and rules that are designed for your specific application. New rules can be deployed within minutes, letting you respond quickly to changing traffic patterns. Also, AWS WAF includes a full-featured API that you can use to automate the creation, deployment, and maintenance of web security rules. In WAF, you can create a set of Conditions and Rules to protect your network against attacks from outside.

You work for a major news network in Europe. They have just released a new app which allows users to report on events as and when they happen using their mobile phone. Users are able to upload pictures from the app, and other users will then be able to view these pictures. Your organization expects this app to grow very quickly, essentially doubling its user base every month. The app uses S3 to store the media, and you are expecting sudden and large increases in traffic to S3 when a major news event takes place, as people will be uploading content in huge numbers. You need to keep your storage costs to a minimum, however, and it does not matter if some objects are lost. Which storage media should you use to keep costs as low as possible? A. S3 - Infrequently Accessed Storage. B. S3 - Reduced Redundancy Storage (RRS). C. Glacier. D. S3 - Provisioned IOPS.

B Since the requirement mentions that it does not matter if objects are lost and you need a low-cost storage option, Reduced Redundancy Storage is the best option. The AWS documentation mentions the below on Reduced Redundancy Storage: Reduced Redundancy Storage (RRS) is an Amazon S3 storage option that enables customers to store noncritical, reproducible data at lower levels of redundancy than Amazon S3's standard storage. It provides a highly available solution for distributing or sharing content that is durably stored elsewhere, or for storing thumbnails, transcoded media, or other processed data that can be easily reproduced.

If you cannot connect to your EC2 instance via remote desktop, and you have already verified the instance has a public IP and the internet gateway and route tables are in place, what should you check next? Choose one answer from the options given below A. Adjust the security group to allow traffic on port 22 B. Adjust the security group to allow traffic on port 3389 C. Restart the instance since there might be some issue with the instance D. Create a new instance since there might be some issue with the instance

B The reason you cannot connect to the instance is that, by default, the RDP protocol is not enabled on the security group. Option A is wrong because port 22 is for the SSH protocol, and here we want to RDP into the instance. Options C and D are wrong because there is no mention of anything wrong with the instance. Step 1) Go to your EC2 security groups, click on the required security group to make the changes, and go to the Inbound tab. Step 2) Add a rule for the RDP protocol for the instance and then click the Save button.

You are a solutions architect working for a company. They store their data on S3; however, recently someone accidentally deleted some critical files in S3. You've been asked to prevent this from happening in the future. Which option below can prevent this? A. Make sure you provide signed URLs to all users. B. Enable S3 versioning and Multi-Factor Authentication (MFA) on the bucket. C. Use S3 Infrequently Accessed storage to store the data on. D. Create an IAM bucket policy that disables deletes.

B Versioning is a means of keeping multiple variants of an object in the same bucket. You can use versioning to preserve, retrieve, and restore every version of every object stored in your Amazon S3 bucket. With versioning, you can easily recover from both unintended user actions and application failures. You can optionally add another layer of security by configuring a bucket to enable MFA (Multi-Factor Authentication) Delete, which requires additional authentication for either of the following operations: 1) Change the versioning state of your bucket 2) Permanently delete an object version. Option A is invalid because this would be a maintenance overhead. Option C is invalid because changing the storage option will not prevent accidental deletion. Option D is invalid because the question does not ask to remove the delete permission completely. For more information on S3 versioning, please refer to the below URL: http://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html
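The versioning state change can be sketched as the payload boto3 would send (a sketch only; enabling MFA Delete in a real call also requires the MFA device serial and current token, and the bucket name below is a placeholder):

```python
# Shape of the VersioningConfiguration passed to S3's
# put_bucket_versioning API; MFADelete adds the extra authentication
# layer described above.
versioning_config = {
    "Status": "Enabled",     # turn versioning on for the bucket
    "MFADelete": "Enabled",  # require MFA to permanently delete versions
}

# With boto3 this would look like (bucket and MFA string are placeholders):
# s3.put_bucket_versioning(Bucket="my-bucket",
#                          VersioningConfiguration=versioning_config,
#                          MFA="arn-of-mfa-device mfa-code")
print(versioning_config)
```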

Before I delete an EBS volume, what can I do if I want to recreate the volume later? A. Create a copy of the EBS volume (not a snapshot) B. Store a snapshot of the volume C. Download the content to an EC2 instance D. Back up the data in to a physical disk

B After you no longer need an Amazon EBS volume, you can delete it. After deletion, its data is gone and the volume can't be attached to any instance. However, before deletion, you can store a snapshot of the volume, which you can use to re-create the volume later. See more details here : http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-deleting-volume.html Snapshots occur asynchronously; the point-in-time snapshot is created immediately, but the status of the snapshot is pending until the snapshot is complete (when all of the modified blocks have been transferred to Amazon S3), which can take several hours for large initial snapshots or subsequent snapshots where many blocks have changed. While it is completing, an in-progress snapshot is not affected by ongoing reads and writes to the volume. You can easily create a snapshot from a volume while the instance is running and the volume is in use. You can do this from the EC2 dashboard.

Your customer wants to consolidate their log streams (access logs, application logs, security logs, etc.) into one single system. Once consolidated, the customer wants to analyze these logs in real time based on heuristics. From time to time, the customer needs to validate the heuristics, which requires going back to data samples extracted from the last 12 hours. What is the best approach to meet your customer's requirements? A. Send all the log events to Amazon SQS. Set up an Auto Scaling group of EC2 servers to consume the logs and apply the heuristics. B. Send all the log events to Amazon Kinesis and develop a client process to apply heuristics on the logs C. Configure AWS CloudTrail to receive custom logs and use EMR to apply heuristics on the logs D. Set up an Auto Scaling group of EC2 syslogd servers, store the logs on S3, and use EMR to apply heuristics on the logs

B Amazon Kinesis is the best option for analyzing logs in real time The AWS documentation mentions the following for AWS Kinesis Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. Amazon Kinesis offers key capabilities to cost effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application. With Amazon Kinesis, you can ingest real-time data such as application logs, website clickstreams, IoT telemetry data, and more into your databases, data lakes and data warehouses, or build your own real-time applications using this data.

You are a consultant tasked with migrating an on-premises application architecture to AWS. During your design process you have to give consideration to current on-premises security and determine which security attributes you are responsible for on AWS. Which of the following does AWS provide for you as part of the shared responsibility model? Choose the correct answer from the options given below A. Customer Data B. Physical network infrastructure C. Instance security D. User access to the AWS environment

B As per the shared responsibility model, the physical network infrastructure is taken care of by AWS. The shared responsibility diagram in the AWS documentation clearly shows what has to be managed by the customer and what is managed by AWS.

The Availability Zone that your RDS database instance is located in is suffering from outages, and you have lost access to the database. What could you have done to prevent losing access to your database (in the event of this type of failure) without any downtime? Choose the correct answer from the options below A. Made a snapshot of the database B. Enabled multi-AZ failover C. Increased the database instance size D. Created a read replica

B The best option is to enable Multi-AZ for the database. Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby (or to a read replica in the case of Amazon Aurora), so that you can resume database operations as soon as the failover is complete. Since the endpoint for your DB Instance remains the same after a failover, your application can resume database operation without the need for manual administrative intervention

What is the purpose of an SWF decision task? Choose the correct answer from the options below A. It tells the worker to perform a function. B. It tells the decider the state of the work flow execution. C. It defines all the activities in the workflow. D. It represents a single task in the workflow.

B A decider is an implementation of the coordination logic of your workflow type that runs during the execution of your workflow. You can run multiple deciders for a single workflow type. Because the execution state for a workflow execution is stored in its workflow history, deciders can be stateless. Amazon SWF maintains the workflow execution history and provides it to a decider with each decision task

Which of the following is an example of synchronous replication occurring in an AWS service? A. AWS RDS Read Replicas for MySQL, MariaDB and PostgreSQL B. AWS Multi-AZ RDS C. Redis engine for Amazon ElastiCache replication D. AWS RDS Read Replicas for Oracle

B. AWS Multi-AZ RDS Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). For more information on Multi-AZ, please visit the below URL: https://aws.amazon.com/rds/details/multi-az/ Option A is invalid because Amazon RDS takes a snapshot of the source instance and creates a read-only instance from the snapshot. For MySQL, MariaDB and PostgreSQL, Amazon RDS uses those engines' native asynchronous replication to update the read replica whenever there is a change to the source DB instance. Option C is invalid because the Redis engine for Amazon ElastiCache supports replication with automatic failover, but the Redis engine's replication is asynchronous. Option D is invalid because this is not supported by AWS.

You work for a company that is deploying a hybrid cloud approach. Their legacy servers will remain on premises within their own datacenter; however, they will need to be able to communicate with the AWS environment over a site-to-site VPN connection. What do you need to do to establish the VPN connection? A. Connect to the environment using AWS Direct Connect. B. Assign a static routable address to the customer gateway C. Create a dedicated NAT and deploy this to the public subnet. D. Update your route table to add a route for the NAT to 0.0.0.0/0.

B. Assign a static routable address to the customer gateway This requirement is given in the AWS documentation for the customer gateway. The traffic from the VPC gateway must be able to leave the VPC and traverse the internet to reach the customer gateway. Hence the customer gateway needs to be assigned a static IP that is routable over the internet.

You run a website which hosts videos, and you have two types of members: premium fee-paying members and free members. All videos uploaded by both your premium members and free members are processed by a fleet of EC2 instances which poll SQS as videos are uploaded. However, you need to ensure that your premium fee-paying members' videos have a higher priority than your free members'. How do you design SQS? A. SQS allows you to set priorities on individual items within the queue, so simply set the fee-paying members at a higher priority than your free members. B. Create two SQS queues, one for premium members and one for free members. Program your EC2 fleet to poll the premium queue first and, if empty, to then poll your free members' SQS queue. C. SQS would not be suitable for this scenario. It would be much better to use SNS to encode the videos. D. Use SNS to notify when a premium member has uploaded a video and then process that video accordingly.

B. Create two SQS queues, one for premium members and one for free members. Program your EC2 fleet to poll the premium queue first and if empty, to then poll your free members SQS queue. In this case, you can have multiple SQS queues. The SQS queues for the premium members can be polled first by the EC2 Instances and then those messages can be processed.
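The polling order can be sketched with in-memory queues standing in for the two SQS queues (a pure-Python simulation of the logic, not the SQS API; the job names are made up):

```python
from collections import deque

# Premium jobs and free jobs land in separate queues.
premium_queue = deque(["premium-video-1", "premium-video-2"])
free_queue = deque(["free-video-1"])

def poll_next_job():
    """Drain the premium queue first; fall back to the free queue."""
    if premium_queue:
        return premium_queue.popleft()
    if free_queue:
        return free_queue.popleft()
    return None  # both queues empty

processed = [poll_next_job() for _ in range(3)]
print(processed)  # premium videos are handled before the free one
```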

You are working in the media industry and you have created a web application where users will be able to upload photos they create to your website. This web application must be able to call the S3 API in order to function. Where should you store your API credentials whilst maintaining the maximum level of security? A. Save the API credentials to your php files. B. Don't save your API credentials. Instead create a role in IAM and assign this role to an EC2 instance when you first create it. C. Save your API credentials in a public Github repository. D. Pass API credentials to the instance using instance userdata.

B. Don't save your API credentials. Instead create a role in IAM and assign this role to an EC2 instance when you first create it. Applications must sign their API requests with AWS credentials. Therefore, if you are an application developer, you need a strategy for managing credentials for your applications that run on EC2 instances. For example, you can securely distribute your AWS credentials to the instances, enabling the applications on those instances to use your credentials to sign requests, while protecting your credentials from other users. However, it's challenging to securely distribute credentials to each instance, especially those that AWS creates on your behalf, such as Spot Instances or instances in Auto Scaling groups. You must also be able to update the credentials on each instance when you rotate your AWS credentials. IAM roles are designed so that your applications can securely make API requests from your instances, without requiring you to manage the security credentials that the applications use.

There is a requirement to host a NoSQL database with a need for low latency. Which class of instances from the below list should they choose? A. T2 B. I2 C. T1 D. G2

B. I2 I2 instances are optimized to deliver tens of thousands of low-latency, random I/O operations per second (IOPS) to applications. They are well suited for the following scenarios: NoSQL databases (for example, Cassandra and MongoDB), clustered databases, and online transaction processing (OLTP) systems. For more information on I2 instances, please visit the link: https://aws.amazon.com/blogs/aws/amazon-ec2-new-i2-instance-type-available-now/

You have launched two web servers in a private subnet and one ELB (internet-facing) in a public subnet in your VPC. Yet, you are still unable to access your web application through the internet. Which of the following would likely be the cause of this? Choose two correct options A. Web server must be launched inside public subnet and not private subnet. B. Route table for public subnet is not configured to route to VPC internet gateway. C. No elastic IP is assigned to web servers. D. No internet gateway is attached to the VPC.

B. Route table for public subnet is not configured to route to VPC internet gateway. D. No internet gateway is attached to the VPC.

You have several AWS reserved instances in your account. They have been running for some time but now need to be shut down since they are no longer required. The data is still required for future purposes. Which 2 of the below steps can be taken? A. Convert the instances to on-demand instances B. Sell the instances on the AWS Reserved Instance Marketplace C. Take snapshots of the EBS volumes and terminate the instances D. Convert the instances to spot instances

B. Sell the instances on the AWS Reserved Instance Marketplace C. Take snapshots of the EBS volumes and terminate the instances The Reserved Instance Marketplace is a platform that supports the sale of third-party and AWS customers' unused Standard Reserved Instances, which vary in term lengths and pricing options. For example, you may want to sell Reserved Instances after moving instances to a new AWS region, changing to a new instance type, ending projects before the term expiration, when your business needs change, or if you have unneeded capacity. For more information on selling instances, please visit the below URL: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ri-market-general.html Since the data is still required, it's better to take snapshots of the existing volumes and then terminate the instances.

You are designing a site for a new start-up which generates cartoon images for people automatically. Customers will log on to the site and upload an image, which is stored in S3. The application then passes a job to AWS SQS, and a fleet of EC2 instances poll the queue to receive new processing jobs. These EC2 instances will then turn the picture into a cartoon and will need to store the processed job somewhere. Users will typically download the image once (immediately) and then never download the image again. What is the most commercially feasible method to store the processed images? A. Rather than use S3, store the images inside a BLOB on RDS with Multi-AZ configured for redundancy. B. Store the images on S3 RRS, and create a lifecycle policy to delete the image after 24 hours. C. Store the images on Glacier instead of S3. D. Use Elastic Block Store volumes to store the images.

B. Store the images on S3 RRS, and create a lifecycle policy to delete the image after 24 hours. Use AWS Reduced Redundancy Storage to save on costs, then use lifecycle policies to delete the data since it is no longer required. For more information on AWS Reduced Redundancy Storage, please refer to the below link: https://aws.amazon.com/s3/reduced-redundancy/ The AWS documentation mentions the following on lifecycle policies: Lifecycle configuration enables you to specify the lifecycle management of objects in a bucket. The configuration is a set of one or more rules, where each rule defines an action for Amazon S3 to apply to a group of objects. For more information on S3 lifecycle policies, please refer to the below link: http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html
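The lifecycle rule can be sketched as the configuration dict a boto3 put_bucket_lifecycle_configuration call would accept (illustrative only; the rule ID is made up, and since expiration is specified in whole days, "delete after 24 hours" maps to Days: 1):

```python
# Shape of an S3 lifecycle configuration that expires (deletes) objects
# one day after creation.
lifecycle_config = {
    "Rules": [
        {
            "ID": "delete-processed-images",  # hypothetical rule name
            "Filter": {"Prefix": ""},         # apply to every object
            "Status": "Enabled",
            "Expiration": {"Days": 1},        # smallest whole-day window
        }
    ]
}
print(lifecycle_config["Rules"][0]["Expiration"])
```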

You have 2 Ubuntu instances located in different subnets in the same VPC. To your understanding these instances should be able to communicate with each other, but when you try to ping from one instance to another, you get a timeout. The route tables seem to be valid and have the entry for the target 'local' for your VPC CIDR. Which of the following could be a valid reason for this issue? A. The instances are of the wrong AMI, hence you are not able to ping the instances. B. The security group has not been modified to allow the required traffic. C. The instances don't have a Public IP, so the ping commands cannot be routed D. The instances don't have an Elastic IP, so the ping commands cannot be routed

B. The security group has not been modified to allow the required traffic. The security groups need to be configured to ensure that ping commands can go through: the ICMP protocol must be allowed in the Inbound Rules of the web security group so that the ping packets can reach the instances. Option A is invalid because the AMI will not impact the ping command. Options C and D are invalid because even if you have a Public IP or Elastic IP allocated to the instance, you still need to ensure there is a route to the internet gateway and that the web security groups are configured accordingly.
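The missing inbound rule can be sketched as the permission dict that EC2's authorize_security_group_ingress call accepts (the CIDR and group ID below are placeholders for this example):

```python
# Inbound rule allowing all ICMP (which covers ping's echo
# request/reply); -1 for the ports means all ICMP types and codes.
icmp_rule = {
    "IpProtocol": "icmp",
    "FromPort": -1,
    "ToPort": -1,
    "IpRanges": [{"CidrIp": "10.0.0.0/16"}],  # placeholder VPC CIDR
}

# With boto3: ec2.authorize_security_group_ingress(
#     GroupId="sg-xxxxxxxx", IpPermissions=[icmp_rule])
print(icmp_rule["IpProtocol"])
```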

Your application is on an EC2 instance in AWS. Users use the application to upload a file to S3. The message first goes to an SQS queue before it is picked up by a worker process, which fetches the object and uploads it to S3. An email is then sent on successful completion of the upload. You notice, though, that you are getting numerous emails for each request, when ideally you should be getting only one final email notification for each successful upload. Which of the below could be the possible reason for this? A. The application is configured for long polling, so the messages are being picked up multiple times. B. The application is not deleting the messages from SQS. C. The application is configured for short polling, so some messages are not being picked up D. The application is not reading the message properly from the SQS queue.

B. The application is not deleting the messages from SQS. When you look at the message lifecycle for SQS queues, one of the most important aspects is deleting the messages after they have been read; otherwise they become visible again once the visibility timeout expires and are delivered to the worker again. Options A and C are invalid because even with short or long polling, the application should be able to read the messages eventually. The key point is that the deletion of messages is not happening after they have been read. Option D is invalid because if the messages were not being read properly, the application would not send successful notifications.
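A toy model of the visibility timeout shows why an undeleted message causes duplicate emails (a pure-Python simulation, not the SQS API; names are made up for this sketch):

```python
# One message sits in an in-memory "queue"; a worker that forgets to
# delete it sees the same message again after the visibility timeout.
queue = [{"body": "upload-job-1", "visible": True}]
emails_sent = 0

def receive_and_process(delete_after: bool):
    global emails_sent
    for msg in queue:
        if msg["visible"]:
            msg["visible"] = False     # in flight: hidden from other workers
            emails_sent += 1           # worker sends the notification email
            if delete_after:
                queue.remove(msg)      # correct: delete after processing
            else:
                msg["visible"] = True  # timeout expires: message reappears
            return

receive_and_process(delete_after=False)  # buggy worker: message comes back
receive_and_process(delete_after=True)   # fixed worker: message is deleted
print(emails_sent)  # 2 emails for a single upload
```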

You want to have a VPC created in AWS which will host an application. The application will just consist of web and database servers. The application only needs to be accessed from the internet by internet users. Which of the following VPC configuration wizard options would you use? A. VPC with a Single Public Subnet Only B. VPC with Public and Private Subnets C. VPC with Public and Private Subnets and Hardware VPN Access D. VPC with a Private Subnet Only and Hardware VPN Access

B. VPC with Public and Private Subnets The configuration for this scenario includes a virtual private cloud (VPC) with a public subnet and a private subnet. We recommend this scenario if you want to run a public-facing web application while maintaining back-end servers that aren't publicly accessible. A common example is a multi-tier website, with the web servers in a public subnet and the database servers in a private subnet. You can set up security and routing so that the web servers can communicate with the database servers. Option A is invalid because ideally you need a private subnet to host the database server. Options C and D are invalid because there is no case of accessing the application from on-premises locations using VPN connections.

Your supervisor asks you to create a decoupled application whose process includes dependencies on EC2 instances and servers located in your company's on-premises data center. Which of these are you least likely to recommend as part of that process? Choose the correct answer from the options below: A. SQS polling from an EC2 instance deployed with an IAM role B. An SWF workflow C. SQS polling from an EC2 instance using IAM user credentials D. SQS polling from an on-premises server using IAM user credentials

C Note that the question asks you for the least likely recommended option. The correct answer is C, SQS polling from an EC2 instance using IAM user credentials. An EC2 role should be used when deploying EC2 instances to grant permissions, rather than storing IAM user credentials in EC2 instances. You should use IAM roles for secure communication between EC2 instances and resources on AWS: your most likely scenario will actually be SQS polling from an EC2 instance deployed with an IAM role, because when you're polling SQS from EC2 you should use IAM roles, and you should never use IAM user API keys for authentication to poll SQS messages. An IAM role is similar to a user, in that it is an AWS identity with permission policies that determine what the identity can and cannot do in AWS. However, instead of being uniquely associated with one person, a role is intended to be assumable by anyone who needs it. Also, a role does not have any long-term credentials (password or access keys) associated with it. Instead, when a user assumes a role, access keys are created dynamically and provided to the user.

What is the basic requirement to login into an EC2 instance on the AWS cloud? A. Volumes B. AMI's C. Key Pairs D. S3

C Amazon EC2 uses public-key cryptography to encrypt and decrypt login information. Public-key cryptography uses a public key to encrypt a piece of data, such as a password; the recipient then uses the private key to decrypt the data. The public and private keys are known as a key pair. To log in to your instance, you must create a key pair, specify the name of the key pair when you launch the instance, and provide the private key when you connect to the instance. Linux instances have no password, and you use a key pair to log in using SSH. With Windows instances, you use a key pair to obtain the administrator password and then log in using RDP. When you launch an EC2 instance, you will be asked to either create a new key pair or select an existing one. The private key is a .pem file which you can then use to log in to your instance.

There is a company website that is going to be launched in the coming weeks. There is a probability that the traffic will be quite high in the first couple of weeks. In the event of a load failure, how can you set up DNS failover to a static website? Choose the correct answer from the options given below. A. Duplicate the exact application architecture in another region and configure DNS weight-based routing B. Enable failover to an on-premise data center to the application hosted there. C. Use Route 53 with the failover option to failover to a static S3 website bucket or CloudFront distribution. D. Add more servers in case the application fails.

C Amazon Route 53 health checks monitor the health and performance of your web applications, web servers, and other resources. If you have multiple resources that perform the same function, you can configure DNS failover so that Amazon Route 53 will route your traffic from an unhealthy resource to a healthy resource. For example, if you have two web servers and one web server becomes unhealthy, Amazon Route 53 can route traffic to the other web server. So you can route traffic to a website hosted on S3 or to a CloudFront distribution.
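The failover pair can be sketched as the two record sets a Route 53 change_resource_record_sets call would carry (field names follow that API; the domain, IP, health check ID, and alias target here are assumptions made up for this example):

```python
# PRIMARY record points at the application and is tied to a health
# check; SECONDARY is an alias to the static S3 website and is served
# only when the primary is unhealthy.
primary = {
    "Name": "www.example.com.",
    "Type": "A",
    "SetIdentifier": "primary-web",
    "Failover": "PRIMARY",
    "TTL": 60,
    "ResourceRecords": [{"Value": "203.0.113.10"}],  # placeholder IP
    "HealthCheckId": "hc-placeholder-id",            # hypothetical health check
}
secondary = {
    "Name": "www.example.com.",
    "Type": "A",
    "SetIdentifier": "static-site",
    "Failover": "SECONDARY",
    "AliasTarget": {                                 # alias to the S3 website endpoint
        "HostedZoneId": "ZONE-ID-PLACEHOLDER",
        "DNSName": "s3-website-us-east-1.amazonaws.com.",
        "EvaluateTargetHealth": False,
    },
}
print(primary["Failover"], secondary["Failover"])
```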

Which of the following approaches provides the lowest cost for Amazon Elastic Block Store snapshots while giving you the ability to fully restore data? A. Maintain two snapshots: the original snapshot and the latest incremental snapshot. B. Maintain a volume snapshot; subsequent snapshots will overwrite one another C. Maintain a single snapshot; the latest snapshot is both incremental and complete. D. Maintain the most current snapshot, archive the original and incremental to Amazon Glacier.

C EBS snapshots are incremental yet complete, so you don't need to maintain multiple snapshots if you are looking to reduce costs. You can easily create a snapshot from a volume while the instance is running and the volume is in use; you can do this from the EC2 dashboard. The AWS docs provide the following example: in State 3, the volume has not changed since State 2, but Snapshot A has been deleted. The 6 GiB of data stored in Snapshot A that were referenced by Snapshot B have now been moved to Snapshot B. As a result, you are still charged for storing 10 GiB of data: 6 GiB of unchanged data preserved from Snap A, and 4 GiB of changed data from Snap B. So, as the diagram in the AWS documentation shows, when we take multiple snapshots the storage occupied would be Snap A (10 GiB) + Snap B (4 GiB) + Snap C (2 GiB), 16 GiB in total. In a real production environment, the data on the EBS volume changes every minute or even every second, so keeping multiple snapshots would consume more storage. Hence, Option C is correct: it provides the lowest cost while still allowing a full restore of the data. Please refer to the following document on how incremental backups work: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSSnapshots.html#how_snapshots_work
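The billing arithmetic above can be sketched with a small model (plain Python with hypothetical sizes in GiB, not an AWS API):

```python
# Model of incremental EBS snapshot billing: each snapshot stores only the
# blocks changed since the previous one, and deleting an older snapshot
# moves any blocks still referenced by a newer snapshot into that snapshot.

def total_billed_gib(snapshots):
    """Total storage billed: the sum of each snapshot's incremental size."""
    return sum(snapshots.values())

# Snap A: full 10 GiB; Snap B: 4 GiB changed; Snap C: 2 GiB changed
snaps = {"A": 10, "B": 4, "C": 2}
assert total_billed_gib(snaps) == 16   # the 16 GiB figure from the text

# Deleting Snap A does not free its whole 10 GiB: the 6 GiB it held that
# Snap B still references is folded into Snap B (Snap B becomes 6 + 4 GiB).
snaps = {"B": 6 + 4, "C": 2}
assert total_billed_gib(snaps) == 12
```

This is why retaining only the latest snapshot is the cheapest option that still allows a full restore.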

What is the difference between an availability zone and an edge location? Choose the correct answer from the options below A. Edge locations are used as control stations for AWS resources B. An edge location is used as a link when building load balancing between regions C. An Availability Zone is an isolated location inside a region; an edge location will deliver cached content to the closest location to reduce latency D. An availability zone is a grouping of AWS resources in a specific region; an edge location is a specific resource within the AWS region

C Edge locations: Using a network of edge locations around the world, Amazon CloudFront caches copies of your static content close to viewers, lowering latency when they download your objects and giving you the high, sustained data transfer rates needed to deliver large popular objects to end users at scale.

You are using IoT sensors to monitor the number of bags that are handled at an airport. The data gets sent back to a Kinesis stream with default settings. Every alternate day, the data from the stream is sent to S3 for processing. But you notice that S3 is not receiving all of the data that is being sent to the Kinesis stream. What could be the reason for this? A. The sensors probably stopped working on some days, hence data is not sent to the stream. B. S3 can only store data for a day C. Data records are only accessible for a default of 24 hours from the time they are added to a stream D. Kinesis streams are not meant to handle IoT related data

C Kinesis Streams supports changes to the data record retention period of your stream. A Kinesis stream is an ordered sequence of data records meant to be written to and read from in real time. Data records are therefore stored temporarily in shards in your stream. The time period from when a record is added to when it is no longer accessible is called the retention period. A Kinesis stream stores records for 24 hours by default, up to 168 hours. Option A, even though a possibility, cannot be taken for granted as the right option. Option B is invalid since S3 can store data indefinitely unless you have a lifecycle policy defined. Option D is invalid because the Kinesis service is perfect for this sort of data ingestion. For more information on Kinesis data retention, please refer to the below URL: http://docs.aws.amazon.com/streams/latest/dev/kinesis-extended-retention.html
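The retention window described above can be illustrated with a small sketch (plain Python modeling the rule, not the Kinesis API):

```python
# A Kinesis record is readable only while its age is within the stream's
# retention period: 24 hours by default, extendable up to 168 hours.

DEFAULT_RETENTION_HOURS = 24
MAX_RETENTION_HOURS = 168

def is_accessible(record_age_hours, retention_hours=DEFAULT_RETENTION_HOURS):
    if not DEFAULT_RETENTION_HOURS <= retention_hours <= MAX_RETENTION_HOURS:
        raise ValueError("retention must be between 24 and 168 hours")
    return record_age_hours <= retention_hours

# Reading every alternate day (48 h) misses records under default retention
assert is_accessible(48) is False
# Extending retention to 72 hours would cover the 48-hour gap
assert is_accessible(48, retention_hours=72) is True
```

This matches the scenario in the question: a consumer that reads every other day loses records under the 24-hour default.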

You are working for an enterprise and have been asked to put a support plan in place from AWS with: 1) 24x7 access to support. 2) Access to the full set of Trusted Advisor checks. Which of the following would meet these requirements while ensuring that cost is kept at a minimum? A. Basic B. Developer C. Business D. Enterprise

C Some of the features of Business support are: 1) 24x7 access to customer service, documentation, whitepapers, and support forums 2) Access to the full set of Trusted Advisor checks 3) 24x7 access to Cloud Support Engineers via email, chat & phone. Options A and B are invalid because they include access to the 6 core Trusted Advisor checks only and do not provide 24x7 support. Option D is invalid because, even though it fulfils all requirements, it is an expensive option; since Business support already covers the requirements, it should be selected when cost is a consideration. For a full comparison of plans, please visit the following URL: https://aws.amazon.com/premiumsupport/compare-plans/

You are running an instance store-backed instance. You shut down and then start the instance. You then notice that the data which you had saved earlier is no longer available. What might be the cause of this? Choose the correct answer from the options below A. The volume was not big enough to handle all of the processing data B. The EC2 instance was using EBS-backed root volumes, which are ephemeral and only live for the life of the instance C. The EC2 instance was using instance store volumes, which are ephemeral and only live for the life of the instance D. The instance might have been compromised

C The data in an instance store persists only during the lifetime of its associated instance. If an instance reboots (intentionally or unintentionally), data in the instance store persists. However, data in the instance store is lost under the following circumstances: the underlying disk drive fails, the instance stops, or the instance terminates.

You have written a CloudFormation template that creates 1 elastic load balancer fronting 2 EC2 instances. Which section of the template should you edit so that the DNS of the load balancer is returned upon creation of the stack? A. Resources B. Parameters C. Outputs D. Mappings

C The example shows a simple CloudFormation template. It creates an EC2 instance based on the AMI ami-d6f32ab5. When the stack is created, it will output the Availability Zone in which the instance is created. { "Resources": { "MyEC2Instance": { "Type": "AWS::EC2::Instance", "Properties": { "ImageId": "ami-d6f32ab5" } } }, "Outputs": { "Availability": { "Description": "The Availability Zone of the instance", "Value": { "Fn::GetAtt": [ "MyEC2Instance", "AvailabilityZone" ] } } } }

Which of the following is incorrect with regards to Private IP addresses? A. In Amazon EC2 classic, the private IP addresses are only returned to Amazon EC2 when the instance is stopped or terminated B. In Amazon VPC, an instance retains its private IP addresses when the instance is stopped. C. In Amazon VPC, an instance does NOT retain its private IP addresses when the instance is stopped. D. In Amazon EC2 classic, the private IP address is associated exclusively with the instance for its lifetime

C The following is true with regards to private IP addressing: For instances launched in a VPC, a private IPv4 address remains associated with the network interface when the instance is stopped and restarted, and is released when the instance is terminated. For instances launched in EC2-Classic, the private IPv4 address is released when the instance is stopped or terminated. If you restart your stopped instance, it receives a new private IPv4 address. For more information on IP addressing, please refer to the below link: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-instance-addressing.html

You run an automobile reselling company that has a popular online store on AWS. The application sits behind an Auto Scaling group and requires new instances of the Auto Scaling group to identify their public and private IP addresses. How can you achieve this? A. By using ipconfig for Windows or ifconfig for Linux. B. By using a CloudWatch metric. C. Using a curl or GET command to get the latest meta-data from http://169.254.169.254/latest/meta-data/ D. Using a curl or GET command to get the latest user-data from http://169.254.169.254/latest/user-data/

C To get the private and public IP addresses, you can query the following URLs from the running instance: http://169.254.169.254/latest/meta-data/local-ipv4 and http://169.254.169.254/latest/meta-data/public-ipv4. Option A is partially correct, but it is an overhead when the metadata service is already available in AWS. Option B is incorrect because you cannot get the IP address from a CloudWatch metric. Option D is incorrect because user-data does not contain the IP addresses. For more information on instance metadata, please refer to the below URL: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html

As part of your application architecture requirements, the company you are working for has requested the ability to run analytics against all combined log files from the Elastic Load Balancer. Which services are used together to collect logs and process log file analysis in an AWS environment? Choose the correct answer from the options given below A. Amazon S3 for storing the ELB log files and EC2 for processing the log files in analysis B. Amazon DynamoDB to store the logs and EC2 for running custom log analysis scripts C. Amazon S3 for storing ELB log files and Amazon EMR for processing the log files in analysis D. Amazon EC2 for storing and processing the log files

C You can use Amazon EMR for processing the logs. Amazon EMR provides a managed Hadoop framework that makes it easy, fast, and cost-effective to process vast amounts of data across dynamically scalable Amazon EC2 instances. You can also run other popular distributed frameworks such as Apache Spark, HBase, Presto, and Flink in Amazon EMR, and interact with data in other AWS data stores such as Amazon S3 and Amazon DynamoDB. Amazon EMR securely and reliably handles a broad set of big data use cases, including log analysis, web indexing, data transformations (ETL), machine learning, financial analysis, scientific simulation, and bioinformatics.

A VPC has been set up with a public subnet and an internet gateway. You set up an EC2 instance with a public IP, but you are still not able to connect to it via the Internet. You can see that the right security groups are in place. What should you do to ensure you can connect to the EC2 instance from the internet? A. Set an Elastic IP address on the EC2 instance B. Set a secondary private IP address on the EC2 instance C. Ensure the right route entry is there in the route table D. There must be some issue in the EC2 instance. Check the system logs.

C You have to ensure that the route table has an entry to the Internet gateway, because this is required for instances to communicate over the internet. The diagram shows the configuration of the public subnet in a VPC. Option A is wrong because you already have a public IP assigned to the instance, so this should be enough to connect to the Internet. Option B is wrong because private IPs cannot be accessed from the internet. Option D is wrong because the route table is what is causing the issue, not the system.
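The route-table check described above can be sketched as a small model (plain Python with hypothetical route tables, not the EC2 API):

```python
# A subnet can reach the internet only if its route table sends
# internet-bound traffic (0.0.0.0/0) to an internet gateway (igw-*).

def reaches_internet(route_table):
    target = route_table.get("0.0.0.0/0", "")
    return target.startswith("igw-")

# Public subnet: local route plus a default route to the internet gateway
public_routes = {"10.0.0.0/16": "local", "0.0.0.0/0": "igw-1a2b3c4d"}
assert reaches_internet(public_routes) is True

# Missing default route: even an instance with a public IP is unreachable
private_routes = {"10.0.0.0/16": "local"}
assert reaches_internet(private_routes) is False
```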

What does the following command do with respect to the Amazon EC2 security groups? revoke-security-group-ingress A. Removes one or more security groups from a rule. B. Removes one or more security groups from an Amazon EC2 instance. C. Removes one or more rules from a security group.

C Removes one or more ingress rules from a security group. The values that you specify in the revoke request (for example, ports) must match the existing rule's values for the rule to be removed. Each rule consists of the protocol and the CIDR range or source security group. For the TCP and UDP protocols, you must also specify the destination port or range of ports. For the ICMP protocol, you must also specify the ICMP type and code.
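A minimal sketch of the exact-match behavior described above (plain Python modeling the rule set, not the EC2 API; the rules shown are hypothetical):

```python
# revoke-security-group-ingress removes a rule only when the supplied
# values (protocol, port range, CIDR) exactly match an existing rule.

def revoke_ingress(rules, protocol, from_port, to_port, cidr):
    rule = (protocol, from_port, to_port, cidr)
    if rule in rules:
        rules.remove(rule)
        return True
    return False  # no exact match: nothing is removed

rules = [("tcp", 22, 22, "0.0.0.0/0"), ("tcp", 80, 80, "10.0.0.0/16")]
# Matching revoke succeeds and removes the SSH rule
assert revoke_ingress(rules, "tcp", 22, 22, "0.0.0.0/0") is True
# Port mismatch: the HTTP rule stays in place
assert revoke_ingress(rules, "tcp", 443, 443, "10.0.0.0/16") is False
assert len(rules) == 1
```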

There is a requirement by a company that does online credit card processing to have a secure application environment on AWS. They are trying to decide whether to use KMS or CloudHSM. Which of the following statements is right when it comes to CloudHSM and KMS? Choose the correct answer from the options given below A. It probably doesn't matter as they both do the same thing B. AWS CloudHSM does not support the processing, storage, and transmission of credit card data by a merchant or service provider, as it has not been validated as being compliant with Payment Card Industry (PCI) Data Security Standard (DSS); hence, you will need to use KMS C. KMS is probably adequate unless additional protection is necessary for some applications and data that are subject to strict contractual or regulatory requirements for managing cryptographic keys, then HSM should be used D. AWS CloudHSM should always be used for any payment transactions

C AWS Key Management Service (KMS) is a managed service that makes it easy for you to create and control the encryption keys used to encrypt your data, and uses Hardware Security Modules (HSMs) to protect the security of your keys. This is sufficient if you have basic key-management needs. For more information on KMS, please refer to the below link: https://aws.amazon.com/kms/ For stricter security requirements, you can use CloudHSM. The AWS CloudHSM service helps you meet corporate, contractual and regulatory compliance requirements for data security by using dedicated Hardware Security Module (HSM) appliances within the AWS cloud. With CloudHSM, you control the encryption keys and cryptographic operations performed by the HSM.

A customer wants to apply a group of database-specific settings to the Relational Database instances in their AWS account. Which of the following options can be used to apply the settings in one go for all of the Relational Database instances? A. Security Groups B. NACL Groups C. Parameter Groups D. IAM Roles.

C DB Parameter Groups are used to assign specific settings which can be applied to a set of RDS instances in AWS. In RDS, when you go to Parameter Groups, you can create a new parameter group. In the parameter group itself, you have a lot of database-related settings that can be assigned to the database. Options A, B and D are wrong because they are concerned with which resources have access to the database, not with database settings.

What are the main benefits of IAM groups? Choose 2 answers from the options below A. Ability to create custom permission policies. B. Allow for EC2 instances to gain access to S3. C. Easier user/policy management. D. Assign IAM permission policies to more than one user at a time.

C D An IAM group is a collection of IAM users. Groups let you specify permissions for multiple users, which can make it easier to manage the permissions for those users. For example, you could have a group called Admins and give that group the types of permissions that administrators typically need. Any user in that group automatically has the permissions that are assigned to the group. If a new user joins your organization and needs administrator privileges, you can assign the appropriate permissions by adding the user to that group. Similarly, if a person changes jobs in your organization, instead of editing that user's permissions, you can remove him or her from the old groups and add him or her to the appropriate new groups.

You are an AWS administrator for your company. The company currently has a set of AWS resources hosted in a particular region. You have been requested by your supervisor to create a script which could create duplicate resources in another region in case of a disaster. Which of the below AWS services could help fulfil this requirement? A. AWS Elastic Beanstalk B. AWS SQS C. AWS CloudFormation D. AWS SNS

C. AWS CloudFormation AWS CloudFormation is a service that helps you model and set up your Amazon Web Services resources so that you can spend less time managing those resources and more time focusing on your applications that run in AWS. You create a template that describes all the AWS resources that you want (like Amazon EC2 instances or Amazon RDS DB instances), and AWS CloudFormation takes care of provisioning and configuring those resources for you. Option A is invalid because Elastic Beanstalk is good for getting a defined set of application resources up and running, but it cannot be used to duplicate infrastructure as code across regions. Option B is invalid because this is the Simple Queue Service, which is used for sending messages. Option D is invalid because this is the Simple Notification Service, which is used for sending notifications.

A t2.medium EC2 instance type must be launched with what type of Amazon Machine Image (AMI)? A. An Instance store Hardware Virtual Machine AMI B. An Instance store Paravirtual AMI C. An Amazon EBS-backed Hardware Virtual Machine AMI D. An Amazon EBS-backed Paravirtual AMI

C. An Amazon EBS-backed Hardware Virtual Machine AMI The AWS documentation mentions the below: Linux Amazon Machine Images use one of two types of virtualization: paravirtual (PV) or hardware virtual machine (HVM). The main difference between PV and HVM AMIs is the way in which they boot and whether they can take advantage of special hardware extensions (CPU, network, and storage) for better performance. T2 instances support HVM AMIs only and must be launched from EBS-backed AMIs.

You have a set of EC2 instances launched via Auto Scaling. You now want to change the instance type for the instances that will be launched in the future via Auto Scaling. What would you do in such a case? A. Change the Launch Configuration to reflect the new instance type B. Change the Auto Scaling group and add the new instance type. C. Create a new Launch Configuration with the new instance type and replace the existing Launch Configuration attached to the Auto Scaling group. D. Create a new Launch Configuration with the new instance type and add it along with the existing Launch Configuration attached to the Auto Scaling group.

C. Create a new Launch Configuration with the new instance type and replace the existing Launch configuration attached to the Autoscaling Group. The AWS Documentation mentions the following When you create an Auto Scaling group, you must specify a launch configuration. You can specify your launch configuration with multiple Auto Scaling groups. However, you can only specify one launch configuration for an Auto Scaling group at a time, and you can't modify a launch configuration after you've created it. Therefore, if you want to change the launch configuration for your Auto Scaling group, you must create a launch configuration and then update your Auto Scaling group with the new launch configuration.

You have a high performance compute application and you need to minimize network latency between EC2 instances as much as possible. What can you do to achieve this? A. Use Elastic Load Balancing to load balance traffic between availability zones B. Create a CloudFront distribution to cache objects from an S3 bucket at Edge Locations. C. Create a placement group within an Availability Zone and place the EC2 instances within that placement group. D. Deploy your EC2 instances within the same region, but in different subnets and different availability zones so as to maximize redundancy.

C. Create a placement group within an Availability Zone and place the EC2 instances within that placement group. The AWS documentation mentions the following on placement groups: A placement group is a logical grouping of instances within a single Availability Zone. Placement groups are recommended for applications that benefit from low network latency, high network throughput, or both. To provide the lowest latency, and the highest packet-per-second network performance for your placement group, choose an instance type that supports enhanced networking. For more information on placement groups, please refer to the below link: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html

A company wants to have a 50 Mbps dedicated connection to its AWS resources. Which of the below services can help fulfil this requirement? A. Virtual Private Gateway B. Virtual Private Network (VPN) C. Direct Connect D. Internet Gateway

C. Direct Connect AWS Direct Connect makes it easy to establish a dedicated network connection from your premises to AWS. Using AWS Direct Connect, you can establish private connectivity between AWS and your datacenter, office, or colocation environment, which in many cases can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections. For more information on AWS Direct Connect, please visit the below URL: https://aws.amazon.com/directconnect/ If you require a port speed of less than 1 Gbps, you cannot request a connection using the console. Instead, contact an APN partner, who will create a hosted connection for you. The hosted connection appears in your AWS Direct Connect console and must be accepted before use.

You work for a company that stores records for a minimum of 10 years. Most of these records will never be accessed but must be made available upon request (within a few hours). What is the most cost-effective storage option? A. S3-IA B. Reduced Redundancy Storage (RRS) C. Glacier D. AWS Import/Export

C. Glacier Amazon Glacier is a secure, durable, and extremely low-cost cloud storage service for data archiving and long-term backup. Customers can reliably store large or small amounts of data for as little as $0.004 per gigabyte per month, a significant savings compared to on-premises solutions. To keep costs low yet suitable for varying retrieval needs, Amazon Glacier provides three options for access to archives, ranging from a few minutes to several hours.

Which of the following is not a feature provided by Route53? A. Registration of Domain Names B. Routing of internet traffic to domain resources C. Offloading content to cache locations D. Health check of resources

C. Offloading content to cache locations Options A, B and D are features provided by Route 53: Register domain names: Your website needs a name, such as example.com. Amazon Route 53 lets you register a name for your website or web application, known as a domain name. Route internet traffic to the resources for your domain: When a user opens a web browser and enters your domain name in the address bar, Amazon Route 53 helps the Domain Name System (DNS) connect the browser with your website or web application. Check the health of your resources: Amazon Route 53 sends automated requests over the internet to a resource, such as a web server, to verify that it's reachable, available, and functional. You can also choose to receive notifications when a resource becomes unavailable and choose to route internet traffic away from unhealthy resources. Option C is a feature provided by the AWS content delivery service, CloudFront.

You are a solutions architect working for a large oil and gas company. Your company runs their production environment on AWS and has a custom VPC. The VPC contains 3 subnets, 1 of which is public and the other 2 are private. Inside the public subnet is a fleet of EC2 instances which are the result of an auto scaling group. All EC2 instances are in the same security group. Your company has created a new custom application which connects to mobile devices using a custom port. This application has been rolled out to production and you need to open this port globally to the internet. What steps should you take to do this, and how quickly will the change occur? A. Open the port on the existing network Access Control List. Your EC2 instances will be able to communicate on this port after a reboot. B. Open the port on the existing network Access Control List. Your EC2 instances will be able to communicate over this port immediately. C. Open the port on the existing security group. Your EC2 instances will be able to communicate over this port immediately. D. Open the port on the existing security group. Your EC2 instances will be able to communicate over this port as soon as the relevant Time To Live (TTL) expires.

C. Open the port on the existing security group. Your EC2 instances will be able to communicate over this port immediately. You can use the security group and change the inbound rules so that traffic will be allowed on the custom port. When you make a change to security groups or network ACLs, the change is applied immediately. This is clearly stated in the AWS documentation.

Which of the following, when used alongside the AWS Security Token Service, can be used to provide a single sign-on experience for existing users who are part of an organization using on-premises applications? A. OpenID Connect B. JSON C. SAML 2.0 D. OAuth

C. SAML 2.0 You can authenticate users in your organization's network, and then provide those users access to AWS without creating new AWS identities for them and requiring them to sign in with a separate user name and password. This is known as the single sign-on (SSO) approach to temporary access. AWS STS supports open standards like Security Assertion Markup Language (SAML) 2.0, with which you can use Microsoft AD FS to leverage your Microsoft Active Directory. Options A and D are incorrect because these are used when you want users to sign in using a well-known third-party identity provider such as Login with Amazon, Facebook, or Google. Option B is incorrect because JSON is a data-interchange format, not an authentication standard.

An application is currently configured on an EC2 instance to process messages in SQS. The queue has been created with the default settings. The application is configured to read the messages only once a week. It has been noticed that not all the messages are being picked up by the application. What could be the issue? A. The application is configured for long polling, so some messages are not being picked up B. The application is configured for short polling, so some messages are not being picked up C. Some of the messages have surpassed the retention period defined for the queue D. Some of the messages don't have the right permissions to be picked up by the application

C. Some of the messages have surpassed the retention period defined for the queue When you create an SQS queue with the default options, the message retention period is 4 days. So if the application processes the messages just once a week, there is a chance that messages sent at the start of the week will be deleted before they can be picked up by the application. Options A and B are invalid because, whether you use short or long polling, the application should still be able to read the messages eventually. Option D is invalid because permissions are set at the queue level, not per message.
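The retention effect described above can be sketched as follows (plain Python modeling the rule, not the SQS API):

```python
# With default settings an SQS message is retained for 4 days; a consumer
# that polls only once a week will miss messages older than that.

DEFAULT_RETENTION_DAYS = 4

def messages_still_available(send_days_ago, retention_days=DEFAULT_RETENTION_DAYS):
    """Return the ages of messages that have not yet expired."""
    return [d for d in send_days_ago if d <= retention_days]

# Messages sent 1, 3, 5 and 7 days before the weekly read
sent = [1, 3, 5, 7]
assert messages_still_available(sent) == [1, 3]  # 5- and 7-day-old messages are gone
# Raising the retention period (the maximum is 14 days) would keep them all
assert messages_still_available(sent, retention_days=14) == [1, 3, 5, 7]
```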

A company is hosting EC2 instances for non-production, non-priority batch workloads. These processes can also be interrupted at any time. What is the best pricing model which can be used for EC2 instances in this case? A. Reserved Instances B. On-Demand Instances C. Spot Instances D. Regular Instances

C. Spot Instances Spot instances enable you to bid on unused EC2 capacity, which can lower your Amazon EC2 costs significantly. The hourly price for a Spot instance (of each instance type in each Availability Zone) is set by Amazon EC2, and fluctuates depending on the supply of and demand for Spot instances. Your Spot instance runs whenever your bid exceeds the current market price. Spot instances are a cost-effective choice if you can be flexible about when your applications run and if your applications can be interrupted. For example, Spot instances are well suited for data analysis, batch jobs, background processing, and optional tasks. Option A is invalid because, even though Reserved Instances can reduce costs, they are best for workloads that are active for a longer period of time rather than for batch processes which may last for a shorter period. Option B is not right because On-Demand Instances tend to be more expensive than Spot Instances. Option D is invalid because there is no concept of Regular Instances in AWS.
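The bidding model described above can be sketched with a small simulation (plain Python with hypothetical prices, not the EC2 API):

```python
# Under the bidding model described in the text, a Spot instance runs while
# the bid meets or exceeds the current market price, and is interrupted
# when the market price rises above the bid.

def spot_running(bid, market_price):
    return bid >= market_price

prices = [0.03, 0.05, 0.12, 0.04]   # hypothetical hourly market prices ($)
bid = 0.06
running = [spot_running(bid, p) for p in prices]
assert running == [True, True, False, True]  # interrupted when price hit 0.12
```

This interruption behavior is exactly why Spot suits non-priority batch workloads that tolerate being stopped mid-run.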

Which of the following describes the term 'Golden Image'? A. This is the basic AMI which is available in AWS. B. This refers to an instance which has been bootstrapped. C. This refers to an AMI that has been constructed from a customized image. D. This refers to a special type of Linux AMI.

C. This refers to an AMI that has been constructed from a customized image. You can customize an Amazon EC2 instance and then save its configuration by creating an Amazon Machine Image (AMI). You can launch as many instances from the AMI as you need, and they will all include the customizations that you've made. Each time you want to change your configuration you will need to create a new golden image, so you will need a versioning convention to manage your golden images over time. Because of the above explanation, all of the remaining options are invalid.

A company has the following EC2 instance configuration. They are trying to connect to the instance from the internet. They have verified the existence of the Internet gateway and the route tables are in place. What could be the issue? A. It's launched in the wrong Availability Zone B. The AMI used to launch the instance cannot be accessed from the internet C. The private IP is wrongly assigned D. There is no Elastic IP Assigned

D An instance must have either a public IP or an Elastic IP address in order to be accessible from the internet. A public IP address is reachable from the Internet. You can use public IP addresses for communication between your instances and the Internet. An Elastic IP address is a static IP address designed for dynamic cloud computing. An Elastic IP address is associated with your AWS account. With an Elastic IP address, you can mask the failure of an instance or software by rapidly remapping the address to another instance in your account. An Elastic IP address is a public IP address, which is reachable from the Internet. If your instance does not have a public IP address, you can associate an Elastic IP address with your instance to enable communication with the Internet; for example, to connect to your instance from your local computer.

A photo-sharing service stores pictures in Amazon Simple Storage Service (S3) and allows application sign-in using an OpenID Connect-compatible identity provider. Which AWS Security Token Service approach to temporary access should you use for the Amazon S3 operations? A. SAML-based Identity Federation B. Cross-Account Access C. AWS Identity and Access Management roles D. Web Identity Federation

D The AWS documentation mentions the below: With web identity federation, you don't need to create custom sign-in code or manage your own user identities. Instead, users of your app can sign in using a well-known identity provider (IdP), such as Login with Amazon, Facebook, Google, or any other OpenID Connect (OIDC)-compatible IdP, receive an authentication token, and then exchange that token for temporary security credentials in AWS that map to an IAM role with permissions to use the resources in your AWS account. Using an IdP helps you keep your AWS account secure, because you don't have to embed and distribute long-term security credentials with your application.

You are the solution architect for a company. The company has a requirement to deploy an application which will need to have session management in place. Which of the following services can be used to store session data for session management? A. AWS Storage Gateway, ElastiCache & ELB B. ELB, ElastiCache & RDS C. CloudWatch, RDS & DynamoDB D. RDS, DynamoDB & ElastiCache.

D These options are the best when it comes to storing session data. Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory data store or cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory data stores, instead of relying entirely on slower disk-based databases. For more information, please visit the below URL: https://aws.amazon.com/elasticache/ For DynamoDB, this is also evident from the AWS documentation. For more information, please visit the below URL: http://docs.aws.amazon.com/gettingstarted/latest/awsgsg-intro/gsg-aws-database.html And by default in the industry, RDS has been used to store session data. The Elastic Load Balancer, AWS Storage Gateway and CloudWatch cannot store session data.

When running my DB Instance as a Multi-AZ deployment, can I use the standby for read and write operations? A. Yes B. Only with MSSQL based RDS C. Only for Oracle RDS instances D. No

D The AWS documentation clearly states that you cannot use the standby DB instance for read or write operations. Here is an overview of Multi-AZ RDS deployments: Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby (or to a read replica in the case of Amazon Aurora), so that you can resume database operations as soon as the failover is complete. Since the endpoint for your DB Instance remains the same after a failover, your application can resume database operation without the need for manual administrative intervention.

You have instances running in your VPC, both production and development. You want to ensure that the people responsible for the development instances do not have access to the production instances, for better security. Using policies, which of the following would be the best way to accomplish this? Choose the correct answer from the options given below A. Launch the test and production instances in separate VPCs and use VPC peering B. Create an IAM policy with a condition which allows access to only instances that are used for production or development C. Launch the test and production instances in different Availability Zones and use Multi Factor Authentication D. Define tags on the test and production servers and add a condition to the IAM policy which allows access to specific tags

D You can add tags that identify which instances are production and which are development, and then reference those tags in the conditions of an IAM policy to control access.
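A sketch of such a tag-conditioned policy; the tag key `Environment` and value `Development` are illustrative choices, not fixed names:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["ec2:StartInstances", "ec2:StopInstances", "ec2:RebootInstances"],
    "Resource": "arn:aws:ec2:*:*:instance/*",
    "Condition": {
      "StringEquals": { "ec2:ResourceTag/Environment": "Development" }
    }
  }]
}
```

Attached to the development team's IAM group, this allows the listed actions only on instances tagged `Environment=Development`, so production instances (tagged differently) are out of reach.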

A customer is hosting their company website on a cluster of web servers that are behind a public-facing load balancer. The customer also uses Amazon Route 53 to manage their public DNS. How should the customer configure the DNS zone apex record to point to the load balancer? A. Create an A record pointing to the IP address of the load balancer B. Create a CNAME record pointing to the load balancer DNS name. C. Create an alias for CNAME record to the load balancer DNS name. D. Create an A record aliased to the load balancer DNS name

D Alias resource record sets are virtual records that work like CNAME records. But they differ from CNAME records in that they are not visible to resolvers. Resolvers only see the A record and the resulting IP address of the target record. As such, unlike CNAME records, alias resource record sets can be used to configure a zone apex (also known as a root domain or naked domain) in a dynamic environment. When you create the record set in your hosted zone that points to the load balancer, select 'Yes' for the Alias option, and then choose the Elastic Load Balancer you have defined in AWS.
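The same record can be created programmatically. A sketch of the change batch for the Route 53 ChangeResourceRecordSets API; the domain, DNS name, and hosted zone ID are placeholder values:

```json
{
  "Changes": [{
    "Action": "CREATE",
    "ResourceRecordSet": {
      "Name": "example.com.",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "Z2P70J7EXAMPLE",
        "DNSName": "my-load-balancer-1234567890.us-east-1.elb.amazonaws.com",
        "EvaluateTargetHealth": false
      }
    }
  }]
}
```

Note the record Type is A with an AliasTarget, not CNAME; the HostedZoneId here is the load balancer's hosted zone, not your own.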

You have started a new role as a solutions architect for an architectural firm that designs large skyscrapers in the Middle East. Your company hosts large volumes of data and has about 250 TB of data on internal servers. They have decided to store this data on S3 due to the redundancy it offers. The company currently has a telecoms line of 2 Mbps connecting their head office to the internet. What method should they use to import this data into S3 in the fastest manner possible? A. Upload it directly to S3 B. Purchase an AWS Direct Connect connection and transfer the data over it once it is installed. C. AWS Data Pipeline D. AWS Snowball

D The AWS documentation mentions the following: Snowball is a petabyte-scale data transport solution that uses secure appliances to transfer large amounts of data into and out of the AWS cloud. Using Snowball addresses common challenges with large-scale data transfers including high network costs, long transfer times, and security concerns. Transferring data with Snowball is simple, fast, secure, and can be as little as one-fifth the cost of high-speed Internet.
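To see why a 2 Mbps line rules out a direct upload, a quick back-of-the-envelope calculation:

```python
# Rough transfer-time estimate for 250 TB over a 2 Mbps link.
# Assumes decimal units (1 TB = 10**12 bytes) and 100% line
# utilization, so the real figure would be even worse.

data_bits = 250 * 10**12 * 8      # 250 TB expressed in bits
line_bps = 2 * 10**6              # 2 Mbps

seconds = data_bits / line_bps
years = seconds / (365 * 24 * 3600)
print(round(years, 1))            # roughly 31.7 years
```

At over three decades of continuous transfer, shipping physical Snowball appliances is clearly the only practical option.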

A customer is running a multi-tier web application farm in a virtual private cloud (VPC) that is not connected to their corporate network. They are connecting to the VPC over the Internet to manage all of their Amazon EC2 instances running in both the public and private subnets. They have only authorized the bastion-security-group with Microsoft Remote Desktop Protocol (RDP) access to the application instance security groups, but the company wants to further limit administrative access to all of the instances in the VPC. Which of the following Bastion deployment scenarios will meet this requirement? A. Deploy a Windows Bastion host on the corporate network that has RDP access to all instances in the VPC. B. Deploy a Windows Bastion host with an Elastic IP address in the public subnet and allow SSH access to the bastion from anywhere. C. Deploy a Windows Bastion host with an Elastic IP address in the private subnet, and restrict RDP access to the bastion from only the corporate public IP addresses. D. Deploy a Windows Bastion host with an Elastic IP address in the public subnet and allow RDP access to bastion only from corporate IP addresses.

D The bastion host should be in a public subnet with either a public or Elastic IP and should only allow RDP access from the corporate network's IP addresses. A bastion host is a special-purpose computer on a network specifically designed and configured to withstand attacks. The computer generally hosts a single application, for example a proxy server, and all other services are removed or limited to reduce the threat to the computer. In AWS, a bastion host is kept in a public subnet. Users log on to the bastion host via SSH or RDP and then use that session to manage other hosts in the private subnets. This is a security practice adopted by many organizations to secure the assets in their private subnets.
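A sketch of the bastion's security group in CloudFormation JSON; the corporate address 203.0.113.10 is a documentation-range placeholder, and the resource names are illustrative:

```json
{
  "BastionSecurityGroup": {
    "Type": "AWS::EC2::SecurityGroup",
    "Properties": {
      "GroupDescription": "RDP to bastion from corporate IPs only",
      "VpcId": { "Ref": "MyVPC" },
      "SecurityGroupIngress": [{
        "IpProtocol": "tcp",
        "FromPort": 3389,
        "ToPort": 3389,
        "CidrIp": "203.0.113.10/32"
      }]
    }
  }
}
```

Only RDP (TCP 3389) from the single corporate address is allowed in; the application instances' security groups then authorize access only from this bastion security group.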

Your company is concerned with EBS volume backup on Amazon EC2 and wants to ensure they have proper backups and that the data is durable. What solution would you implement and why? Choose the correct answer from the options below A. Configure Amazon Storage Gateway with EBS volumes as the data source and store the backups on premise through the storage gateway B. Write a cronjob on the server that compresses the data that needs to be backed up using gzip compression, then use AWS CLI to copy the data into an S3 bucket for durability C. Use a lifecycle policy to back up EBS volumes stored on Amazon S3 for durability D. Write a cronjob that uses the AWS CLI to take a snapshot of production EBS volumes. The data is durable because EBS snapshots are stored on the Amazon S3 standard storage class

D You can take snapshots of EBS volumes, and you can use the CLI to automate the process. The snapshots are automatically stored on S3 for durability.
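A sketch of the cron approach, assuming the AWS CLI is installed on the server and using a hypothetical volume ID:

```
# crontab entry: snapshot the production data volume every night at 01:00
0 1 * * * /usr/local/bin/aws ec2 create-snapshot \
    --volume-id vol-0abcd1234 \
    --description "Nightly backup of production data volume"
```

Each run creates a point-in-time snapshot; snapshots are incremental, so only blocks changed since the previous snapshot are stored.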

Which of the following features can be used to capture information about outgoing and incoming IP traffic from network interfaces in a VPC? A. AWS CloudWatch B. AWS EC2 C. AWS SQS D. AWS VPC Flow Logs

D. AWS VPC Flow Logs VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. Flow log data is stored using Amazon CloudWatch Logs. After you've created a flow log, you can view and retrieve its data in Amazon CloudWatch Logs.
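Flow log records are space-separated text lines. A minimal parser for the default (version 2) record format; the sample record below is fabricated for illustration:

```python
# Field order of the default version-2 VPC Flow Log record format.
FIELDS = [
    "version", "account_id", "interface_id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log_status",
]

def parse_flow_log(record: str) -> dict:
    """Split a default-format flow log line into named fields."""
    return dict(zip(FIELDS, record.split()))

sample = ("2 123456789010 eni-abc123de 172.31.16.139 172.31.16.21 "
          "20641 22 6 20 4249 1418530010 1418530070 ACCEPT OK")

rec = parse_flow_log(sample)
print(rec["action"], rec["dstport"])   # ACCEPT 22
```

Here the record shows 20 packets of SSH traffic (destination port 22, protocol 6 = TCP) that the security rules accepted.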

You keep on getting an error while trying to attach an Internet Gateway to a VPC. What is the most likely cause of the error? A. You need to have a customer gateway defined first before attaching an internet gateway B. You need to have a public subnet defined first before attaching an internet gateway C. You need to have a private subnet defined first before attaching an internet gateway D. An Internet gateway is already attached to the VPC

D. An Internet gateway is already attached to the VPC You can only have one internet gateway attached to your VPC at one time, hence the error must be coming because there is already an internet gateway attached.

There are multiple issues reported from an EC2 instance, and it is therefore required to analyze the log files. What can be used in AWS to store and analyze the log files? A. SQS B. S3 C. CloudTrail D. CloudWatch Logs

D. CloudWatch Logs You can use Amazon CloudWatch Logs to monitor, store, and access your log files from Amazon Elastic Compute Cloud (Amazon EC2) instances, AWS CloudTrail, and other sources. You can then retrieve the associated log data from CloudWatch Logs. To enable CloudWatch Logs, follow the steps below: Step 1) Go to the CloudWatch section and click on Logs. Step 2) Once you create a log group, configure your EC2 instance to send logs to this log group.
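On the instance side, the CloudWatch Logs agent reads its targets from an awslogs.conf file. A sketch of one log-file section; the log group name is an illustrative choice:

```ini
[/var/log/messages]
file = /var/log/messages
log_group_name = my-ec2-log-group
log_stream_name = {instance_id}
datetime_format = %b %d %H:%M:%S
```

The `{instance_id}` placeholder gives each instance its own log stream within the shared group, which keeps the logs from a fleet separable.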

You have an EC2 Instance in a particular region. This EC2 Instance has preconfigured software running on it. You have been requested to create a disaster recovery solution in case the instance in the region fails. Which of the following is the best solution? A. Create a duplicate EC2 Instance in another AZ. Keep it in the shutdown state. When required, bring it back up. B. Backup the EBS data volume. If the instance fails, bring up a new EC2 instance and attach the volume. C. Store the EC2 data on S3. If the instance fails, bring up a new EC2 instance and restore the data from S3. D. Create an AMI of the EC2 Instance and copy it to another region

D. Create an AMI of the EC2 Instance and copy it to another region You can copy an Amazon Machine Image (AMI) within or across an AWS region using the AWS Management Console, the AWS command line tools or SDKs, or the Amazon EC2 API, all of which support the CopyImage action. You can copy both Amazon EBS-backed AMIs and instance store-backed AMIs. You can copy AMIs with encrypted snapshots and encrypted AMIs. Copying a source AMI results in an identical but distinct target AMI with its own unique identifier. In the case of an Amazon EBS-backed AMI, each of its backing snapshots is, by default, copied to an identical but distinct target snapshot. Option A is invalid because it is a maintenance overhead to maintain another non-running instance. Option B is invalid because the preconfigured software could have settings on the root volume. Option C is invalid because this is a long and inefficient way to restore a failed instance.

A company wants to make use of serverless code. Which service in AWS provides such a facility? A. SQS B. Cloudfront C. EC2 D. Lambda

D. Lambda AWS Lambda is a compute service that lets you run code without provisioning or managing servers. AWS Lambda executes your code only when needed and scales automatically, from a few requests per day to thousands per second. You pay only for the compute time you consume; there is no charge when your code is not running.
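A minimal Python Lambda handler as a sketch; the event shape is whatever the invoking service sends, and the `name` key here is a hypothetical payload field:

```python
import json

def lambda_handler(event, context):
    """Minimal handler: return a greeting built from the event payload."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Local invocation for illustration; in AWS, the Lambda runtime
# calls the handler with the event and a context object.
print(lambda_handler({"name": "Alice"}, None))
```

In AWS you would only upload the function body; the service invokes `lambda_handler` per request and bills for the execution time used.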

Which of the following features can be used to move objects from S3 standard storage to Amazon Glacier? A. S3 Events B. Object Versioning C. Storage Class D. Lifecycle policies

D. Lifecycle policies Lifecycle configuration enables you to specify the lifecycle management of objects in a bucket. The configuration is a set of one or more rules, where each rule defines an action for Amazon S3 to apply to a group of objects. These actions can be classified as follows: Transition actions, in which you define when objects transition to another storage class. For example, you may choose to transition objects to the STANDARD_IA (IA, for infrequent access) storage class 30 days after creation, or archive objects to the GLACIER storage class one year after creation. Expiration actions, in which you specify when the objects expire. Amazon S3 then deletes the expired objects on your behalf.
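A sketch of a lifecycle configuration combining both action types; the `logs/` prefix and the day counts are illustrative:

```json
{
  "Rules": [{
    "ID": "archive-then-expire",
    "Status": "Enabled",
    "Filter": { "Prefix": "logs/" },
    "Transitions": [
      { "Days": 30, "StorageClass": "STANDARD_IA" },
      { "Days": 365, "StorageClass": "GLACIER" }
    ],
    "Expiration": { "Days": 730 }
  }]
}
```

Objects under `logs/` move to infrequent access after 30 days, archive to Glacier after a year, and are deleted after two years, all without any application code.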

While performing status checks on your volume in AWS, you can see that the volume check has a status of "insufficient-data". What can you derive from this status check? A. All checks have passed B. Only a particular check has failed C. All checks have failed D. The check on the volume is still in progress.

D. The check on the volume is still in progress. Volume status checks enable you to better understand, track, and manage potential inconsistencies in the data on an Amazon EBS volume. They are designed to provide you with the information that you need to determine whether your Amazon EBS volumes are impaired, and to help you control how a potentially inconsistent volume is handled. If the status is insufficient-data, the checks may still be in progress on the volume. Option A is incorrect because if all checks have passed, then the status of the volume is OK. Options B and C are incorrect because if a check fails, then the status of the volume is impaired.

Your company's VPC needs to communicate with another company's VPC within the same AWS region. What can be used from AWS to interface between the two VPCs? A. VPC Connection B. VPN Connection C. Direct Connect D. VPC Peering

D. VPC Peering A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IP addresses. Instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC peering connection between your own VPCs, or with a VPC in another AWS account within a single region. Note that VPC peering is not transitive: if VPC A is peered with both VPC B and VPC C, VPC B still cannot communicate with VPC C, because there is no peering between them.

