Certified Solutions Architect - Associate

You are deploying an application to collect votes for a very popular television show. Millions of users will submit votes using mobile devices. The votes must be collected into a durable, scalable, and highly available data store for real-time public tabulation. Which service should you use? Please select : A. Amazon DynamoDB B. Amazon Redshift C. Amazon Kinesis D. Amazon Simple Queue Service

Answer: A Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. Amazon DynamoDB enables customers to offload the administrative burdens of operating and scaling distributed databases to AWS, so they don't have to worry about hardware provisioning, setup and configuration, replication, software patching, or cluster scaling. DynamoDB is a durable, scalable, and highly available data store in AWS and can be used for real-time tabulation. Option B is wrong because Redshift is a petabyte-scale data warehouse used where an OLAP solution is required. Option C is wrong because Kinesis is used for processing streams, not for storage. Option D is wrong because SQS is a decoupling solution.
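As a rough illustration (not part of the original question), the following minimal boto3 sketch shows how a vote-collection backend could write each vote to DynamoDB; the table name, key schema, and region are hypothetical assumptions.

```python
import boto3

# Minimal sketch: record a single vote in DynamoDB.
# The table "tv-show-votes" with partition key "user_id" is a hypothetical example.
dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
votes = dynamodb.Table("tv-show-votes")

def record_vote(user_id: str, contestant: str) -> None:
    # Each vote is a small item; DynamoDB scales writes by partitioning on the key.
    votes.put_item(Item={"user_id": user_id, "contestant": contestant})

record_vote("user-123", "contestant-7")
```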

What are bastion hosts? Please select: A. They are instances in the public subnet which are used as a jump server to resources within other subnets B. They are instances in the private subnet which are used as a jump server to resources within other subnets C. They are instances in the public subnet which are used to host web resources that can be accessed by users D. They are instances in the private subnet which are used to host web resources that can be accessed by users

Answer: A As the number of EC2 instances in your AWS environment grows, so too does the number of administrative access points to those instances. Depending on where your administrators connect to your instances from, you may consider enforcing stronger network-based access controls. A best practice in this area is to use a bastion. A bastion is a special-purpose server instance that is designed to be the primary access point from the internet and acts as a proxy to your other EC2 instances. Option B is invalid because bastion hosts need to be in the public subnet. Options C and D are invalid because bastion hosts are not used to host web resources.

When reviewing the Auto Scaling events, it is noticed that an application is scaling up and down multiple times within the hour. What design change could you make to optimize cost while preserving elasticity? Choose the correct answer from the options below Please select: A. Change the scale down CloudWatch metric to a higher threshold B. Increase the instance type in the launch configuration C. Increase the base number of Auto Scaling instances for the Auto Scaling group D. Add provisioned IOPS to the instances

Answer: A If the scale-down threshold is set too close to the scale-up threshold, the instances will keep scaling down and back up rapidly. Hence it is best to set an optimal threshold for the metrics defined in CloudWatch.

As a system administrator, you have been requested to implement the best practices for using Auto Scaling, SQS and EC2. Which of the following items is not a best practice? Please select : A. Use the same AMI across all regions B. Utilize Auto Scaling to deploy new EC2 instances if the SQS queue grows too large C. Utilize CloudWatch alarms to alert when the number of messages in the SQS queue grows too large D. Utilize an IAM role to grant EC2 instances permission to modify the SQS queue

Answer: A AMIs differ from region to region, hence using the same AMI across all regions is not a best practice. You need to copy the AMI from region to region if you want to implement disaster recovery as a best practice.

You are deploying an application to track GPS coordinates of delivery trucks in the United States. Coordinates are transmitted from each delivery truck once every three seconds. You need to design an architecture that will enable real-time processing of these coordinates from multiple consumers. Which service should you use to implement data ingestion? Please select : A. Amazon Kinesis B. AWS Data Pipeline C. Amazon AppStream D. Amazon Simple Queue Service

Answer: A Use Amazon Kinesis Streams to collect and process large streams of data records in real time. You'll create data-processing applications, known as Amazon Kinesis Streams applications. A typical Amazon Kinesis Streams application reads data from an Amazon Kinesis stream as data records. These applications can use the Amazon Kinesis Client Library, and they can run on Amazon EC2 instances. The processed records can be sent to dashboards, used to generate alerts, dynamically change pricing and advertising strategies, or send data to a variety of other AWS services
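To make the ingestion side concrete, here is a minimal boto3 sketch (an assumption for illustration, not part of the question) that a truck-side producer could use to push coordinates into a Kinesis stream; the stream name and region are hypothetical.

```python
import json
import boto3

# Minimal sketch: send one GPS reading to a Kinesis stream (stream name is hypothetical).
kinesis = boto3.client("kinesis", region_name="us-east-1")

def send_position(truck_id: str, lat: float, lon: float) -> None:
    kinesis.put_record(
        StreamName="truck-coordinates",
        Data=json.dumps({"truck_id": truck_id, "lat": lat, "lon": lon}).encode("utf-8"),
        PartitionKey=truck_id,  # readings from one truck stay ordered on one shard
    )

send_position("truck-42", 38.8977, -77.0365)
```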

When you put objects in Amazon S3, what is the indication that an object was successfully stored? Please select : A. HTTP 200 result code and MD5 checksum, taken together, indicate that the operation was successful. B. Amazon S3 is engineered for 99.999999999% durability. Therefore there is no need to confirm that data was inserted. C. A success code is inserted into the S3 object metadata. D. Each S3 account has a special bucket named _s3_logs. Success codes are written to this bucket with a timestamp and checksum.

Answer: A When an object is placed in S3, it is done via an HTTP POST or PUT request. On success, you get an HTTP 200 response. But since a 200 response can also contain error information, checking the MD5 checksum confirms whether the request was actually successful.
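As a small illustrative sketch (assuming a hypothetical bucket name), the check can be done by comparing a locally computed MD5 with the ETag returned by the PUT; note the ETag equals the plain MD5 only for single-part uploads without SSE-KMS.

```python
import hashlib
import boto3

# Minimal sketch: verify an S3 PUT by comparing the returned ETag with a local MD5.
# Caveat: the ETag equals the plain MD5 only for single-part, non-SSE-KMS uploads.
s3 = boto3.client("s3")
body = b"hello world"

local_md5 = hashlib.md5(body).hexdigest()
response = s3.put_object(Bucket="my-example-bucket", Key="greeting.txt", Body=body)

# An HTTP 200 surfaces here as a normal return value; a checksum mismatch means corruption.
returned_etag = response["ETag"].strip('"')
assert returned_etag == local_md5, "upload corrupted in transit"
```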

Regarding attaching an ENI to an instance, what does 'warm attach' refer to? Please select : A. Attaching an ENI to an instance when it is stopped. B. Attaching an ENI to an instance during the launch process C. Attaching an ENI to an instance when it is running

Answer: A You can attach an elastic network interface to an instance when it's running (hot attach), when it's stopped (warm attach), or when the instance is being launched (cold attach). An elastic network interface (ENI) is a virtual network interface that you can attach to an instance in a VPC. An elastic network interface can have the following attributes:
* A primary private IP address.
* One or more secondary private IP addresses.
* One Elastic IP address per private IP address.
* One public IP address, which can be auto-assigned to the elastic network interface for eth0 when you launch an instance. For more information, see Public IP Addresses for Network Interfaces.
* One or more security groups.
* A MAC address.
* A source/destination check flag.
* A description.

Which of the following criteria must be met when attaching an EC2 instance to an existing Auto Scaling group? Select 3 options. Please select: A. The instance is in the running state B. The AMI used to launch the instance must still exist C. The instance is not a member of another Auto Scaling group D. They should have the same private key

Answer: A, B, C Auto Scaling provides you with an option to enable Auto Scaling for one or more EC2 instances by attaching them to your existing Auto Scaling group. After the instances are attached, they become a part of the Auto Scaling group. The instance that you want to attach must meet the following criteria:
* The instance is in the running state
* The AMI used to launch the instance must still exist
* The instance is not a member of another Auto Scaling group
* The instance is in the same Availability Zone as the Auto Scaling group
* If the Auto Scaling group has an attached load balancer, the instance and the load balancer must both be in EC2-Classic or the same VPC. If the Auto Scaling group has an attached target group, the instance and the Application Load Balancer must both be in the same VPC
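For illustration only, a minimal boto3 sketch of attaching a running instance to an existing group; the instance ID and group name are hypothetical.

```python
import boto3

# Minimal sketch: attach a running EC2 instance to an existing Auto Scaling group.
autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.attach_instances(
    InstanceIds=["i-0123456789abcdef0"],  # hypothetical instance ID
    AutoScalingGroupName="web-tier-asg",  # hypothetical group name
)
# After this call, the instance counts toward the group's desired capacity.
```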

Which of the following databases support the read replica feature? Please select: A. MySQL B. MariaDB C. PostgreSQL D. Oracle

Answer: A, B, and C (MySQL, MariaDB, PostgreSQL) Read Replicas are available in Amazon RDS for MySQL, MariaDB, and PostgreSQL. When you create a read replica, you specify an existing DB Instance as the source. Amazon RDS takes a snapshot of the source instance and creates a read-only instance from the snapshot. For MySQL, MariaDB and PostgreSQL, Amazon RDS uses those engines' native asynchronous replication to update the read replica whenever there is a change to the source DB instance. The read replica operates as a DB instance that allows only read-only connections; applications can connect to a read replica just as they would to any DB instance. Amazon RDS replicates all databases in the source DB instance.

Which of the following are characteristics of a standard reserved instance? Choose 3 answers Please select : A. It can be migrated across Availability Zones B. It is specific to an Amazon Machine Image (AMI) C. It can be applied to instances launched by Auto Scaling D. It is specific to an instance Type E. It can be used to lower Total Cost of Ownership (TCO) of a system

Answer: A, C and E. Option A is correct because you can migrate Reserved Instances between AZs. Option D is incorrect because a standard Reserved Instance is specific to an instance family, but the instance type within that family can be changed; when you create a Reserved Instance you can also see the instance type as an option. Option E is correct because Reserved Instances can be used to lower costs. Reserved Instances provide you with a discount on usage of EC2 instances, and a capacity reservation when they are applied to a specific Availability Zone, giving you additional confidence that you will be able to launch the instances you have reserved when you need them.

An instance can be in many states as part of its lifecycle. Choose 3 options which are correct states of an instance lifecycle. Please select: A. rebooting B. pending C. running D. shutdown

Answer: A, B, and C (rebooting, pending, running)

Which of the below elements can you manage in the IAM dashboard? Choose 3 answers from the options given below Please select: A. Users B. Encryption Keys C. Cost Allocation Reports D. Policies

Answer: A, B, and D When you go to your IAM dashboard, the following elements can be configured: Dashboard, Groups, Users, Roles, Policies, Identity Providers, Account Settings, Credential Report.

How many types of block devices does Amazon EC2 Support? Choose one answer from the options below: Please select: A. 2 B. 3 C. 4 D. 1

Answer: A. 2 Amazon EC2 supports two types of block devices: 1) Instance store volumes (virtual devices whose underlying hardware is physically attached to the host computer for the instance) 2) EBS volumes (remote storage devices)

You are creating a Provisioned IOPS volume in AWS. The size of the volume is 8 GiB. Which of the following are possible values that can be set for the IOPS of the volume? Please select: A. 400 B. 500 C. 600 D. 1000

Answer: A. 400 The maximum ratio of IOPS to volume size is 50:1. So if the volume size is 8 GiB, the maximum IOPS of the volume can be 400. If you go beyond this value you will get an error.

A customer is looking for a hybrid cloud solution and learns about AWS Storage Gateway. What is the main use case of AWS Storage Gateway? Please select: A. It allows you to integrate on-premises IT environments with Cloud Storage B. A direct encrypted connection to S3 C. It's a backup solution that provides on-premises Cloud storage D. It provides an encrypted SSL endpoint for backups in the Cloud

Answer: A. It allows you to integrate on-premises IT environments with Cloud Storage Option B is wrong because Storage Gateway is not simply an encrypted connection to S3. Option C is wrong because you can use S3 as a backup solution. Option D is wrong because an SSL endpoint can be achieved via S3. The AWS Storage Gateway's software appliance is available for download as a virtual machine image that you install on a host in your data center. Once you've installed your gateway and associated it with your AWS account through the activation process, you can use the AWS Console to create gateway-cached volumes, gateway-stored volumes, or a gateway-virtual tape library, which can be mounted as iSCSI devices by your on-premises applications. You have primarily 2 types of volumes: 1) Gateway-cached volumes allow you to utilize S3 for your primary data, while retaining some portion of it locally in a cache for frequently accessed data 2) Gateway-stored volumes store your primary data locally, while asynchronously backing up that data to AWS

Which of the following is mandatory when defining a CloudFormation template? Please select: A. Resources B. Parameters C. Outputs D. Mappings

Answer: A. Resources Resources - Specifies the stack resources and their properties, such as an Elastic Compute Cloud instance or an Amazon Simple Storage Service bucket. You can refer to resources in the Resources and Outputs sections of the template

What can be used from AWS to import existing Virtual Machine images into AWS? Please select: A. VM Import/Export B. AWS Import/Export C. AWS Storage Gateway D. This is not possible in AWS

Answer: A. VM Import/Export VM Import/Export enables customers to import Virtual Machines (VM) images in order to create Amazon EC2 instances. Customers can also export previously imported EC2 instances to create VMs. Customers can use VM Import/Export to leverage their previous investments in building VMs by migrating their VMs to Amazon EC2.

What is the purpose of an SWF decision task? Choose the correct answer from the options below Please select : A. It tells the worker to perform a function. B. It tells the decider the state of the work flow execution. C. It defines all the activities in the workflow. D. It represents a single task in the workflow

Answer: B A decider is an implementation of the coordination logic of your workflow type that runs during the execution of your workflow. You can run multiple deciders for a single workflow type. Because the execution state for a workflow execution is stored in its workflow history, deciders can be stateless. Amazon SWF maintains the workflow execution history and provides it to a decider with each decision task

What is the service provided by AWS that allows developers to easily deploy and manage applications on the cloud? Please select : A. CloudFormation B. Elastic Beanstalk C. OpsWorks D. Container Service

Answer: B AWS Elastic Beanstalk makes it even easier for developers to quickly deploy and manage applications in the AWS Cloud. Developers simply upload their application, and Elastic Beanstalk automatically handles the deployment details of capacity provisioning, load balancing, auto-scaling, and application health monitoring.

You are designing a web application that stores static assets in an Amazon Simple Storage Service (S3) bucket. You expect this bucket to immediately receive over 150 PUT requests per second. What should you do to ensure optimal performance? Please select : A. Use multi-part upload. B. Add a random prefix to the key names. C. Amazon S3 will automatically manage performance at this scale. D. Use a predictable naming scheme, such as sequential numbers or date time sequences, in the key names

Answer: B If your workload in an Amazon S3 bucket routinely exceeds 100 PUT/LIST/DELETE requests per second or more than 300 GET requests per second, then you should follow some key-naming guidelines for your S3 bucket. One way to introduce randomness to key names is to add a hash string as a prefix to the key name. For example, you can compute an MD5 hash of the character sequence that you plan to assign as the key name and prepend a few characters of it, as in the sketch below.
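A minimal sketch of that hashing idea (the object name and prefix length are arbitrary examples):

```python
import hashlib

# Minimal sketch: derive a short hash prefix for an S3 key name to spread keys
# across partitions, as described above.
def randomized_key(original_name: str, prefix_len: int = 4) -> str:
    digest = hashlib.md5(original_name.encode("utf-8")).hexdigest()
    return f"{digest[:prefix_len]}-{original_name}"

print(randomized_key("2014-05-15-8323-photo1.jpg"))
# -> "<4 hash characters>-2014-05-15-8323-photo1.jpg"
```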

A customer wants to track access to their S3 buckets and also use this information for their internal security and access audits. Which of the following will meet the customer requirement? Please select: A. Enable AWS CloudTrail to audit all S3 bucket access B. Enable server access logging for all required Amazon S3 buckets C. Enable the Requester Pays option to track access via AWS Billing D. Enable S3 event notifications for Put and Post

Answer: B In order to track requests for access to your bucket, you can enable access logging. Each access log record provides details about a single access request, such as the requester, bucket name, request time, request action, response status, and error code, if any. Access log information can be useful in security and access audits.
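For illustration, a minimal boto3 sketch of enabling server access logging; both bucket names are hypothetical, and the target bucket must already allow log delivery.

```python
import boto3

# Minimal sketch: enable server access logging on a bucket, delivering the logs
# to a separate log bucket (bucket names are hypothetical).
s3 = boto3.client("s3")

s3.put_bucket_logging(
    Bucket="company-data",
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "company-data-access-logs",
            "TargetPrefix": "company-data/",
        }
    },
)
```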

A customer wants to track access to their Amazon Simple Storage Service (S3) buckets and also use this information for their internal security and access audits. Which of the following will meet the Customer requirement? Please select : A. Enable AWS CloudTrail to audit all Amazon S3 bucket access. B. Enable server access logging for all required Amazon S3 buckets. C. Enable the Requester Pays option to track access via AWS Billing D. Enable Amazon S3 event notifications for Put and Post.

Answer: B Logging provides a way to get detailed access logs delivered to a bucket you choose. An access log record contains details about the request, such as the request type, the resources specified in the request, and the time and date the request was processed. Since you don't need logging of every AWS service, there is no need for CloudTrail; hence you can rule out Option A. Option C is not valid because that refers to billing. Option D is invalid because event notifications are different from logging.

In order to establish a successful site-to-site VPN connection from your on-premise network to the VPC, which of the following needs to be configured outside of the VPC? Choose the correct answer from the options below. Please select: A. The main route table to route traffic through a NAT instance B. A public IP address on the customer gateway for the on-premise network C. A dedicated NAT instance in a public subnet D. An Elastic IP address to the Virtual Private Gateway

Answer: B On the customer gateway side you need to have a public IP address which can be addressed by the VPN connection

You have an environment that consists of a public subnet using Amazon VPC and 3 instances that are running in this subnet. These three instances can successfully communicate with other hosts on the Internet. You launch a fourth instance in the same subnet, using the same AMI and security group configuration you used for the others, but find that this instance cannot be accessed from the internet. What should you do to enable Internet access? Please select : A. Deploy a NAT instance into the public subnet. B. Assign an Elastic IP address to the fourth instance. C. Configure a publically routable IP Address in the host OS of the fourth instance. D. Modify the routing table for the public subnet.

Answer: B Option A is wrong because the instances are already in a public subnet; only when your instances are in a private subnet do you have to configure a NAT instance. Option C is wrong because the public IP address has to be configured in AWS and not in the host OS of the EC2 instance. Option D is wrong because if the routing table were wrong then you would have an issue with the other 3 instances as well, and the question says that there is no issue with the other instances.

There is a requirement for a user to modify the configuration of one of your Elastic Load Balancers (ELB). This access is just required one time only. Which of the following choices would be the best way to allow this access? Please select : A. Open up whichever port ELB uses in a security group and give the user access to that security group via a policy B. Create an IAM Role and attach a policy allowing modification access to the ELB C. Create a new IAM user who only has access to the ELB resources and delete that user when the work is completed. D. Give them temporary access to the root account for 12 hours only and change the password once the activity is completed

Answer: B The best practice for IAM is to create roles which have specific access to an AWS service and then give the user permission to the AWS service via the role. To get the role in place, follow the steps below: Step 1) Create a role which has the required ELB access. Step 2) Provide permissions to the underlying EC2 instances in the Elastic Load Balancer.

You have an application running on an Amazon Elastic Compute Cloud instance that uploads 5 GB video objects to Amazon Simple Storage Service (S3). Video uploads are taking longer than expected, resulting in poor application performance. Which method will help improve performance of your application? Please select : A. Enable enhanced networking B. Use Amazon S3 multipart upload C. Leveraging Amazon CloudFront, use the HTTP POST method to reduce latency. D. Use Amazon Elastic Block Store Provisioned IOPs and use an Amazon EBS-optimized instance

Answer: B When uploading large videos it's always better to make use of AWS multipart upload. If you are using multipart upload for S3, then you can resume on failure. Below are the advantages of multipart upload: 1) Improved throughput - you can upload parts in parallel to improve throughput. 2) Quick recovery from any network issues - smaller part size minimizes the impact of restarting a failed upload due to a network error. 3) Pause and resume object uploads - you can upload object parts over time. Once you initiate a multipart upload there is no expiry; you must explicitly complete or abort the multipart upload. 4) Begin an upload before you know the final object size - you can upload an object as you are creating it.
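A minimal boto3 sketch of a multipart upload (file path, bucket, and tuning values are hypothetical); boto3 splits the file into parts and uploads them in parallel once it crosses the threshold.

```python
import boto3
from boto3.s3.transfer import TransferConfig

# Minimal sketch: upload a large video using multipart upload.
s3 = boto3.client("s3")

config = TransferConfig(
    multipart_threshold=8 * 1024 * 1024,   # switch to multipart above 8 MB
    multipart_chunksize=64 * 1024 * 1024,  # 64 MB parts
    max_concurrency=8,                     # upload parts in parallel
)

s3.upload_file("video-5gb.mp4", "video-uploads", "videos/video-5gb.mp4", Config=config)
```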

When using the following AWS services, which should be implemented in multiple Availability Zones for high availability solutions? Please select: A. Amazon DynamoDB B. Amazon Elastic Compute Cloud (EC2) C. Amazon Elastic Load Balancing D. Amazon Simple Storage Service (S3)

Answer: B and C Having an ELB in front of EC2 instances provides high availability. You have the ELB placed in front of the instances and the instances are placed in different AZs.

You have several AWS Reserved Instances in your account. They have been running for some time, but now need to be shut down since they are no longer required. The data is still required for future purposes. Which 2 of the below steps can be taken? Please select: A. Convert the instances to On-Demand Instances B. Sell the instances on the AWS Reserved Instance Marketplace C. Take snapshots of the EBS volumes and terminate the instances D. Convert the instances to Spot Instances

Answer: B and C The Reserved Instance Marketplace is a platform that supports the sale of third-party and AWS customers' unused Standard Reserved Instances, which vary in term lengths and pricing options. For example, you may want to sell Reserved Instances after moving instances to a new AWS region, changing to a new instance type, ending projects before the term expiration, when your business needs change, or if you have unneeded capacity. Since the data is still required, it's better to take snapshots of the existing volumes and then terminate the instances. Options A and D are invalid because you cannot convert Reserved Instances to either On-Demand or Spot Instances.

You have an EC2 instance in a particular region. The EC2 instance has pre-configured software running on it. You have been requested to create a disaster recovery solution in case the instance in the region fails. Which of the following is the best solution? Please select: A. Create a duplicate EC2 instance in another AZ. Keep it in the shutdown state. When required, bring it back up B. Backup the EBS data volume. If the instance fails, bring up a new EC2 instance and attach the volume C. Store the EC2 data on S3. If the instance fails, bring up a new EC2 instance and restore the data from S3 D. Create an AMI of the EC2 instance and copy it to another region

Answer: D. Create an AMI of the EC2 instance and copy it to another region You can copy an AMI within or across an AWS region using the AWS Management Console, the CLI or SDKs, or the EC2 API, all of which support the CopyImage action. You can copy both Amazon EBS-backed AMIs and instance store-backed AMIs. You can copy AMIs with encrypted snapshots and encrypted AMIs. Copying a source AMI results in an identical but distinct target AMI with its own unique identifier. In the case of an EBS-backed AMI, each of its backing snapshots is by default copied to an identical but distinct target snapshot.
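For illustration, a minimal boto3 sketch of copying an AMI to another region for DR (the AMI ID, name, and regions are hypothetical); note that copy_image is called in the destination region.

```python
import boto3

# Minimal sketch: copy an AMI from us-east-1 into us-west-2 as a DR copy.
ec2_west = boto3.client("ec2", region_name="us-west-2")  # destination region

response = ec2_west.copy_image(
    Name="app-server-dr-copy",
    SourceImageId="ami-0123456789abcdef0",  # hypothetical source AMI
    SourceRegion="us-east-1",
)
print("New AMI in us-west-2:", response["ImageId"])
```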

A company has a workflow that sends video files from their on-premise system to AWS for transcoding. They use EC2 worker instances that pull transcoding jobs from SQS. Why is SQS an appropriate service for this scenario? Please select: A. SQS guarantees the order of the messages B. SQS synchronously provides transcoding output C. SQS checks the health of the worker instances D. SQS helps to facilitate horizontal scaling of encoding tasks

Answer: D. SQS helps to facilitate horizontal scaling of encoding tasks

A company has the following EC2 instance configuration. They are trying to connect to the instance from the internet. They have verified the existence of the Internet gateway and the route tables are in place. What could be the issue? Please Select: A. It's launched in the wrong AZ B. The AMI used to launch the instance cannot be accessed from the internet C. The private IP is wrongly assigned D. There is no Elastic IP Assigned

Answer: D. There is no Elastic IP Assigned An instance must either have a public or Elastic IP in order to be accessible from the internet. A public IP address is reachable from the Internet. You can use public IP addresses for communication between your instances and the Internet. An Elastic IP address is a static IP address designed for dynamic cloud computing. An Elastic IP address is associated with your AWS account. With an Elastic IP address, you can mask the failure of an instance or software by rapidly remapping the address to another instance in your account. An Elastic IP address is a public IP address, which is reachable from the internet. If your instance does not have a public IP address, you can associate an Elastic IP address with your instance to enable communication with the internet.

A photo-sharing service stores pictures in S3 and allows application sign-in using an OpenID Connect-compatible identity provider. Which AWS Security Token Service approach to temporary access should you use for the S3 operations? Please Select: A. SAML-based Identity Federation B. Cross-Account Access C. AWS Identity and Access Management Roles D. Web Identity Federation

Answer: D. Web Identity Federation With web identity federation, you don't need to create custom sign-in code or manage your own user identities. Instead, users of your app can sign in using a well-known identity provider - such as Login with Amazon, Facebook, Google, or any other OpenID Connect-compatible identity provider - receive an authentication token, and then exchange that token for temporary security credentials in AWS that map to an IAM role with permissions to use the resources in your AWS account. Using an identity provider helps you keep your AWS account secure, because you don't have to embed and distribute long-term security credentials with your application.
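A minimal sketch of the token exchange with boto3 (the role ARN and token placeholder are hypothetical): the app trades the provider's OIDC token for temporary credentials and then calls S3 with them.

```python
import boto3

# Minimal sketch: exchange an OpenID Connect token for temporary AWS credentials
# scoped to an IAM role. Role ARN and token are hypothetical placeholders.
sts = boto3.client("sts")

creds = sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam::123456789012:role/photo-app-s3-access",
    RoleSessionName="mobile-user-session",
    WebIdentityToken="<token returned by the OIDC provider>",
)["Credentials"]

# Use the temporary keys for the S3 operations the role permits.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```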

Amazon Redshift uses which block size for its columnar storage? Please select: A. 2 KB B. 8 KB C. 16 KB D. 32 KB E. 1024 KB

Answer: E. 1024 KB Columnar storage for database tables is an important factor in optimizing analytic query performance because it drastically reduces the overall disk I/O requirements and reduces the amount of data you need to load from disk. Typical database block sizes range from 2 KB to 32 KB. Amazon Redshift uses a block size of 1 MB, which is more efficient and further reduces the number of I/O requests needed to perform any database loading or other operations that are part of query execution.

Which of the following is not supported by AWS Import/Export? Please select: A. Import to Amazon S3 B. Export from Amazon S3 C. Import to Amazon EBS D. Import to Amazon Glacier E. Export from Amazon Glacier

Answer: E. Export from Amazon Glacier AWS Import/Export accelerates transferring data between the AWS cloud and portable storage devices that you mail to us. AWS Import/Export is a good choice if you have 16 terabytes or less of data to import into Amazon Simple Storage Service or Amazon Elastic Block Store (EBS). You can also export data from Amazon S3 with AWS Import/Export

A customer needs corporate IT governance and cost oversight of all AWS resources consumed by its divisions. The divisions want to maintain administrative control of the discrete AWS resources they consume and keep those resources separate from the resources of other divisions. Which of the following options, when used together will support the autonomy/control of divisions while enabling corporate IT to maintain governance and cost oversight? Choose 2 answers from the options given below: Please Select: A. Use AWS Consolidated Billing and disable AWS root account access for the child accounts B. Enable IAM cross-account access for all corporate IT administrators in each child account C. Create separate VPCs for each division within the corporate IT AWS account D. Use AWS Consolidated Billing by creating AWS Organizations to link the divisions' accounts to a parent corporate account. E. Write all child AWS CloudTrail

Answer: B and D Since the resources need to be separated and a separate governance model is required for each section of resources, then it's better to have a separate AWS account for each division. Each division's AWS account can sign up for consolidated billing to the main corporate account by creating AWS Organizations. The IT administrators can then be granted access via cross account role access.

A company has configured and peered two VPCs: VPC-1 and VPC-2. VPC-1 contains only private subnets, and VPC-2 contains only public subnets. The company uses a single AWS Direct Connect connection and private virtual interface to connect their on-premises network with VPC-1. Which two methods increase the fault tolerance of the connection to VPC-1? (Choose two) Please select: A. Establish a hardware VPN over the internet between VPC-2 and the on-premises network B. Establish a hardware VPN over the internet between VPC-1 and the on-premises network C. Establish a new AWS Direct Connect connection and private virtual interface in the same region as VPC-2 D. Establish a new Direct Connect connection and private virtual interface in a different AWS region than VPC-1 E. Establish a new AWS Direct Connect connection and private virtual interface in the same AWS region as VPC-1

Answer: B and E Having a VPN connection is considered a backup to a Direct Connect connection. One can also have another Direct Connect connection, so that if one goes down, the other one would still be active. This needs to be in the same region as VPC-1.

What are some of the common causes why you cannot connect to a DB instance on AWS? Select all that apply. Please select: A. There is a read replica being created, hence you cannot connect B. The DB is still being created C. The local firewall is stopping the communication traffic D. The security groups for the DB are not properly configured

Answer: B, C, and D

Which of the following are use cases for DynamoDB? Choose 3 answers Please select: A. Storing BLOB data B. Managing Web Sessions C. Storing JSON documents D. Storing metadata for S3 objects E. Running relational joins and complex updates F. Storing large amounts of infrequently accessed data

Answer: B, C, and D Amazon DynamoDB stores structured data, indexed by primary key, and allows low-latency read and write access to items ranging from 1 byte up to 400 KB. Amazon S3 stores unstructured blobs and is suited for storing large objects up to 5 TB. DynamoDB is a good choice to store the metadata for a BLOB, such as name, date created, owner, etc. The Binary Large Object (BLOB) itself would be stored in S3.

For which of the following databases does Amazon RDS provide high availability and failover support using Amazon's failover technology for DB instances using Multi-AZ deployments? Select 3 options. Please select: A. SQL Server B. MySQL C. Oracle D. MariaDB

Answer: B, C, and D (MySQL, Oracle, MariaDB) Amazon RDS provides high availability and failover support for DB instances using Multi-AZ deployments. Multi-AZ deployments for Oracle, PostgreSQL, MySQL, and MariaDB DB instances use Amazon failover technology, while SQL Server DB instances use SQL Server Mirroring.

You are developing a highly available web application using stateless web servers. Which services are suitable for storing session state data? Choose 3 answers Please select: A. Amazon CloudWatch B. Amazon Relational Database Service (RDS) C. Elastic Load Balancing D. Amazon ElastiCache E. AWS Storage Gateway F. Amazon DynamoDB

Answer: B, D and F (RDS, ElastiCache, DynamoDB) Relational databases have always been a source for storing session data. Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory data store or cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory data stores, instead of relying entirely on slower disk-based databases.

In CloudWatch, what is the retention period for a one-minute data point? Choose the right answer from the options given below. Please select: A. 10 days B. 15 days C. 1 month D. 1 year

Answer: B. 15 days CloudWatch is a monitoring service for AWS cloud resources and the applications you run on AWS. You can use CloudWatch to collect and track metrics, collect and monitor log files, set alarms, and automatically react to changes in your AWS resources. CloudWatch can monitor AWS resources such as EC2 instances, DynamoDB tables, and RDS DB instances, as well as custom metrics generated by your applications and services, and any log files your applications generate. You can use CloudWatch to gain system-wide visibility into resource utilization, application performance, and operational health. CloudWatch metrics now support the following three retention schedules:
* 1 minute datapoints are available for 15 days
* 5 minute datapoints are available for 63 days
* 1 hour datapoints are available for 455 days

What is the maximum object size allowed for multipart file upload to S3? Please select : A. 10 TB B. 5 TB C. 1 TB D. 5 GB

Answer: B. 5 TB

What is the amount of temp space allocated to you when using Lambda functions per invocation? Please select: A. 256 MB B. 512 MB C. 2 GiB D. 16 GiB

Answer: B. 512 MB

Which of the following is an example of synchronous replication which occurs in an AWS service? Please select: A. AWS RDS Read Replicas for MySQL, MariaDB, and PostgreSQL B. AWS Multi-AZ RDS C. Redis engine for ElastiCache replication D. AWS RDS Read Replicas for Oracle

Answer: B. AWS Multi-AZ RDS RDS Multi-AZ deployments provide enhanced availability and durability for Database Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, RDS automatically creates a primary DB instance and synchronously replicates the data to a standby instance in a different AZ.

You work for a company that is deploying a hybrid cloud approach. Their legacy servers will remain on premise within their own datacenter; however, they will need to be able to communicate with the AWS environment over a site-to-site VPN connection. What do you need to do to establish the VPN connection? Please select: A. Connect to the environment using AWS Direct Connect B. Assign a static routable address to the customer gateway C. Create a dedicated NAT and deploy this to the public subnet D. Update your route table to add a route for the NAT to 0.0.0.0/0

Answer: B. Assign a static routable address to the customer gateway

In the event of an unplanned outage of your primary DB, AWS RDS automatically switches over to the secondary. In such a case which record in Route53 is changed? Select one answer from the options given below. Please Select: A. DNAME B. CNAME C. TXT D. MX

Answer: B. CNAME Failover is automatically handled by Amazon RDS so that you can resume database operations as quickly as possible without administrative intervention. When failing over, Amazon RDS simply flips the canonical name record (CNAME) for your DB instance to point at the standby, which is in turn promoted to become the new primary

When working with API gateways in AWS, what is the type of endpoints that are exposed? Please select: A. HTTP B. HTTPS C. JSON D. XML

Answer: B. HTTPS All of the endpoints created with API Gateway are HTTPS. Option A is incorrect because API Gateway does not support unencrypted (HTTP) endpoints. Options C and D are invalid because API Gateway exposes HTTPS endpoints only.

In VPCs with private and public subnets, the web servers should ideally be launched in which of the following? Please select: A. Public Subnet B. Private Subnets C. Either of them D. They should be launched outside of the VPC

Answer: B. Private Subnets In an ideal highly available and fault tolerant environment, you would expect the web servers (most likely EC2 instances) to be in a private subnet and to use an ELB which is in the public subnet, allowing access from the outside world only on whatever port the application allows through the ELB. With AWS, the ideal design should always be highly available and fault tolerant, with the best security possible.

What is an AWS service which can help protect web applications from common security threats from the outside world? Choose one answer from the options below Please select: A. NAT B. WAF C. SQS D. SES

Answer: B. WAF Option A is wrong because NAT is used to relay information from private subnets to the internet. Option C is wrong because SQS is used as a queuing service in AWS. Option D is wrong because SES is used as an email service in AWS. AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. AWS WAF gives you control over which traffic to allow or block to your web applications by defining customizable web security rules. You can use WAF to create custom rules that block common attack patterns, such as SQL injection or cross-site scripting, and rules that are designed for your specific application. New rules can be deployed within minutes, letting you respond quickly to changing traffic patterns. Also, AWS WAF includes a full-featured API that you can use to automate the creation, deployment, and maintenance of web security rules.

Which technique can be used to integrate AWS IAM with an on-premise LDAP (Lightweight Directory Access Protocol) directory service? Please select: A. Use an IAM policy that references the LDAP account identifiers and the AWS credentials B. Use SAML (Security Assertion Markup Language) to enable single sign-on between AWS and LDAP C. Use AWS Security Token Service from an identity broker to issue short-lived AWS credentials D. Use IAM roles to automatically rotate the IAM credentials when LDAP credentials are updated E. Use the LDAP credentials to restrict a group of users from launching EC2 instance types

Answer: C

What service from AWS can help manage the budgets for all resources in AWS? Choose one answer from the options below. Please select: A. Cost Explorer B. Cost Allocation Tags C. AWS Budgets D. Payment History

Answer: C. AWS Budgets A budget is a way to plan your usage and your costs (also known as spend data) and to track how close your usage and costs are to exceeding your budgeted amount. Budgets use data from Cost Explorer to provide you with a quick way to see your usage-to-date and current estimated charges from AWS, and to see how much your predicted usage accrues in charges by the end of the month. Budgets also compare the current estimated usage and charges to the amount that you indicated that you want to use or spend, and let you see how much of your budget has been used. AWS updates your budget status several times a day. Budgets track your unblended costs, subscriptions, and refunds. You can create budgets for different types of usage and different types of costs. For example, you can create a budget to see how many EC2 hours you have used, or how many GB you have stored in an S3 bucket. You can also create a budget to see how much you are spending on a particular service, or how often you call a particular API operation. Budgets use the same data filters as Cost Explorer.

You want to ensure that you keep a check on the Active Volumes, Active Snapshots and Elastic IP addresses you use so that you don't go beyond the service limit. Which of the below services can help in this regard? Please select: A. AWS CloudWatch B. AWS EC2 C. AWS Trusted Advisor D. AWS SNS

Answer: C. AWS Trusted Advisor An online resource to help reduce cost, increase performance, and improve security by optimizing your AWS environment, Trusted Advisor provides real time guidance to help you provision your resources following AWS best practices.

A t2.medium EC2 instance type must be launched with what type of AMI? Please select: A. An instance store Hardware Virtual Machine AMI B. An instance store Paravirtual AMI C. An Amazon EBS-backed Hardware Virtual Machine AMI D. An Amazon EBS-backed Paravirtual AMI

Answer: C. An Amazon EBS-backed Hardware Virtual Machine AMI

You are working for an Enterprise and have been asked to get a support plan in place from AWS. 1) 24x7 access to support 2) Access to the full set of Trusted Advisor checks Which of the following would meet these requirements while ensuring that cost is kept at a minimum? Please select: A. Basic B. Developer C. Business D. Enterprise

Answer: C. Business

In AWS, what is used for encrypting and decrypting login information to EC2 instances? Please select: A. Templates B. AMIs C. Key Pairs D. None of the above

Answer: C. Key Pairs Amazon EC2 uses public-key cryptography to encrypt and decrypt login information. Public-key cryptography uses a public key to encrypt a piece of data, such as a password, then the recipient uses the private key to decrypt the data. The public and private keys are known as a key pair.

Which of the following, when used along with the AWS Security Token Service, can be used to provide a single sign-on experience for existing users who are part of an organization using on-premise applications? Please select: A. OpenID Connect B. JSON C. SAML 2.0 D. OAuth

Answer: C. SAML 2.0 You can authenticate users in your organization's network, and then provide those users access to AWS without creating new AWS identities for them and requiring them to sign in with a separate user name and password. This is known as the single sign-on (SSO) approach to temporary access. AWS STS supports open standards like SAML 2.0, which lets you leverage your existing Microsoft Active Directory. Options A and D are incorrect because these are used when you want users to sign in using a well-known third-party identity provider such as Login with Amazon, Facebook, or Google. Option B is incorrect because JSON is a data exchange format, not a federation protocol.

An application is currently configured on an EC2 instance to process messages in SQS. The queue has been created with the default settings. The application is configured to just read the messages once a week. It has been noticed that not all the messages are being picked up by the application. What could be the issue? Please select: A. The application is configured for long polling, so some messages are not being picked up B. The application is configured for short polling, so some messages are not being picked up C. Some of the messages have surpassed the retention period defined for the queue D. Some of the messages don't have the right permissions to be picked up by the application

Answer: C. Some of the messages have surpassed the retention period defined for the queue When you create an SQS queue with the default options, the message retention period is 4 days. So if the application processes the messages just once a week, there is a chance that messages sent at the start of the week will be deleted before they can be picked up by the application.
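If weekly processing cannot change, one mitigation (shown as a hedged boto3 sketch with a hypothetical queue URL) is to raise the retention period from the 4-day default to the 14-day maximum:

```python
import boto3

# Minimal sketch: raise the SQS message retention period to the 14-day maximum.
sqs = boto3.client("sqs", region_name="us-east-1")

sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/transcode-jobs",  # hypothetical
    Attributes={"MessageRetentionPeriod": str(14 * 24 * 60 * 60)},  # value is in seconds
)
```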

You have created your own VPC and subnet in AWS. You have launched an instance in that subnet. You have noticed that the instance is not receiving a DNS name. Which of the below options could be a valid reason for this issue? Please select: A. The CIDR block for the VPC is invalid B. The CIDR block for the subnet is invalid C. The VPC configuration needs to be changed D. The subnet configuration needs to be changed

Answer: C. The VPC configuration needs to be changed If the DNS hostnames option of the VPC is not set to "Yes", then the instances launched in the subnet will not get DNS names. You can change the option by choosing your VPC and clicking on "Edit DNS Hostnames". Options A and B are invalid because if the CIDR blocks were invalid, the VPC or subnet would not have been created. Option D is invalid because the subnet configuration does not have an effect on DNS hostnames.
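For illustration, the same change can be made programmatically; a minimal boto3 sketch with a hypothetical VPC ID is below (only one attribute can be modified per call).

```python
import boto3

# Minimal sketch: enable DNS resolution and DNS hostnames for a VPC so that
# instances launched in its subnets receive DNS names (VPC ID is hypothetical).
ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.modify_vpc_attribute(VpcId="vpc-0123456789abcdef0", EnableDnsSupport={"Value": True})
ec2.modify_vpc_attribute(VpcId="vpc-0123456789abcdef0", EnableDnsHostnames={"Value": True})
```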

Which of the below resources cannot be tagged in AWS? Please select: A. Images B. EBS Volumes C. VPC Endpoint D. VPC

Answer: C. VPC Endpoint

A customer wants to leverage S3 and Glacier as part of their backup and archive infrastructure. The customer plans to use third-party software to support this integration. Which approach will limit the access of the third party software to only the S3 bucket named "company-backup"? Please select: A. A custom bucket policy limited to the S3 API in the Glacier archive "company-backup" B. A custom bucket policy limited to the S3 API in "company-backup" C. A custom IAM user policy limited to the S3 API for the Glacier archive "company-backup" D. A custom IAM user policy limited to the S3 API in "company-backup"

Answer: D

A company has a workflow that sends video files from their on-premise system to AWS for transcoding. They use EC2 worker instances that pull transcoding jobs from SQS. Why is SQS an appropriate service for this scenario? Please select : A. SQS guarantees the order of the messages. B. SQS synchronously provides transcoding output. C. SQS checks the health of the worker instances. D. SQS helps to facilitate horizontal scaling of encoding tasks.

Answer: D Even though SQS does guarantee the order of messages for FIFO queues, that is still not the reason why SQS is appropriate here. The normal reason for using SQS is decoupling of systems, which helps in horizontal scaling of AWS resources. SQS does not provide transcoding output or check the health of the worker instances; the health of the worker instances can be checked via ELB or CloudWatch.

Which of the following best describes what the CloudHSM has to offer? Choose the correct answer from the options given below Please select : A. An AWS service for generating API keys B. EBS Encryption method C. S3 encryption method D. A dedicated appliance that is used to store security keys

Answer: D The AWS CloudHSM service helps you meet corporate, contractual and regulatory compliance requirements for data security by using dedicated Hardware Security Module (HSM) appliances within the AWS cloud. With CloudHSM, you control the encryption keys and cryptographic operations performed by the HSM.

What is the best definition of an SQS message? Choose an answer from the options below Please select : A. A mobile push notification B. A set of instructions stored in an SQS queue that can be up to 512KB in size C. A notification sent via SNS D. A set of instructions stored in an SQS queue that can be up to 256KB in size

Answer: D The maximum size of an SQS message as given in the AWS documentation is shown below: Q: How do I configure the maximum message size for SQS? To configure the maximum message size, use the console or the SetQueueAttributes method to set the MaximumMessageSize attribute. This attribute specifies the limit in bytes that an SQS message can contain. Set this limit to a value between 1 KB and 256 KB.

To protect S3 data from both accidental deletion and accidental overwriting, you should Please select : A. Enable Multi-Factor Authentication (MFA) protected access B. Disable S3 delete using an IAM bucket policy C. Access S3 data using only signed URLs D. Enable S3 versioning on the bucket

Answer: D To protect objects in S3 from both accidental deletion and accidental overwriting, the methodology adopted by AWS is to enable versioning on the bucket. Versioning allows you to store every version of an object, so that if a version is deleted by mistake, you can recover other versions because the entire object is not deleted. Enabling Multi-Factor Authentication (MFA) protected access on S3 only adds an additional security layer so that users are properly authenticated before accessing the bucket, which is not what the question is asking. To enable versioning on S3, go to the bucket and enable versioning in its properties.
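A minimal boto3 sketch of enabling versioning (the bucket name is hypothetical):

```python
import boto3

# Minimal sketch: enable versioning so overwritten or deleted objects keep prior versions.
s3 = boto3.client("s3")

s3.put_bucket_versioning(
    Bucket="company-critical-data",  # hypothetical bucket name
    VersioningConfiguration={"Status": "Enabled"},
)
```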

You have instances running in your VPC. You have both production and development based instances running in the VPC. You want to ensure that people who are responsible for the development instances don't have access to work on the production instances, to ensure better security. Using policies, which of the following would be the best way to accomplish this? Choose the correct answer from the options given below. Please select: A. Launch the test and production instances in separate VPC's and use VPC peering B. Create an IAM policy with a condition which allows access to only instances that are used for production and development C. Launch the test and production instances in different AZ's and use Multi-Factor Authentication D. Define the tags on the test and production servers and add a condition to the IAM policy which allows access to specific tags

Answer: D You can easily add tags which define which instances are production and which are development instances and then ensure these tags are used when controlling access via an IAM policy.
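To make the tag-condition idea concrete, here is a hedged sketch of what such a policy could look like; the tag key/value, allowed actions, and policy name are hypothetical examples, not a prescribed policy.

```python
import json
import boto3

# Minimal sketch: an IAM policy that only allows selected EC2 actions on instances
# tagged Environment=Development (tag key/value and policy name are hypothetical).
dev_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ec2:StartInstances", "ec2:StopInstances", "ec2:RebootInstances"],
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {"StringEquals": {"ec2:ResourceTag/Environment": "Development"}},
        }
    ],
}

iam = boto3.client("iam")
iam.create_policy(PolicyName="DevInstancesOnly", PolicyDocument=json.dumps(dev_only_policy))
```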

You keep on getting an error while trying to attach an Internet Gateway to a VPC. What is the most likely cause of the error? Please select: A. You need to have a customer gateway defined first before attaching an internet gateway B. You need to have a public subnet defined first before attaching an internet gateway C. You need to have a private subnet defined first before attaching an internet gateway D. An internet gateway is already attached to the VPC

Answer: D. An internet gateway is already attached to the VPC

There are multiple issues reported from an EC2 instance hence it is required to analyze the log files. What can be used in AWS to store and analyze the log files? Please select: A. SQS B. S3 C. CloudTrail D. CloudWatch logs

Answer: D. CloudWatch logs

A customer is hosting their company website on a cluster of web servers that are behind a public-facing load balancer. The customer also uses Route53 to manage their public DNS. How should the customer configure the DNS zone apex record to point to the load balancer? Please select: A. Create an A record pointing to the IP address of the load balancer B. Create a CNAME record pointing to the load balancer DNS name C. Create an alias for CNAME record to the load balancer DNS name D. Create an A record aliased to the load balancer DNS name

Answer: D. Create an A record aliased to the load balancer DNS name Alias resource record sets are virtual records that work like CNAME records. But they differ from CNAME records in that they are not visible to resolvers. Resolvers only see the A record and the resulting IP address of the target record. As such, unlike CNAME records, alias resource record sets are available to configure a zone apex (also known as a root domain or naked domain) in a dynamic environment. So when you create a record set in the hosted zone pointing to the load balancer, you mark "Yes" for the Alias option and then choose the ELB which you have defined in AWS.
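A minimal boto3 sketch of creating the apex alias record (hosted zone IDs, domain, and ELB DNS name are hypothetical); note the AliasTarget's HostedZoneId is the load balancer's own zone ID, not the domain's.

```python
import boto3

# Minimal sketch: create an alias A record at the zone apex pointing at an ELB.
route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z111111QQQQQQQ",  # hypothetical hosted zone for example.com
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "example.com.",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "Z35SXDOTRQ7X7K",  # the ELB's own hosted zone ID
                    "DNSName": "my-load-balancer-1234567890.us-east-1.elb.amazonaws.com.",
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)
```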

An account has an ID of 085566624145. Which of the below mentioned URLs would you provide to the IAM user to log in to AWS? Please select : A. https://085566624145.signin.aws.amazon.com/console B. https://signin.085566624145.aws.amazon.com/console C. https://signin.aws.amazon.com/console D. https://aws.amazon.com/console

Answer: A After you create IAM users and passwords for each, users can sign in to the AWS Management Console for your AWS account with a special URL. By default, the sign-in URL for your account includes your account ID. You can create a unique sign-in URL for your account so that the URL includes a name instead of an account ID

You are building an automated transcription service in which Amazon EC2 worker instances process an uploaded audio file and generate a text file. You must store both of these files in the same durable storage until the text file is retrieved. You do not know what the storage capacity requirements are. Which storage option is both cost-efficient and scalable? Please select: A. Multiple Amazon EBS volume with snapshots B. A single Amazon Glacier vault C. A single Amazon S3 bucket D. Multiple Instance stores

Answer: C. A single Amazon S3 bucket

You are building a system to distribute confidential training videos to employees. Using CloudFront, what method would be used to serve content that is stored in S3, but not publicly accessible from S3 directly? Choose the correct answer from the options given below Please select : A. Create an Origin Access Identity (OAI) for CloudFront and grant access to the objects in your S3 bucket to that OAI B. Create an Identity and Access Management (IAM) user for CloudFront and grant access to the objects in your S3 bucket to that IAM user. C. Create an S3 bucket policy that lists the CloudFront distribution ID as the principal and the target bucket as the Amazon Resource Name (ARN) D. Add the CloudFront account security group

Answer: A You can optionally secure the content in your Amazon S3 bucket so users can access it through CloudFront but cannot access it directly by using Amazon S3 URLs. This prevents anyone from bypassing CloudFront and using the Amazon S3 URL to get content that you want to restrict access to. This step isn't required to use signed URLs, but we recommend it. To require that users access your content through CloudFront URLs, you perform the following tasks:
* Create a special CloudFront user called an origin access identity.
* Give the origin access identity permission to read the objects in your bucket.
* Remove permission for anyone else to use Amazon S3 URLs to read the objects.

An image named photo.jpg has been uploaded to a bucket named examplebucket in the us-east-1 region. Which of the below is the right URL to access the image, if it were made public? Consider that S3 is used as a static website. Please select: A. http://examplebucket.s3-website-us-east-1.amazonaws.com/photo.jpg B. http://examplebucket.website-us-east-1.amazonaws.com/photo.jpg C. http://examplebucket.s3-us-east-1.amazonaws.com/photo.jpg D. http://examplebucket.amazonaws.s3-website-us-east-1/photo.jpg

Answer: A The URL for an S3 static website is of the form: <bucket-name>.s3-website-<AWS-region>.amazonaws.com

What best describes Recovery Time Objective (RTO)? Choose the correct answer from the options below Please select : A. The time it takes after a disruption to restore operations back to its regular service level. B. Minimal version of your production environment running on AWS. C. A full clone of your production environment. D. Acceptable amount of data loss measured in time.

Answer: A The recovery time objective (RTO) is the targeted duration of time and a service level within which a business process must be restored after a disaster (or disruption) in order to avoid unacceptable consequences associated with a break in business continuity

Which of the below mentioned services are the building blocks for creating a basic high availability architecture in AWS? Select 2 options. Please select: A. EC2 B. SQS C. Elastic Load Balancer D. CloudWatch

Answer: A & C (EC2 & ELB) Running EC2 instances that host your application in multiple subnets, and therefore multiple Availability Zones, and placing them behind an Elastic Load Balancer is the basic building block of a high availability architecture in AWS.
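
The sketch below illustrates that building block with boto3. It uses the newer Application Load Balancer (elbv2) API rather than the Classic ELB assumed by the question, and the VPC, subnet, and instance IDs are hypothetical.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Hypothetical resources: one subnet per Availability Zone and one instance in each.
subnets = ["subnet-aaaa1111", "subnet-bbbb2222"]
instances = ["i-0aaa111122223333a", "i-0bbb444455556666b"]
vpc_id = "vpc-0123abcd"

lb = elbv2.create_load_balancer(
    Name="ha-web-alb",
    Subnets=subnets,            # spanning two AZs gives the load balancer AZ redundancy
    Scheme="internet-facing",
    Type="application",
)

tg = elbv2.create_target_group(
    Name="ha-web-targets", Protocol="HTTP", Port=80, VpcId=vpc_id
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# Register the instances from both AZs and forward HTTP traffic to them.
elbv2.register_targets(TargetGroupArn=tg_arn, Targets=[{"Id": i} for i in instances])
elbv2.create_listener(
    LoadBalancerArn=lb["LoadBalancers"][0]["LoadBalancerArn"],
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)
```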

Which of the following programming languages have an officially supported AWS SDK? Select 2. Please select: A. PHP B. Pascal C. Java D. SQL E. Perl

Answer: A & C (PHP, Java) This is as per the AWS Documentation: * Java * .NET * Node.js * PHP * Python * Ruby * Go * C++ * AWS Mobile SDK

Which of the following are true regarding encrypted Amazon Elastic Block Store (EBS) volumes? Choose 2 answers Please select : A. Supported on all Amazon EBS volume types B. Snapshots are automatically encrypted C. Available to all instance types D. Existing volumes can be encrypted E. Shared volumes can be encrypted

Answer: A and B Option C is wrong because Amazon EBS encryption is supported only on certain instance types, not all of them. Option D is wrong because existing volumes cannot be encrypted; encryption must be enabled when the volume is created. Option E is wrong because shared volumes cannot be encrypted.

Which of the following are true regarding encrypted Amazon Elastic Block Store (EBS) volumes? Choose two answers from the options given below Please select : A. Supported on all Amazon EBS volume types B. Snapshots are automatically encrypted C. Available to all instance types D. Existing volumes can be encrypted E. Shared volumes can be encrypted

Answer: A and B The AWS documentation notes that EBS encryption is available for all volume types: you can create encrypted General Purpose SSD (gp2), Provisioned IOPS SSD (io1), Throughput Optimized HDD (st1), and Cold HDD (sc1) volumes up to 16 TiB in size. The snapshots of encrypted EBS volumes are automatically encrypted, as stated in the AWS documentation.
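
As an illustration, the following boto3 sketch (with a hypothetical region and Availability Zone) creates an encrypted gp2 volume and then snapshots it; the snapshot comes out encrypted automatically.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Encryption must be chosen when the volume is created; any supported volume type
# (gp2, io1, st1, sc1) accepts Encrypted=True.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,              # GiB
    VolumeType="gp2",
    Encrypted=True,
)

# Snapshots taken from an encrypted volume are encrypted automatically.
snapshot = ec2.create_snapshot(
    VolumeId=volume["VolumeId"],
    Description="snapshot of encrypted volume",
)
print(snapshot["Encrypted"])  # True
```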

Your company has moved a legacy application from an on-premise data center to the cloud. The legacy application requires a static IP address hard-coded into the backend, which prevents you from deploying the application with high availability and fault tolerance using the ELB. Which steps would you take to apply high availability and fault tolerance to this application? Select 2 options. Please select : A. Write a custom script that pings the health of the instance, and, if the instance stops responding, switches the elastic IP address to a standby instance B. Ensure that the instance it's using has an elastic IP address assigned to it C. Do not migrate the application to the cloud until it can be converted to work with the ELB and Auto Scaling D. Create an AMI of the instance and launch it using Auto Scaling which will deploy the instance again if it becomes unhealthy

Answer: A and B Because the application depends on a hard-coded static IP address, the best option is to assign an Elastic IP address to the instance and run a custom health-check script that remaps the Elastic IP to a standby instance when the primary stops responding.
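
The following is a minimal sketch of such a failover script, assuming hypothetical identifiers and a simple HTTP health check; a real deployment would run it on a schedule and add retries.

```python
import urllib.request

import boto3

ec2 = boto3.client("ec2")

# Hypothetical identifiers used only for illustration.
ALLOCATION_ID = "eipalloc-0123456789abcdef0"   # the Elastic IP the application is hard-coded to
PRIMARY_HEALTH_URL = "http://203.0.113.10/health"
STANDBY_INSTANCE_ID = "i-0standby00000000000"

def primary_is_healthy(timeout=5):
    """Return True if the primary instance answers its health check."""
    try:
        with urllib.request.urlopen(PRIMARY_HEALTH_URL, timeout=timeout) as resp:
            return resp.status == 200
    except Exception:
        return False

if not primary_is_healthy():
    # Remap the Elastic IP to the standby instance; the hard-coded address never changes.
    ec2.associate_address(
        AllocationId=ALLOCATION_ID,
        InstanceId=STANDBY_INSTANCE_ID,
        AllowReassociation=True,
    )
```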

As an AWS administrator you are trying to convince a team to use RDS Read Replicas. What are two benefits of using read replicas? Choose the 2 correct answers from the options below Please select : A. Creates elasticity in RDS B. Allows both reads and writes C. Improves performance of the primary database by taking workload from it D. Automatic failover in the case of Availability Zone service failures

Answer: A and C By creating an RDS read replica you can scale out reads for your application, which adds elasticity, and the replica offloads read traffic from the primary database, improving its performance. Read replicas do not accept write operations, so option B is wrong, and automatic failover across Availability Zones is provided by Multi-AZ deployments rather than read replicas, so option D is wrong.
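
A minimal boto3 sketch follows, assuming a hypothetical primary instance identifier; read traffic would then be pointed at the replica's endpoint.

```python
import boto3

rds = boto3.client("rds")

# Create a read replica of an existing MySQL primary (identifiers are hypothetical).
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="mydb-read-replica-1",
    SourceDBInstanceIdentifier="mydb-primary",
    DBInstanceClass="db.t3.medium",
)
# Read-heavy queries go to the replica's endpoint, taking load off the primary;
# writes must still go to the primary instance.
```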

You currently have an EC2 instance hosting a web application. The number of users is expected to increase in the coming months and hence you need to add more elasticity to your setup. Which of the following methods can help add elasticity to your existing setup. Choose 2 answers from the options given below: Please select: A. Setup your web app on more EC2 instances and set them behind an Elastic Load Balancer B. Setup an Elastic Cache in front of the EC2 instance C. Setup your web app on more EC2 instances and use Route53 to route requests accordingly D. Setup DynamoDB behind your EC2 instances

Answer: A and C The Elastic Load Balancer is one of the most suitable ways to add elasticity to your application. The other alternative is to create a weighted routing policy in Route 53. Weighted resource record sets let you associate multiple resources with a single DNS name, and the weighted routing policy enables Route 53 to route traffic to different resources in specified proportions (weights). To create a group of weighted resource record sets, create two or more resource record sets that have the same combination of DNS name and type, and assign each one a unique identifier and a relative weight. Option B is not valid because a cache will only speed up reads and will not add the desired elasticity to your application. Option D is not valid because there is no mention of a persistence layer in the question that would require DynamoDB.
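
The sketch below shows what such a weighted policy could look like in boto3, with a hypothetical hosted zone and two made-up server addresses splitting traffic roughly 70/30.

```python
import boto3

route53 = boto3.client("route53")

def weighted_record(name, identifier, weight, ip):
    """Build one weighted A record; records sharing a name and type split traffic by weight."""
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": name,
            "Type": "A",
            "SetIdentifier": identifier,   # unique identifier per weighted record set
            "Weight": weight,
            "TTL": 60,
            "ResourceRecords": [{"Value": ip}],
        },
    }

route53.change_resource_record_sets(
    HostedZoneId="Z123EXAMPLE",            # hypothetical hosted zone ID
    ChangeBatch={
        "Changes": [
            weighted_record("www.example.com", "web-server-1", 70, "203.0.113.10"),
            weighted_record("www.example.com", "web-server-2", 30, "203.0.113.20"),
        ]
    },
)
```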

Which of the following statements about S3 are true? Please choose 2 options Please select : A. The total volume of data and number of objects you can store are unlimited B. Individual Amazon S3 objects can range in size from a minimum of 0 bytes to a maximum of 1 terabyte C. You can use Multi-Object Delete to delete large numbers of objects from Amazon S3 D. You can store only objects of a particular format in S3

Answer: A and C The total volume of data and number of objects you can store are unlimited. Individual S3 objects can range in size from a minimum of 0 bytes to a maximum of 5 TB; the largest object that can be uploaded in a single PUT is 5 GB, and for objects larger than 100 MB customers should consider using the Multipart Upload capability. You can use Multi-Object Delete to delete large numbers of objects from S3: this feature allows you to send multiple object keys in a single request to speed up your deletes, and there is no charge for using it. Option B is incorrect because, as per the S3 definition, the maximum object size is 5 TB. Option D is incorrect because you can store objects of virtually any type.
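
A short boto3 sketch of Multi-Object Delete follows, with a hypothetical bucket and keys; up to 1,000 keys can be deleted per request.

```python
import boto3

s3 = boto3.client("s3")

response = s3.delete_objects(
    Bucket="examplebucket",                     # hypothetical bucket
    Delete={
        "Objects": [
            {"Key": "logs/2017-01-01.log"},
            {"Key": "logs/2017-01-02.log"},
            {"Key": "logs/2017-01-03.log"},
        ],
        "Quiet": True,                          # only report errors in the response
    },
)
```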

What are the main benefits of AWS regions? Select 2 options. Please select: A. Regions allow you to design applications to conform to specific laws and regulations for specific parts of the world. B. All regions offer the same service at the same prices C. Regions allow you to choose a location in any country in the world. D. Regions allow you to place AWS resources in the area of the world closest to your customers who access those resources.

Answer: A and D AWS has built data centers across the world so that you can place resources as close to your customers as possible. Regions also let you keep resources in a particular part of the world so that applications conform to the specific laws and regulations that apply there. AWS does not have Regions in every country of the world, hence option C is invalid. Services and prices differ from Region to Region, hence option B is invalid.

Which of the following services natively encrypts data at rest within an AWS region? (Choose two) Please select: A. AWS Storage Gateway B. Amazon DynamoDB C. Amazon CloudFront D. Amazon Glacier E. Amazon Simple Queue Service

Answer: A and D This is given in the AWS documentation: Q: Is my data encrypted? Yes, all data in the service is encrypted on the server side. Amazon Glacier handles key management and key protection for you. Amazon Glacier uses one of the strongest block ciphers available, 256-bit Advanced Encryption Standard (AES-256); 256 bits is the largest key size defined for AES. Q: What sort of encryption does file gateway use to protect my data? All data transferred between the gateway and AWS storage is encrypted using SSL. By default, all data stored in S3 is encrypted server side with Amazon S3-Managed Encryption Keys (SSE-S3). For each file share, you can optionally configure your objects to be encrypted with AWS KMS-Managed Keys using SSE-KMS.

A company is preparing to give AWS Management Console access to developers. Company policy mandates identity federation and role-based access control. Roles are currently assigned using groups in the corporate Active Directory. What combination of the following will give developers access to the AWS console? Choose 2 answers Please select : A. AWS Directory Service AD Connector B. AWS Directory Service Simple AD C. AWS Identity and Access Management groups D. AWS Identity and Access Management roles E. AWS Identity and Access Management users

Answer: A and D To give federated Active Directory users console access, you connect the corporate directory to AWS and map Active Directory users or groups to IAM roles, which the users then assume to sign in. If suitable IAM roles already exist, you simply assign the Active Directory users or groups to those existing roles. AWS Directory Service provides multiple ways to use Microsoft Active Directory with other AWS services, so you can choose the directory option with the features you need at a cost that fits your budget. Use Simple AD if you need an inexpensive Active Directory-compatible service with the common directory features. Select AWS Directory Service for Microsoft Active Directory (Enterprise Edition) for a feature-rich managed Microsoft Active Directory hosted in the AWS cloud. The third option, AD Connector, lets you connect your existing on-premises Active Directory to AWS, which is what this scenario calls for.

What are the 2 main components of Auto Scaling? Select 2 options. Please Select: A. Launch Configuration B. CloudTrail C. CloudWatch D. Auto Scaling Groups

Answer: A and D (Launch Configuration and Auto Scaling Groups). The launch configuration defines what to launch (AMI, instance type, key pair, security groups), while the Auto Scaling group defines where to launch and how many instances to keep running (subnets, minimum, maximum, and desired capacity).
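
A minimal boto3 sketch of both components follows; the AMI, security group, and subnet IDs are hypothetical.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# The launch configuration: what to launch.
autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-lc-v1",
    ImageId="ami-0123456789abcdef0",
    InstanceType="t2.micro",
    SecurityGroups=["sg-0123abcd"],
)

# The Auto Scaling group: where to launch and how many instances to keep running.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchConfigurationName="web-lc-v1",
    MinSize=2,
    MaxSize=6,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",  # subnets in two AZs
)
```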

You are using an m1.small EC2 Instance with one 300 GB EBS volume to host a relational database. You determined that write throughput to the database needs to be increased. Which of the following approaches can help achieve this? Choose 2 answers Please select: A. Use an array of EBS volumes B. Enable Multi-AZ mode C. Place the instance in an Auto Scaling Group D. Add an EBS volume and place into RAID 5 E. Increase the size of the EC2 instance F. Put the database behind an ELB

Answer: A and E With Amazon EBS, you can use any of the standard RAID configurations that you can use with a traditional bare metal server, as long as that particular RAID configuration is supported by the operating system for your instance. This is because all RAID is accomplished at the software level. For greater I/O performance than you can achieve with a single volume, RAID 0 can stripe multiple volumes together; for on-instance redundancy, RAID 1 can mirror two volumes together. In addition, because an m1.small has limited compute and I/O capacity, moving to a larger instance type also helps increase write throughput.
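
As a rough illustration of the volume-array option, the boto3 sketch below provisions and attaches two extra volumes (instance ID and device names are hypothetical); the striping itself is then done inside the operating system, for example with mdadm.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

instance_id = "i-0123456789abcdef0"   # hypothetical instance hosting the database
az = "us-east-1a"                     # must match the instance's Availability Zone

# Provision two volumes that will be striped together (RAID 0) for higher write throughput.
for device in ["/dev/sdf", "/dev/sdg"]:
    volume = ec2.create_volume(AvailabilityZone=az, Size=150, VolumeType="gp2")
    ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
    ec2.attach_volume(VolumeId=volume["VolumeId"], InstanceId=instance_id, Device=device)

# On the instance itself, the two devices would then be combined into a RAID 0 array
# at the OS level (e.g. with mdadm), since RAID on EBS is purely software RAID.
```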

You are planning to use the MySQL RDS in AWS. You have a requirement to ensure that you are able to recover from a database crash. Which of the below is not a recommended practice when you want to fulfill this requirement? Please select: A. Ensure that automated backups are enabled for the RDS B. Ensure that you use the MyISAM storage engine for MySQL C. Ensure that the database does not grow too large D. Ensure that file sizes for the RDS is well under 6 TB

Answer: B The recommended best practices for MySQL on RDS are as follows. On a MySQL DB instance, avoid letting tables in your database grow too large: provisioned storage limits restrict the maximum size of a MySQL table file to 16 TB, so partition your large tables so that file sizes stay well under that limit. This approach can also improve performance and recovery time. The Point-in-Time Restore and snapshot restore features of RDS for MySQL require a crash-recoverable storage engine and are supported for the InnoDB storage engine only. Although MySQL supports multiple storage engines with varying capabilities, not all of them are optimized for crash recovery and data durability. For example, the MyISAM storage engine does not support reliable crash recovery and might prevent a Point-in-Time Restore or snapshot restore from working as intended, which can result in lost or corrupt data when MySQL is restarted after a crash.

A Company provides an online service that utilizes SQS to decouple system components for scalability. The SQS consumer's EC2 instances poll the queue as often as possible to keep end-to-end throughput as high as possible. However, it is noticed that polling in tight loops is burning CPU cycles and increasing costs with empty responses. What can be done to reduce the number of empty responses? Choose the correct option. Please select : A. Scale the component making the request using Auto Scaling based off the number of messages in the queue B. Enable long polling by setting the ReceiveMessageWaitTimeSeconds to a number > 0 C. Enable short polling on the SQS queue by setting the ReceiveMessageWaitTimeSeconds to a number > 0 D. Enable short polling on the SQS message by setting the ReceiveMessageWaitTimeSeconds to a number = 0

Answer: B By default an SQS queue uses short polling: a ReceiveMessage call returns immediately even when the queue is empty, so polling in a tight loop generates many empty responses. With long polling, the ReceiveMessage call waits up to ReceiveMessageWaitTimeSeconds (a maximum of 20 seconds) for a message to arrive before returning, which eliminates most empty responses and reduces the wasted CPU cycles and cost. To enable it, set the ReceiveMessageWaitTimeSeconds attribute of the queue (or the WaitTimeSeconds request parameter) to a value greater than 0.
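
The sketch below shows both ways to enable long polling with boto3; the queue URL is hypothetical.

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/work-queue"  # hypothetical

# Option 1: enable long polling on the queue itself so every consumer benefits.
sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={"ReceiveMessageWaitTimeSeconds": "20"},
)

# Option 2: request long polling per call; ReceiveMessage blocks up to 20 s for a message.
messages = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20,
)
```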

A customer needs to capture all client connection information from their load balancer every five minutes. The company wants to use this data for analyzing traffic patterns and troubleshooting their applications. Which of the following options meets the customer requirements? Please select : A. Enable AWS CloudTrail for the load balancer. B. Enable access logs on the load balancer. C. Install the Amazon CloudWatch Logs agent on the load balancer. D. Enable Amazon CloudWatch metrics on the load balancer.

Answer: B Elastic Load Balancing provides access logs that capture detailed information about requests or connections sent to your load balancer. Each log entry contains information such as the time the request was received, the client's IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and to troubleshoot issues, and for a Classic Load Balancer they can be published every 5 or 60 minutes, which satisfies the five-minute requirement. Perform the following steps to enable access logging in the console: Step 1) Go to the Description tab for your load balancer. Step 2) Go to the Attributes section and click on Edit Attributes. Step 3) In the next screen, enable Access logging and choose the S3 bucket where the logs should be delivered.
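
For automation, the same setting can be applied through the API. The boto3 sketch below targets a Classic Load Balancer with hypothetical names and publishes logs every 5 minutes.

```python
import boto3

elb = boto3.client("elb")  # Classic Load Balancer API

elb.modify_load_balancer_attributes(
    LoadBalancerName="my-load-balancer",           # hypothetical load balancer name
    LoadBalancerAttributes={
        "AccessLog": {
            "Enabled": True,
            "S3BucketName": "my-elb-access-logs",  # hypothetical bucket for the logs
            "S3BucketPrefix": "production",
            "EmitInterval": 5,                     # minutes; 5 or 60 are supported
        }
    },
)
```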

An instance is launched into a VPC subnet with the network ACL configured to allow all inbound traffic and deny all outbound traffic. The instance's security group is configured to allow SSH from any IP address and deny all outbound traffic. What changes need to be made to allow SSH access to the instance? Please select : A. The outbound security group needs to be modified to allow outbound traffic. B. The outbound network ACL needs to be modified to allow outbound traffic. C. Nothing, it can be accessed from any IP address using SSH. D. Both the outbound security group and outbound network ACL need to be modified to allow outbound traffic.

Answer: B For an EC2 instance to allow SSH, the security group and network ACL can be configured as follows for inbound and outbound traffic. Security Group - SSH: * Inbound: Allow * Outbound: Deny. Network ACL - SSH: * Inbound: Allow * Outbound: Allow. The network ACL has to allow both inbound and outbound traffic because network ACLs are stateless: responses to allowed inbound traffic are subject to the rules for outbound traffic (and vice versa). Security groups, in contrast, are stateful, so if an incoming request is allowed, the outgoing response is automatically allowed as well.
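
Because the network ACL is stateless, the outbound rule typically needs to cover the ephemeral ports that SSH responses return on. The boto3 sketch below adds such an outbound allow rule; the ACL ID and rule number are hypothetical.

```python
import boto3

ec2 = boto3.client("ec2")

ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",      # hypothetical network ACL
    RuleNumber=100,
    Protocol="6",                              # TCP
    RuleAction="allow",
    Egress=True,                               # this is an outbound rule
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 1024, "To": 65535},     # ephemeral ports used by return traffic
)
```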

An existing application stores sensitive information on a non-boot Amazon EBS data volume attached to an EC2 instance. Which of the following approaches would protect the sensitive data on an Amazon EBS volume? Please select: A. Upload your customer keys to AWS CloudHSM. Associate the EBS volume with CloudHSM . Remount the EBS volume B. Create and mount a new, encrypted EBS volume. Move the data to the new volume. Delete the old EBS volume. C. Unmount the EBS volume. Toggle the encryption attribute to True. Re-mount the EBS volume D. Snapshot the current EBS volume. Restore the snapshot to a new, encrypted EBS volume. Mount the EBS volume

Answer: B Among the listed choices, the only workable option is to create a new encrypted volume, move the data to it, and delete the old volume. Option A is wrong because you cannot encrypt a volume after it has been created; you would need to encrypt the data locally if you wanted to protect it on the existing volume. Option C is wrong because unmounting a volume does not let you toggle encryption; encryption has to be chosen at volume creation. Option D is wrong because a snapshot of an unencrypted volume is itself unencrypted. You cannot create an encrypted snapshot of an unencrypted volume or change an existing volume from unencrypted to encrypted; you have to create a new encrypted volume and transfer the data to it. The other approach is to encrypt the data by means of snapshot copying: 1. Create a snapshot of your unencrypted EBS volume; this snapshot is also unencrypted. 2. Copy the snapshot while applying encryption parameters; the resulting target snapshot is encrypted. 3. Restore the encrypted snapshot to a new volume, which is also encrypted. But that option is not listed.
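
For reference, the snapshot-copy path described above looks roughly like the boto3 sketch below (volume ID, region, and Availability Zone are hypothetical).

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

unencrypted_volume_id = "vol-0123456789abcdef0"   # hypothetical existing data volume

# 1. Snapshot the unencrypted volume (the snapshot is also unencrypted).
snap = ec2.create_snapshot(VolumeId=unencrypted_volume_id)
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# 2. Copy the snapshot while applying encryption; the copy is encrypted.
encrypted_copy = ec2.copy_snapshot(
    SourceSnapshotId=snap["SnapshotId"],
    SourceRegion="us-east-1",
    Encrypted=True,
)

# 3. Restore the encrypted snapshot to a new volume, which is also encrypted.
new_volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    SnapshotId=encrypted_copy["SnapshotId"],
)
```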

There is a requirement by a company that does online credit card processing to have a secure application environment on AWS. They are trying to decide on whether to use KMS or CloudHSM. Which of the following statements is right when it comes to CloudHSM and KMS? Choose the correct answer from the options given below A. It probably doesn't matter as they both do the same thing B. AWS CloudHSM does not support the processing, storage, and transmission of credit card data by a merchant or service provider, as it has not been validated as being compliant with Payment Card Industry (PCI) Data Security Standard (DSS); hence, you will need to use KMS C. KMS is probably adequate unless additional protection is necessary for some applications and data that are subject to strict contractual or regulatory requirements for managing cryptographic keys, then HSM should be used D. AWS CloudHSM should always be used for any pay

Answer: C AWS Key Management Service (KMS) is a managed service that makes it easy for you to create and control the encryption keys used to encrypt your data, and it uses Hardware Security Modules (HSMs) to protect the security of your keys. This is sufficient for basic key-management needs. For stricter security requirements you can use CloudHSM. The AWS CloudHSM service helps you meet corporate, contractual, and regulatory compliance requirements for data security by using dedicated Hardware Security Module (HSM) appliances within the AWS cloud. With CloudHSM, you control the encryption keys and the cryptographic operations performed by the HSM.

A company is deploying a new two-tier web application in AWS. The company wants to store their most frequently used data so that the response time for the application is improved. Which AWS service provides the solution for the company's requirements? Please select : A. MySQL Installed on two Amazon EC2 Instances in a single Availability Zone B. Amazon RDS for MySQL with Multi-AZ C. Amazon ElastiCache D. Amazon DynamoDB

Answer: C Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory data store or cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory data stores instead of relying entirely on slower disk-based databases. Option A is wrong because installing MySQL on multiple EC2 instances does nothing to serve the most frequently used data faster. Option B is wrong because a Multi-AZ RDS deployment improves availability, not response time, so it does not satisfy the requirement. Option D is wrong because DynamoDB is a database, not a caching layer.

You are a solutions architect working for a large digital media company. Your company is migrating their production estate to AWS and you are in the process of setting up access to the AWS console using IAM. You have created 5 users for your system administrators. What further steps do you need to take to enable your system administrators to get access to the AWS console? Please select: A. Generate an Access Key ID & Secret Access Key, and give these to your system admins B. Enable multi-factor authentication on their accounts and define a password policy C. Generate a password for each user created and give these passwords to your system admins D. Give the system admins the secret access key and access key id, and tell them to use these credentials to log in to the AWS console

Answer: C To allow the users to sign in to the AWS Management Console, you need to create a console password (login profile) for each user. Access keys are for programmatic access via the API, CLI, and SDKs, not for console sign-in.
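
A minimal boto3 sketch follows; the user names and placeholder password scheme are made up, and a real script would generate strong random passwords.

```python
import boto3

iam = boto3.client("iam")

# Create console sign-in passwords (login profiles) for the five existing admins.
admins = ["admin1", "admin2", "admin3", "admin4", "admin5"]
for user in admins:
    iam.create_login_profile(
        UserName=user,
        Password="TempPassw0rd!" + user,   # placeholder only; use a password generator
        PasswordResetRequired=True,        # force a password change at first sign-in
    )
```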

Perfect Forward Secrecy is used to offer SSL/TLS cipher suites for which two AWS services? Choose the correct answer from the options below Please select : A. EC2 and S3 B. CloudTrail and CloudWatch C. Cloudfront and Elastic Load Balancing D. Trusted advisor and GovCloud

Answer: C Perfect Forward Secrecy is currently available for CloudFront and Elastic Load Balancing. Both services offer SSL/TLS cipher suites that use ephemeral session keys, so a compromise of the server's long-term private key cannot be used to decrypt previously captured sessions.

You have a web application running on six Amazon EC2 instances, consuming about 45% of resources on each instance. You are using auto-scaling to make sure that six instances are running at all times. The number of requests this application processes is consistent and does not experience spikes. The application is critical to your business and you want high availability at all times. You want the load to be distributed evenly between all instances. You also want to use the same Amazon Machine Image (AMI) for all instances. Which of the following architectural choices should you make? Please select : A. Deploy 6 EC2 instances in one availability zone and use Amazon Elastic Load Balancer. B. Deploy 3 EC2 instances in one region and 3 in another region and use Amazon Elastic Load Balancer. C. Deploy 3 EC2 instances in one availability zone and 3 in another availability zone and use Amazon Elastic Load Balancer. D. Deplo

Answer: C Option A is incorrect because the question asks for high availability; with all six instances in one Availability Zone, an AZ outage takes down the entire application. Options B and D are wrong because an Elastic Load Balancer operates within a single region and cannot distribute traffic across regions. The right option is C: spreading the instances across two Availability Zones behind an ELB, with Auto Scaling maintaining the instance count, gives an elastic, scalable, and highly available web tier. By scalable we mean that the Auto Scaling process will increase or decrease the number of EC2 instances as required.

Which of the following is incorrect with regards to Private IP addresses? Please select: A. In Amazon EC2 classic, the private IP addresses are only returned to Amazon EC2 when the instance is stopped or terminated B. In Amazon VPC, an instance retains its private IP addresses when the instance is stopped C. In Amazon VPC, an instance does NOT retain its private IP addresses when the instance is stopped D. In Amazon EC2 classic, the private IP address is associated exclusively with the instance for its lifetime

Answer: C The following is true with regards to Private IP addressing: For instances launched in a VPC, a private IPv4 address remains associated with the network interface when the instance is stopped and restarted, and is released when the instance is terminated For instances launched in EC2-Classic, we release the private IPv4 address when the instance is stopped or terminated. If you restart your stopped instance, it receives a new private IPv4 address

Which of the following can constitute the term of a "Golden Image"? Please select: A. This is the basic AMI which is available in AWS B. This refers to an instance which has been bootstrapped C. This refers to an AMI that has been constructed from a customized image D. This refers to a special type of Linux AMI

Answer: C You can customize an EC2 instance and then save its configuration by creating an AMI. You can launch as many instances from the AMI as you need, and they will all include those customizations that you've made. Each time you want to change your configuration you will need to create a new golden image, so you will need to have a versioning convention to manage your golden images over time.
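
Baking a golden image is a single API call once the instance has been customized. The boto3 sketch below uses a hypothetical instance ID and a version number embedded in the AMI name.

```python
import boto3

ec2 = boto3.client("ec2")

image = ec2.create_image(
    InstanceId="i-0123456789abcdef0",          # the customized instance
    Name="web-app-golden-image-v1.2.0",        # version in the name aids traceability
    Description="Golden image with application, agents, and hardening baked in",
)
print(image["ImageId"])   # launch all future instances from this AMI
```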

How long can messages live in a SQS queue? Please select: A. 12 hours B. 10 days C. 14 days D. 1 year

Answer: C. 14 days The SQS message retention period is configurable from 60 seconds up to a maximum of 14 days (1,209,600 seconds); the default is 4 days.
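
As an illustration, the boto3 sketch below raises a queue's retention period to the 14-day maximum; the queue URL is hypothetical.

```python
import boto3

sqs = boto3.client("sqs")

sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/work-queue",
    Attributes={"MessageRetentionPeriod": "1209600"},  # seconds: 14 days (default is 4 days)
)
```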

