AWS-SAA-2


D. Access keys: API calls are signed with them.

After creating a new IAM user, which of the following must be done before they can successfully make API calls? A. Add a password to the user. B. Enable Multi-Factor Authentication for the user. C. Assign a Password Policy to the user. D. Create a set of Access Keys for the user.

Answer: B. infrequently accessed data. C. data archives. Think "cold storage" and the name Glacier makes a bit more sense. AWS includes a number of storage solutions, and to pass the exam you are expected to know the appropriate use of all of them. I picture them on the following scale:

Instance (aka ephemeral, aka local) storage is a device, like a RAM disk, physically attached to your server (your EC2 instance), and characteristically it gets completely wiped every reboot. Naturally this makes it suitable for temporary storage, but nothing that needs to survive something as simple as a reboot. You can store the operating system on there if nothing important gets stored there after the instance is started (and bootstrapping completes). Micro-sized instance types (low-specification servers) don't have ephemeral storage. Some larger, more expensive instance types come with SSD instance storage for higher performance.

Elastic Block Store (EBS) is a service where you buy devices more akin to a hard disk that can be attached to one (and only one, at the time of writing) EC2 instance. They can be set to persist after an instance is restarted. They can be easily "snapshotted", i.e. backed up in a way that lets you create a new identical device and attach it to the same or another EC2 instance. One other thing to know about EBS is that you can pay extra for what is known as provisioned IOPS, which means guaranteed (and very high, if you like) disk read and write speeds.

S3 is a cloud file storage service more akin to Dropbox or Google Drive. It is possible to attach a storage volume created and stored in S3 to an EC2 instance, but this is no longer recommended (EBS is preferable). S3 is instead for storing things like your EC2 server images (Amazon Machine Images, aka AMIs), static content (e.g. for a web site), input or output data files (like you'd use an SFTP site for), or anything else you'd treat like a file. An S3 store is called a bucket; it lives in one specified region but has a globally unique name. S3 integrates extremely well with the CloudFront content distribution service, which caches content at a much more globally distributed set of edge locations (thus improving performance and saving bandwidth costs).

Glacier comes next, as basically a variant of S3 where you expect to view the files either hardly ever or never again, for example old backups or data kept only for compliance purposes. Instead of a bucket, Glacier files are stored in a Vault. Instead of getting instant access to files, you have to make a retrieval request and wait a number of hours. S3 and Glacier play very nicely together because you can set up Lifecycles for S3 objects which cause them to be moved to Glacier after a certain trigger, e.g. a certain elapsed "expiry" time passing.
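The S3-to-Glacier lifecycle hand-off is easy to script. A minimal boto3 sketch (bucket name, prefix and the 90-day trigger are all made up for illustration):

    import boto3

    s3 = boto3.client("s3")
    s3.put_bucket_lifecycle_configuration(
        Bucket="my-example-bucket",  # hypothetical bucket
        LifecycleConfiguration={
            "Rules": [{
                "ID": "archive-old-backups",
                "Filter": {"Prefix": "backups/"},  # rule applies only under this prefix
                "Status": "Enabled",
                # objects transition to Glacier 90 days after creation
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            }]
        },
    )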

Amazon Glacier is designed for: (Choose 2 answers) A. active database storage. B. infrequently accessed data. C. data archives. D. frequently accessed data. E. cached session data.

C & D Explanation: Auto Scaling determines whether there are instances in multiple Availability Zones. If so, it selects the Availability Zone with the most instances and at least one instance that is not protected from scale in. http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/AutoScalingBehavior.InstanceTermination.html

An Auto Scaling group spans 3 AZs and currently has 4 running EC2 instances. When Auto Scaling needs to terminate an EC2 instance, by default Auto Scaling will: Choose 2 answers A. Allow at least five minutes for Windows/Linux shutdown scripts to complete before terminating the instance. B. Terminate the instance with the least active network connections. If multiple instances meet this criterion, one will be randomly selected. C. Send an SNS notification, if configured to do so. D. Terminate an instance in the AZ which currently has 2 running EC2 instances. E. Randomly select one of the 3 AZs, and then terminate an instance in that AZ.

B. After you no longer need an Amazon EBS volume, you can delete it. After deletion, its data is gone and the volume can't be attached to any instance. However, before deletion, you can store a snapshot of the volume, which you can use to re-create the volume later. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-deleting-volume.html

Before I delete an EBS volume, what can I do if I want to recreate the volume later? A. Create a copy of the EBS volume (not a snapshot) B. Store a snapshot of the volume C. Download the content to an EC2 instance D. Back up the data in to a physical disk

B The requirements state "Users will log into the game using their existing social media account to streamline data capture." This is what Cognito is used for, i.e. Web Identity Federation. Amazon also recommends that you "build your app so that it requests temporary AWS security credentials dynamically when needed using web identity federation."
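Roughly what that looks like in code. A hedged boto3 sketch, assuming a role ARN and a token from the social provider (both placeholders here):

    import boto3

    provider_token = "<token returned by the Login with Amazon / Facebook / Google flow>"  # placeholder

    sts = boto3.client("sts")
    resp = sts.assume_role_with_web_identity(
        RoleArn="arn:aws:iam::123456789012:role/GameScoreRole",  # hypothetical role
        RoleSessionName="player-session",
        WebIdentityToken=provider_token,
    )
    creds = resp["Credentials"]  # temporary, auto-expiring credentials

    dynamodb = boto3.client(
        "dynamodb",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )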

Company B is launching a new game app for mobile devices. Users will log into the game using their existing social media account to streamline data capture. Company B would like to directly save player data and scoring information from the mobile app to a DynamoDB table named Score Data. When a user saves their game, the progress data will be stored to the Game State S3 bucket. What is the best approach for storing data to DynamoDB and S3? A. Use an EC2 Instance that is launched with an EC2 role providing access to the Score Data DynamoDB table and the Game State S3 bucket that communicates with the mobile app via web services. B. Use temporary security credentials that assume a role providing access to the Score Data DynamoDB table and the Game State S3 bucket using web identity federation. C. Use Login with Amazon allowing users to sign in with an Amazon account providing the mobile app with access to the Score Data DynamoDB table and the Game State S3 bucket. D. Use an IAM user with access credentials assigned a role providing access to the Score Data DynamoDB table and the Game State S3 bucket for distribution with the mobile app.

A. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html

If I want an instance to have a public IP address, which IP address should I use? A. Elastic IP Address B. Class B IP Address C. Class A IP Address D. Dynamic IP Address

C. No permissions

Every user you create in the IAM system starts with _________. A. Partial permissions B. Full permissions C. No permissions

C. http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html

Fill in the blanks: Resources that are created in AWS are identified by a unique identifier called an __________ A. Amazon Resource Number B. Amazon Resource Nametag C. Amazon Resource Name D. Amazon Resource Namespace

D Explanation: A placement group is a logical grouping of instances within a single Availability Zone. Using placement groups enables applications to participate in a low-latency, 10 Gigabits per second (Gbps) network. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html

In order to optimize performance for a compute cluster that requires low inter-node latency, which of the following features should you use? A. Multiple Availability Zones B. AWS Direct Connect C. EC2 Dedicated Instances D. Placement Groups E. VPC private subnets

A. Bootstrap scripts are passed as user data.
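User data is the same mechanism whether an instance is launched directly or via the Auto Scaling group's launch configuration. A run_instances sketch (AMI ID and script are invented; boto3 base64-encodes the script for you):

    import boto3

    ec2 = boto3.client("ec2")
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # hypothetical Amazon Linux AMI
        InstanceType="t2.micro",
        MinCount=1,
        MaxCount=1,
        # cloud-init runs this as root on first boot
        UserData="#!/bin/bash\nyum -y update\n/opt/app/bootstrap.sh\n",
    )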

You need to pass a custom script to new Amazon Linux instances created in your Auto Scaling group. Which feature allows you to accomplish this? A. User data B. EC2Config service C. IAM roles D. AWS Config

B. The root device is typically /dev/sda1 (Linux) or /dev/xvda (Windows). http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/RootDeviceStorage.html

Select the most correct answer: The device name /dev/sda1 (within Amazon EC2) is _____ A. Possible for EBS volumes B. Reserved for the root device C. Recommended for EBS volumes D. Recommended for instance store volumes

D. https://aws.amazon.com/documentation/s3/

What does Amazon S3 stand for? A. Simple Storage Solution. B. Storage Storage Storage (triple redundancy Storage). C. Storage Server Solution. D. Simple Storage Service.

B. https://aws.amazon.com/swf/

What does Amazon SWF stand for? A. Simple Web Flow B. Simple Work Flow C. Simple Wireless Forms D. Simple Web Form

A & C http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-choosing-alias-non-alias.html

Which of the following statements are true about Amazon Route 53 resource records? Choose 2 answers A. An Alias record can map one DNS name to another Amazon Route 53 DNS name. B. A CNAME record can be created for your zone apex. C. An Amazon Route 53 CNAME record can point to any DNS record hosted anywhere. D. TTL can be set for an Alias record in Amazon Route 53. E. An Amazon Route 53 Alias record can point to any DNS record hosted anywhere.

D. Your AWS account automatically has a default security group per VPC and per region for EC2-Classic. If you don't specify a security group when you launch an instance, the instance is automatically associated with the default security group. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html

You must assign each server to at least _____ security group A. 3 B. 2 C. 4 D. 1

Answer: A. Create an Origin Access Identity (OAI) for CloudFront and grant access to the objects in your S3 bucket to that OAI. An Origin Access Identity is a special user that you set up the CloudFront service to use to access your restricted content.
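The grant itself is just a bucket policy naming the OAI as principal. A sketch (OAI ID and bucket name are invented):

    import json
    import boto3

    oai_id = "E2EXAMPLE1ABCDE"  # hypothetical Origin Access Identity ID
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::cloudfront:user/"
                                 "CloudFront Origin Access Identity " + oai_id},
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::training-videos/*",  # hypothetical bucket
        }],
    }
    boto3.client("s3").put_bucket_policy(Bucket="training-videos", Policy=json.dumps(policy))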

You are building a system to distribute confidential training videos to employees. Using CloudFront, what method could be used to serve content that is stored in S3, but not publicly accessible from S3 directly? A. Create an Origin Access Identity (OAI) for CloudFront and grant access to the objects in your S3 bucket to that OAI. B. Add the CloudFront account security group "amazon-cf/amazon-cf-sg" to the appropriate S3 bucket policy. C. Create an Identity and Access Management (IAM) User for CloudFront and grant access to the objects in your S3 bucket to that IAM User. D. Create a S3 bucket policy that lists the CloudFront distribution ID as the Principal and the target bucket as the Amazon Resource Name (ARN).

C

You are building an automated transcription service in which Amazon EC2 worker instances process an uploaded audio file and generate a text file. You must store both of these files in the same durable storage until the text file is retrieved. You do not know what the storage capacity requirements are. Which storage option is both cost-efficient and scalable? A. Multiple Amazon EBS volume with snapshots B. A single Amazon Glacier vault C. A single Amazon S3 bucket D. Multiple instance stores

D. DynamoDB, because it is a durable, low-latency shared data store.

You are configuring your company's application to use Auto Scaling and need to move user state information. Which of the following AWS services provides a shared data store with durability and low latency? A. AWS ElastiCache Memcached B. Amazon Simple Storage Service C. Amazon EC2 instance storage D. Amazon DynamoDB

A

You are deploying an application to track GPS coordinates of delivery trucks in the United States. Coordinates are transmitted from each delivery truck once every three seconds. You need to design an architecture that will enable real-time processing of these coordinates from multiple consumers. Which service should you use to implement data ingestion? A. Amazon Kinesis B. AWS Data Pipeline C. Amazon AppStream D. Amazon Simple Queue Service

B (think lexical key ordering) Explanation: If you anticipate that your workload will consistently exceed 100 requests per second, you should avoid sequential key names. If you must use sequential numbers or date and time patterns in key names, add a random prefix to the key name. The randomness of the prefix more evenly distributes key names across multiple index partitions.
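One common way to add that randomness is to hash the key and use a few characters as the prefix. A tiny sketch (a prefix length of 4 is an arbitrary choice):

    import hashlib

    def prefixed_key(original_key: str) -> str:
        # the first 4 hex chars of a hash spread keys across index partitions
        prefix = hashlib.md5(original_key.encode()).hexdigest()[:4]
        return prefix + "/" + original_key

    print(prefixed_key("2016-01-01/photo-0001.jpg"))  # -> "<4 hex chars>/2016-01-01/photo-0001.jpg"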

You are designing a web application that stores static assets in an Amazon Simple Storage Service (S3) bucket. You expect this bucket to immediately receive over 150 PUT requests per second. What should you do to ensure optimal performance? A. Use multi-part upload. B. Add a random prefix to the key names. C. Amazon S3 will automatically manage performance at this scale. D. Use a predictable naming scheme, such as sequential numbers or date time sequences, in the key names

B https://aws.amazon.com/blogs/aws/fine-grained-access-control-for-amazon-dynamodb/ Here are some of the things that you can build using fine-grained access control: A mobile app that displays information for nearby airports, based on the user's location. The app can access and display attributes such as airline names, arrival times, and flight numbers. However, it cannot access or display pilot names or passenger counts. A mobile game which stores high scores for all users in a single table. Each user can update their own scores, but has no access to anyone else's.
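The fine-grained part is an IAM policy condition keyed on the partition key. A sketch of such a policy document, assuming a hypothetical UserPreferences table whose hash key is the Login with Amazon user ID:

    import json

    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:PutItem",
                       "dynamodb:UpdateItem", "dynamodb:Query"],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/UserPreferences",
            "Condition": {
                # items are only visible when their hash key equals the caller's ID
                "ForAllValues:StringEquals": {
                    "dynamodb:LeadingKeys": ["${www.amazon.com:user_id}"]
                }
            },
        }],
    }
    print(json.dumps(policy, indent=2))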

You are developing a new mobile application and are considering storing user preferences in AWS. This would provide a more uniform cross-device experience to users using multiple mobile devices to access the application. The preference data for each user is estimated to be 50KB in size. Additionally, 5 million customers are expected to use the application on a regular basis. The solution needs to be cost-effective, highly available, scalable and secure. How would you design a solution to meet the above requirements? A. Setup an RDS MySQL instance in 2 availability zones to store the user preference data. Deploy a public-facing application on a server in front of the database to manage security and access credentials. B. Setup a DynamoDB table with an item for each user having the necessary attributes to hold the user preferences. The mobile application will query the user preferences directly from the DynamoDB table. Utilize STS, Web Identity Federation, and DynamoDB Fine Grained Access Control to authenticate and authorize access. C. Setup an RDS MySQL instance with multiple read replicas in 2 availability zones to store the user preference data. The mobile application will query the user preferences from the read replicas. Leverage the MySQL user management and access privilege system to manage security and access credentials. D. Store the user preference data in S3. Setup a DynamoDB table with an item for each user and an item attribute pointing to the user's S3 object. The mobile application will retrieve the S3 URL from DynamoDB and then access the S3 object directly. Utilize STS, Web Identity Federation, and S3 ACLs to authenticate and authorize access.

C. Create roles which have admin permissions in the Dev and Test accounts, and grant the Master account access to those roles. Then, users in the Master account that have "AssumeRole" permission can switch to the roles created in Dev and Test. http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user.html

You are looking to migrate your Development (Dev) and Test environments to AWS. You have decided to use separate AWS accounts to host each environment. You plan to link each account's bill to a Master AWS account using Consolidated Billing. To make sure you keep within budget, you would like to implement a way for administrators in the Master account to have access to stop, delete and/or terminate resources in both the Dev and Test accounts. Identify which option will allow you to achieve this goal. A. Create IAM users in the Master account with full Admin permissions. Create cross-account roles in the Dev and Test accounts that grant the Master account access to the resources in the account by inheriting permissions from the Master account. B. Create IAM users and a cross-account role in the Master account that grants full Admin permissions to the Dev and Test accounts. C. Create IAM users in the Master account. Create cross-account roles in the Dev and Test accounts that have full Admin permissions and grant the Master account access. D. Link the accounts using Consolidated Billing. This will give IAM users in the Master account access to resources in the Dev and Test accounts

A. Why A: option B backs the servers and database up weekly to Glacier using snapshots, but EBS and RDS snapshots live in S3 and can't be sent straight to Glacier; option C drops the Read Replicas that the existing read slaves imply are needed; option D keeps an NFS single point of failure and relies on IP multicast, which a VPC does not support. https://d0.awsstatic.com/whitepapers/Storage/AWS%20Storage%20Services%20Whitepaperv9.pdf Amazon Glacier doesn't suit all storage situations. Listed following are a few storage needs for which you should consider other AWS storage options instead of Amazon Glacier. Data that must be updated very frequently might be better served by a storage solution with lower read/write latencies, such as Amazon EBS, Amazon RDS, Amazon DynamoDB, or relational databases running on EC2.

A 3-tier e-commerce web application is currently deployed on-premises and will be migrated to AWS for greater scalability and elasticity. The web server tier currently shares read-only data using a network distributed file system. The app server tier uses a clustering mechanism for discovery and shared session state that depends on IP multicast. The database tier uses shared-storage clustering to provide database failover capability, and uses several read slaves for scaling. Data on all servers and the distributed file system directory is backed up weekly to off-site tapes. Which AWS storage and database architecture meets the requirements of the application? A. Web servers store read-only data in S3, and copy from S3 to root volume at boot time. App servers share state using a combination of DynamoDB and IP unicast. Database: use RDS with Multi-AZ deployment and one or more Read Replicas. Backup: web and app servers backed up weekly via AMIs, database backed up via DB snapshots. B. Web servers store read-only data in S3, and copy from S3 to root volume at boot time. App servers share state using a combination of DynamoDB and IP unicast. Database: use RDS with Multi-AZ deployment and one or more Read Replicas. Backup: web servers, app servers, and database backed up weekly to Glacier using snapshots. C. Web servers store read-only data in S3 and copy from S3 to root volume at boot time. App servers share state using a combination of DynamoDB and IP unicast. Database: use RDS with Multi-AZ deployment. Backup: web and app servers backed up weekly via AMIs, database backed up via DB snapshots. D. Web servers store read-only data in an EC2 NFS server, mounted to each web server at boot time. App servers share state using a combination of DynamoDB and IP multicast. Database: use RDS with Multi-AZ deployment and one or more Read Replicas. Backup: web and app servers backed up weekly via AMIs, database backed up via DB snapshots.

C Explanation: Geolocation routing lets you choose the resources that serve your traffic based on the geographic location of your users, meaning the location from which DNS queries originate. For example, you might want all queries from Africa to be routed to a web server with an IP address of 192.0.2.111. Another possible use is for balancing load across endpoints in a predictable, easy-to-manage way, so that each user location is consistently routed to the same endpoint. http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html#routing-policy-weighted
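What a geolocation record set looks like through boto3, as a sketch (zone ID, names and IP are invented; in practice you would also add a default record with "CountryCode": "*" for everyone else):

    import boto3

    route53 = boto3.client("route53")
    route53.change_resource_record_sets(
        HostedZoneId="Z3EXAMPLE",  # hypothetical hosted zone
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com.",
                "Type": "A",
                "SetIdentifier": "europe-users",         # required for routing policies
                "GeoLocation": {"ContinentCode": "EU"},  # queries from Europe land here
                "TTL": 60,
                "ResourceRecords": [{"Value": "192.0.2.111"}],
            },
        }]},
    )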

A US-based company is expanding their web presence into Europe. The company wants to extend their AWS infrastructure from Northern Virginia (us-east-1) into the Dublin (eu-west-1) region. Which of the following options would enable an equivalent experience for users on both continents? A. Use a public-facing load balancer per region to load-balance web traffic, and enable HTTP health checks. B. Use a public-facing load balancer per region to load-balance web traffic, and enable sticky sessions. C. Use Amazon Route 53, and apply a geolocation routing policy to distribute traffic across both regions. D. Use Amazon Route 53, and apply a weighted routing policy to distribute traffic across both regions.

A Explanation: http://docs.aws.amazon.com/AmazonVPC/latest/PeeringGuide/vpc-pg.pdf#create-vpc-peering-connection

A company has an AWS account that contains three VPCs (Dev, Test, and Prod) in the same region. Test is peered to both Prod and Dev. All VPCs have non-overlapping CIDR blocks. The company wants to push minor code releases from Dev to Prod to speed up time to market. Which of the following options helps the company accomplish this? A. Create a new peering connection Between Prod and Dev along with appropriate routes. B. Create a new entry to Prod in the Dev route table using the peering connection as the target. C. Attach a second gateway to Dev. Add a new entry in the Prod route table identifying the gateway as the target. D. The VPCs have non-overlapping CIDR blocks in the same account. The route tables contain local routes for all VPCs.

B & E

A company has configured and peered two VPCs: VPC-1 and VPC-2. VPC-1 contains only private subnets, and VPC-2 contains only public subnets. The company uses a single AWS Direct Connect connection and private virtual interface to connect their on-premises network with VPC-1. Which two methods increase the fault tolerance of the connection to VPC-1? Choose 2 answers A. Establish a hardware VPN over the internet between VPC-2 and the on-premises network. B. Establish a hardware VPN over the internet between VPC-1 and the on-premises network. C. Establish a new AWS Direct Connect connection and private virtual interface in the same region as VPC-2. D. Establish a new AWS Direct Connect connection and private virtual interface in a different AWS region than VPC-1. E. Establish a new AWS Direct Connect connection and private virtual interface in the same AWS region as VPC-1

TODO: research answer. B looks most likely: CloudFront plus S3 static website hosting plus DynamoDB has no web-server fleet to scale, so it handles the sudden spike at minimal cost, while A, C and D all require running and auto-scaling EC2 web servers.

A company is building a voting system for a popular TV show; viewers will watch the performances, then visit the show's website to vote for their favorite performer. It is expected that in a short period of time after the show has finished, the site will receive millions of visitors. The visitors will first log in to the site using their Amazon.com credentials and then submit their vote. After the voting is completed, the page will display the vote totals. The company needs to build the site such that it can handle the rapid influx of traffic while maintaining good performance, but also wants to keep costs to a minimum. Which of the design patterns below should they use? A. Use CloudFront and an Elastic Load Balancer in front of an auto-scaled set of web servers; the web servers will first call the Login with Amazon service to authenticate the user, then process the user's vote and store the result in a Multi-AZ Relational Database Service instance. B. Use CloudFront and the static website hosting feature of S3 with the Javascript SDK to call the Login with Amazon service to authenticate the user; use IAM Roles to gain permissions to a DynamoDB table to store the user's vote. C. Use CloudFront and an Elastic Load Balancer in front of an auto-scaled set of web servers; the web servers will first call the Login with Amazon service to authenticate the user; the web servers will process the user's vote and store the result in a DynamoDB table using IAM Roles for EC2 instances to gain permissions to the DynamoDB table. D. Use CloudFront and an Elastic Load Balancer in front of an auto-scaled set of web servers; the web servers will first call the Login with Amazon service to authenticate the user; the web servers will process the user's vote and store the result in an SQS queue using IAM Roles for EC2 instances to gain permissions to the SQS queue. A set of application servers will then retrieve the items from the queue and store the result in a DynamoDB table.

B Explanation: Use roles for applications that run on Amazon EC2 instances Applications that run on an Amazon EC2 instance need credentials in order to access other AWS services. To provide credentials to the application in a secure way, use IAM roles. A role is an entity that has its own set of permissions, but that isn't a user or group. Roles also don't have their own permanent set of credentials the way IAM users do. In the case of Amazon EC2, IAM dynamically provides temporary credentials to the EC2 instance, and these credentials are automatically rotated for you. http://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#use-roles-with-ec2
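The nice part in code: with a role attached there is nothing credential-shaped to write down at all. A minimal sketch:

    import boto3

    # On an EC2 instance with an IAM role attached, boto3 fetches temporary
    # credentials from the instance metadata service and rotates them
    # automatically; no keys appear in code or config files.
    s3 = boto3.client("s3")
    for bucket in s3.list_buckets()["Buckets"]:
        print(bucket["Name"])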

A company is building software on AWS that requires access to various AWS services. Which configuration should be used to ensure that AWS credentials (i.e., Access Key ID/Secret Access Key combination) are not compromised? A. Enable Multi-Factor Authentication for your AWS root account. B. Assign an IAM role to the Amazon EC2 instance. C. Store the AWS Access Key ID/Secret Access Key combination in software comments. D. Assign an IAM user to the Amazon EC2 Instance.

B Explanation: When is it appropriate to use DynamoDB instead of a relational database? From our own experience designing and operating a highly available, highly scalable ecommerce platform, we have come to realize that relational databases should only be used when an application really needs the complex query, table join and transaction capabilities of a full-blown relational database. In all other cases, when such relational features are not needed, a NoSQL database service like DynamoDB offers a simpler, more available, more scalable and ultimately a lower cost solution.

A company is deploying a new two-tier web application in AWS. The company has limited staff and requires high availability, and the application requires complex queries and table joins. Which configuration provides the solution for the company's requirements? A. MySQL Installed on two Amazon EC2 Instances in a single Availability Zone B. Amazon RDS for MySQL with Multi-AZ C. Amazon ElastiCache D. Amazon DynamoDB

B

A company is deploying a two-tier, highly available web application to AWS. Which service provides durable storage for static content while utilizing lower overall CPU resources for the web tier? A. Amazon EBS volume B. Amazon S3 C. Amazon EC2 instance store D. Amazon RDS instance

A & D Explanation: http://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRoleWithSAML.html

A company is preparing to give AWS Management Console access to developers. Company policy mandates identity federation and role-based access control. Roles are currently assigned using groups in the corporate Active Directory. What combination of the following will give developers access to the AWS console? Choose 2 answers A. AWS Directory Service AD Connector B. AWS Directory Service Simple AD C. AWS Identity and Access Management groups D. AWS Identity and Access Management roles E. AWS Identity and Access Management users

A & B & E

A company is storing data on Amazon Simple Storage Service (S3). The company's security policy mandates that data is encrypted at rest. Which of the following methods can achieve this? Choose 3 answers A. Use Amazon S3 server-side encryption with AWS Key Management Service managed keys. B. Use Amazon S3 server-side encryption with customer-provided keys. C. Use Amazon S3 server-side encryption with EC2 key pair. D. Use Amazon S3 bucket policies to restrict access to the data at rest. E. Encrypt the data on the client-side before ingesting to Amazon S3 using their own master key. F. Use SSL to encrypt the data while in transit to Amazon S3.

B. IAM roles are global, not per-region, so the existing role can simply be assigned in the new region; D doesn't really get them what they want.

A company needs to deploy services to an AWS region which they have not previously used. The company currently has an AWS Identity and Access Management (IAM) role for the Amazon EC2 instances, which permits the instance to have access to Amazon DynamoDB. The company wants their EC2 instances in the new region to have the same privileges. How should the company achieve this? A. Create a new IAM role and associated policies within the new region B. Assign the existing IAM role to the Amazon EC2 instances in the new region C. Copy the IAM role and associated policies to the new region and attach it to the instances D. Create an Amazon Machine Image (AMI) of the instance and copy it to the desired region using the AMI Copy feature

B Explanation: To enable integration, you need to ensure that your domain is reachable via an Amazon Virtual Private Cloud (VPC); this could mean that Active Directory domain controllers for your domain are running on Amazon EC2 instances, or that they are reachable via a VPN connection and are located in your on-premises network.

A company needs to deploy virtual desktops to its customers in a virtual private cloud, leveraging existing security controls. Which set of AWS services and features will meet the company's requirements? A. Virtual Private Network connection, AWS Directory Services, and ClassicLink B. Virtual Private Network connection, AWS Directory Services, and Amazon Workspaces C. AWS Directory Service, Amazon Workspaces, and AWS Identity and Access Management D. Amazon Elastic Compute Cloud, and AWS Identity and Access Management

B & E Explanation: B: Amazon RDS provides metrics in real time for the operating system (OS) that your DB instance runs on. You can view the metrics for your DB instance using the console, or consume the Enhanced Monitoring JSON output from CloudWatch Logs in a monitoring system of your choice E: Use Amazon RDS DB events to monitor failovers. For example, you can be notified by text message or email when a DB instance fails over. Amazon RDS uses the Amazon Simple Notification Service (Amazon SNS) to provide notification when an Amazon RDS event occurs.
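Wiring the two together is a CloudWatch alarm whose action is an SNS topic. A sketch (topic name, email address, DB identifier and threshold are all invented):

    import boto3

    sns = boto3.client("sns")
    topic_arn = sns.create_topic(Name="ops-alerts")["TopicArn"]
    sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="ops@example.com")

    boto3.client("cloudwatch").put_metric_alarm(
        AlarmName="rds-write-iops-high",
        Namespace="AWS/RDS",
        MetricName="WriteIOPS",  # ReadIOPS gets a twin alarm
        Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "mydb"}],
        Statistic="Average",
        Period=60,
        EvaluationPeriods=5,
        Threshold=1000.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[topic_arn],  # fires the SNS notification
    )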

A company needs to monitor the read and write IOPS metrics for their AWS MySQL RDS instance and send real-time alerts to their operations team. Which AWS services can accomplish this? Choose 2 answers A. Amazon Simple Email Service B. Amazon CloudWatch C. Amazon Simple Queue Service D. Amazon Route 53 E. Amazon Simple Notification Service

B https://aws.amazon.com/sqs/faqs/
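The shape of that decoupling in boto3, as a sketch (queue name, message body and the flush routine are invented):

    import boto3

    sqs = boto3.client("sqs")
    queue_url = sqs.create_queue(QueueName="mainframe-writes")["QueueUrl"]

    # application side: enqueue instead of writing to the database directly
    sqs.send_message(QueueUrl=queue_url, MessageBody='{"order_id": 42, "qty": 3}')

    # worker side: drain the queue at a rate the on-premises database can absorb
    resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        # write_to_mainframe(msg["Body"])  # hypothetical flush routine
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])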

A customer has a 10 GB AWS Direct Connect connection to an AWS region where they have a web application hosted on Amazon Elastic Compute Cloud (EC2). The application has dependencies on an on-premises mainframe database that uses a BASE (Basically Available, Soft state, Eventual consistency) rather than an ACID (Atomicity, Consistency, Isolation, Durability) consistency model. The application is exhibiting undesirable behavior because the database is not able to handle the volume of writes. How can you reduce the load on your on-premises database resources in the most cost-effective way? A. Use an Amazon Elastic Map Reduce (EMR) S3DistCp as a synchronization mechanism between the on-premises database and a Hadoop cluster on AWS. B. Modify the application to write to an Amazon SQS queue and develop a worker process to flush the queue to the on-premises database. C. Modify the application to use DynamoDB to feed an EMR cluster which uses a map function to write to the on-premises database. D. Provision an RDS read-replica database on AWS to handle the writes and synchronize the two databases using Data Pipeline.

A Explanation: http://docs.aws.amazon.com/storagegateway/latest/userguide/storage-gateway-cached-concepts.html

A customer has a single 3-TB volume on-premises that is used to hold a large repository of images and print layout files. This repository is growing at 500 GB a year and must be presented as a single logical volume. The customer is becoming increasingly constrained with their local storage capacity and wants an off-site backup of this data, while maintaining low-latency access to their frequently accessed data. Which AWS Storage Gateway configuration meets the customer requirements? A. Gateway-Cached volumes with snapshots scheduled to Amazon S3 B. Gateway-Stored volumes with snapshots scheduled to Amazon S3 C. Gateway-Virtual Tape Library with snapshots to Amazon S3 D. Gateway-Virtual Tape Library with snapshots to Amazon Glacier

D & E & F

A customer implemented AWS Storage Gateway with a gateway-cached volume at their main office. An event takes the link between the main and branch office offline. Which methods will enable the branch office to access their data? Choose 3 answers A. Use a HTTPS GET to the Amazon S3 bucket where the files are located. B. Restore by implementing a lifecycle policy on the Amazon S3 bucket. C. Make an Amazon Glacier Restore API call to load the files into another Amazon S3 bucket within four to six hours. D. Launch a new AWS Storage Gateway instance AMI in Amazon EC2, and restore from a gateway snapshot. E. Create an Amazon EBS volume from a gateway snapshot, and mount it to an Amazon EC2 instance. F. Launch an AWS Storage Gateway virtual iSCSI device at the branch office, and restore from a gateway snapshot.

D
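CNAMEs are not allowed at the zone apex; an alias A record is Route 53's way of pointing the apex at a load balancer. A boto3 sketch (zone ID and ELB DNS name are invented; note the AliasTarget HostedZoneId is the load balancer's canonical zone, not your own):

    import boto3

    route53 = boto3.client("route53")
    route53.change_resource_record_sets(
        HostedZoneId="Z1EXAMPLE",  # hypothetical zone for example.com
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "example.com.",  # the zone apex
                "Type": "A",
                "AliasTarget": {
                    # the ELB's canonical hosted zone ID (illustrative value)
                    "HostedZoneId": "Z35SXDOTRQ7X7K",
                    "DNSName": "my-elb-1234567890.us-east-1.elb.amazonaws.com.",
                    "EvaluateTargetHealth": False,
                },
            },
        }]},
    )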

A customer is hosting their company website on a cluster of web servers that are behind a public-facing load balancer. The customer also uses Amazon Route 53 to manage their public DNS. How should the customer configure the DNS zone apex record to point to the load balancer? A. Create an A record pointing to the IP address of the load balancer B. Create a CNAME record pointing to the load balancer DNS name. C. Create a CNAME record aliased to the load balancer DNS name. D. Create an A record aliased to the load balancer DNS name

D

A customer is running a multi-tier web application farm in a virtual private cloud (VPC) that is not connected to their corporate network. They are connecting to the VPC over the Internet to manage all of their Amazon EC2 instances running in both the public and private subnets. They have only authorized the bastion-security-group with Microsoft Remote Desktop Protocol (RDP) access to the application instance security groups, but the company wants to further limit administrative access to all of the instances in the VPC. Which of the following Bastion deployment scenarios will meet this requirement? A. Deploy a Windows Bastion host on the corporate network that has RDP access to all instances in the VPC. B. Deploy a Windows Bastion host with an Elastic IP address in the public subnet and allow SSH access to the bastion from anywhere. C. Deploy a Windows Bastion host with an Elastic IP address in the private subnet, and restrict RDP access to the bastion from only the corporate public IP addresses. D. Deploy a Windows Bastion host with an auto-assigned Public IP address in the public subnet, and allow RDP access to the bastion from only the corporate public IP addresses.

B & D Explanation: http://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_cross-account-with-roles.html http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/consolidated-billing.html

A customer needs corporate IT governance and cost oversight of all AWS resources consumed by its divisions. The divisions want to maintain administrative control of the discrete AWS resources they consume and keep those resources separate from the resources of other divisions. Which of the following options, when used together, will support the autonomy/control of divisions while enabling corporate IT to maintain governance and cost oversight? Choose 2 answers A. Use AWS Consolidated Billing and disable AWS root account access for the child accounts. B. Enable IAM cross-account access for all corporate IT administrators in each child account. C. Create separate VPCs for each division within the corporate IT AWS account. D. Use AWS Consolidated Billing to link the divisions' accounts to a parent corporate account. E. Write all child AWS CloudTrail and Amazon CloudWatch logs to each child account's Amazon S3 'Log' bucket.

B Explanation: If it's just for an internal audit, then server access logging, I assume, is sufficient: http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerLogs.html For external audits I would go for CloudTrail: http://docs.aws.amazon.com/AmazonS3/latest/dev/cloudtrail-logging.html

A customer wants to track access to their Amazon Simple Storage Service (S3) buckets and also use this information for their internal security and access audits. Which of the following will meet the Customer requirement? A. Enable AWS CloudTrail to audit all Amazon S3 bucket access. B. Enable server access logging for all required Amazon S3 buckets. C. Enable the Requester Pays option to track access via AWS Billing D. Enable Amazon S3 event notifications for Put and Post.

Answer: C - Use SQS to decouple - Use AWS Mobile Push for messages https://docs.aws.amazon.com/sns/latest/dg/SNSMobilePush.html

A large real-estate brokerage is exploring the option of adding a cost-effective location-based alert to their existing mobile application. The application backend infrastructure currently runs on AWS. Users who opt in to this service will receive alerts on their mobile device regarding real-estate offers in proximity to their location. For the alerts to be relevant, delivery time needs to be in the low minute count; the existing mobile app has 5 million users across the US. Which one of the following architectural suggestions would you make to the customer? A. The mobile application will submit its location to a web service endpoint utilizing Elastic Load Balancing and EC2 instances; DynamoDB will be used to store and retrieve relevant offers; EC2 instances will communicate with mobile carriers/device providers to push alerts back to the mobile application. B. Use AWS Direct Connect or VPN to establish connectivity with mobile carriers; EC2 instances will receive the mobile applications' location through the carrier connection; RDS will be used to store and retrieve relevant offers; EC2 instances will communicate with mobile carriers to push alerts back to the mobile application. C. The mobile application will send device location using SQS; EC2 instances will retrieve the relevant offers from DynamoDB; AWS Mobile Push will be used to send offers to the mobile application. D. The mobile application will send device location using AWS Mobile Push; EC2 instances will retrieve the relevant offers from DynamoDB; EC2 instances will communicate with mobile carriers/device providers to push alerts back to the mobile application.

D Explanation: Web identity federation - You can let users sign in using a well-known third-party identity provider such as Login with Amazon, Facebook, Google, or any OpenID Connect (OIDC) 2.0 compatible provider. AWS STS web identity federation supports Login with Amazon, Facebook, Google, and any OIDC-compatible identity provider.

A photo-sharing service stores pictures in Amazon Simple Storage Service (S3) and allows application sign-in using an OpenID Connect-compatible identity provider. Which AWS Security Token Service approach to temporary access should you use for the Amazon S3 operations? A. SAML-based Identity Federation B. Cross-Account Access C. AWS Identity and Access Management roles D. Web Identity Federation

C

A t2.medium EC2 instance type must be launched with what type of Amazon Machine Image (AMI)? A. An Instance store Hardware Virtual Machine AMI B. An Instance store Paravirtual AMI C. An Amazon EBS-backed Hardware Virtual Machine AMI D. An Amazon EBS-backed Paravirtual AMI

D. TODO: look into this question A - promiscuous mode is not allowed B - doesn't make sense because you cannot route to the internet over VPC peered connections. You can only route into the peered VPC but not further. C - makes sense but there is no route command to route the traffic D - seems to be correct because: http://jayendrapatil.com/aws-intrusion-detection-prevention-idsips/

A web company is looking to implement an intrusion detection and prevention system into their deployed VPC. This platform should have the ability to scale to thousands of instances running inside of the VPC. How should they architect their solution to achieve these goals? A. Configure an instance with monitoring software and the elastic network interface (ENI) set to promiscuous mode packet sniffing to see all traffic across the VPC. B. Create a second VPC and route all traffic from the primary application VPC through the second VPC where the scalable virtualized IDS/IPS platform resides. C. Configure servers running in the VPC using the host-based 'route' commands to send all traffic through the platform to a scalable virtualized IDS/IPS. D. Configure each host with an agent that collects all network traffic and sends that traffic to the IDS/IPS platform for inspection.

B - One Stack - Containing two layers (one ELB and one EC2 layer) - One recipe since only the EC2 layer needs updates https://docs.aws.amazon.com/opsworks/latest/userguide/welcome_classic.html

A web startup runs its very successful social news application on Amazon EC2 with an Elastic Load Balancer, an Auto Scaling group of Java/Tomcat application servers, and DynamoDB as data store. The main web application best runs on m2.xlarge instances since it is highly memory-bound. Each new deployment requires semi-automated creation and testing of a new AMI for the application servers, which takes quite a while and is therefore only done once per week. Recently, a new chat feature has been implemented in node.js and waits to be integrated in the architecture. First tests show that the new component is CPU-bound. Because the company has some experience with using Chef, they decided to streamline the deployment process and use AWS OpsWorks as an application lifecycle tool to simplify management of the application and reduce the deployment cycles. What configuration in AWS OpsWorks is necessary to integrate the new chat module in the most cost-efficient and flexible way? A. Create one AWS OpsWorks stack, create one AWS OpsWorks layer, create one custom recipe B. Create one AWS OpsWorks stack, create two AWS OpsWorks layers, create one custom recipe C. Create two AWS OpsWorks stacks, create two AWS OpsWorks layers, create one custom recipe D. Create two AWS OpsWorks stacks, create two AWS OpsWorks layers, create two custom recipes

C. A private IPv4 address is an IP address that's not reachable over the Internet. You can use private IPv4 addresses for communication between instances in the same network (EC2-Classic or a VPC). A public IP address is an IPv4 address that's reachable from the Internet. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-instance-addressing.html

All Amazon EC2 instances are assigned two IP addresses at launch, out of which one can only be reached from within the Amazon EC2 network? A. Multiple IP address B. Public IP address C. Private IP address D. Elastic IP Address

B Explanation: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html To migrate data between encrypted and unencrypted volumes 1. Create your destination volume (encrypted or unencrypted, depending on your need) by following the procedures in Creating an Amazon EBS Volume. 2. Attach the destination volume to the instance that hosts the data to migrate. For more information, see Attaching an Amazon EBS Volume to an Instance. 3. Make the destination volume available by following the procedures in Making an Amazon EBS Volume Available for Use. For Linux instances, you can create a mount point at /mnt/destination and mount the destination volume there. 4. Copy the data from your source directory to the destination volume. It may be most convenient to use a bulk-copy utility for this.
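Scripted, the migration path from the explanation above looks roughly like this (AZ, size, instance ID and device name are invented):

    import boto3

    ec2 = boto3.client("ec2")
    vol = ec2.create_volume(
        AvailabilityZone="us-east-1a",  # must match the instance's AZ
        Size=100,                       # GiB, sized to hold the migrated data
        VolumeType="gp2",
        Encrypted=True,                 # the new volume is encrypted at rest
    )
    ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])
    ec2.attach_volume(VolumeId=vol["VolumeId"],
                      InstanceId="i-0123456789abcdef0",  # hypothetical instance
                      Device="/dev/sdf")
    # then mount /dev/sdf on the instance, copy the data across (e.g. rsync),
    # and finally detach and delete the old unencrypted volume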

An existing application stores sensitive information on a non-boot Amazon EBS data volume attached to an Amazon Elastic Compute Cloud instance. Which of the following approaches would protect the sensitive data on an Amazon EBS volume? A. Upload your customer keys to AWS CloudHSM. Associate the Amazon EBS volume with AWS CloudHSM. Re-mount the Amazon EBS volume. B. Create and mount a new, encrypted Amazon EBS volume. Move the data to the new volume. Delete the old Amazon EBS volume. C. Unmount the EBS volume. Toggle the encryption attribute to True. Re-mount the Amazon EBS volume. D. Snapshot the current Amazon EBS volume. Restore the snapshot to a new, encrypted Amazon EBS volume. Mount the Amazon EBS volume

D Explanation: Using the legacy S3 based AMIs, either of the above terminates the instance and you lose all local and ephemeral storage (boot disk and /mnt) forever. Hope you remembered to save the important stuff elsewhere!

When an EC2 instance that is backed by an S3-based AMI is terminated, what happens to the data on the root volume? A. Data is automatically saved as an EBS snapshot. B. Data is automatically saved as an EBS volume. C. Data is unavailable until the instance is restarted. D. Data is automatically deleted.

A. ec2-run-instances ami_id [-n instance_count] [-k keypair] [-g group [-g group ...]] [-d user_data | -f filename] [--instance-type instance_type] [--availability-zone zone] [--placement-group group_name] [--tenancy tenancy] [--kernel kernel_id] [--ramdisk ramdisk_id] [--block-device-mapping mapping] [--monitor] [--subnet subnet_id] [--disable-api-termination] [--instance-initiated-shutdown-behavior behavior] [--private-ip-address ip_address] [--client-token token] [--secondary-private-ip-address ip_address | --secondary-private-ip-address-count count] [--network-attachment attachment] [--iam-profile arn | name] [--ebs-optimized] [--associate-public-ip-address Boolean]

If I write the below command, what does it do? ec2-run ami-e3a5408a -n 20 -g appserver A. Start twenty instances as members of appserver group. B. Creates 20 rules in the security group named appserver C. Terminate twenty instances as members of appserver group. D. Start 20 security groups

Answer: D. hypervisor visible metrics such as CPU utilization Amazon needs to know this anyway to provide IaaS, so it seems natural that they share it.

In the basic monitoring package for EC2, Amazon CloudWatch provides the following metrics: A. web server visible metrics such as number failed transaction requests B. operating system visible metrics such as memory utilization C. database visible metrics such as number of connections D. hypervisor visible metrics such as CPU utilization

C & E. E is clearly correct; C is true as well.

Which of the following are valid statements about Amazon S3? Choose 2 answers A. S3 provides read-after-write consistency for any type of PUT or DELETE. B. Consistency is not guaranteed for any type of PUT or DELETE. C. A successful response to a PUT request only occurs when a complete object is saved. D. Partially saved objects are immediately readable with a GET after an overwrite PUT. E. S3 provides eventual consistency for overwrite PUTS and DELETES.

Answer: A. enable S3 versioning on the bucket As the name suggests, S3 versioning means that all versions of a file are kept and retrievable at a later date (by making a request to the bucket using the object ID and also the version number). The only charge for having this enabled is the extra storage you will consume. When an object is deleted, it will still be accessible, just not visible.
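Turning versioning on is a one-liner. A sketch (bucket name invented):

    import boto3

    boto3.client("s3").put_bucket_versioning(
        Bucket="my-example-bucket",  # hypothetical bucket
        VersioningConfiguration={"Status": "Enabled"},
    )
    # old versions (and delete markers) are then listed via list_object_versions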

To protect S3 data from both accidental deletion and accidental overwriting, you should: A. enable S3 versioning on the bucket B. access S3 data using only signed URLs C. disable S3 delete using an IAM bucket policy D. enable S3 Reduced Redundancy Storage E. enable Multi-Factor Authentication (MFA) protected access

D. https://aws.amazon.com/s3/reduced-redundancy/

What does RRS stand for when talking about S3? A. Redundancy Removal System B. Relational Rights Storage C. Regional Rights Standard D. Reduced Redundancy Storage

D. Mapping a device to 'none' just prevents it from attaching.

What does specifying the mapping /dev/sdc=none when launching an instance do? A. Prevents /dev/sdc from creating the instance. B. Prevents /dev/sdc from deleting the instance. C. Set the value of /dev/sdc to 'zero'. D. Prevents /dev/sdc from attaching to the instance.

A. In order to reduce storage costs, you can use reduced redundancy storage for noncritical, reproducible data at lower levels of redundancy than Amazon S3 provides with standard storage. The lower level of redundancy results in less durability and availability, but in many cases, the lower costs can make reduced redundancy storage an acceptable storage solution. http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingRRS.html

What is the Reduced Redundancy option in Amazon S3? A. Less redundancy for a lower cost. B. It doesn't exist in Amazon S3, but in Amazon EBS. C. It allows you to destroy any copy of your files outside a specific jurisdiction. D. It doesn't exist at all

A. RRS is designed for 99.99% durability, not the eleven nines of standard S3 storage.

What is the durability of S3 RRS? A. 99.99% B. 99.95% C. 99.995% D. 99.999999999%

C

What is the minimum time Interval for the data that Amazon CloudWatch receives and aggregates? A. One second B. Five seconds C. One minute D. Three minutes E. Five minutes

A & C Explanation: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html

Which of the following instance types are available as Amazon EBS-backed only? Choose 2 answers A. General purpose T2 B. General purpose M3 C. Compute-optimized C4 D. Compute-optimized C3 E. Storage-optimized I2

C Explanation: You are allowed one EIP attached to a running instance at no charge. Otherwise, it will incur a small fee. In this case, the instance is stopped, and thus the EIP will be billed at the normal rate. http://aws.amazon.com/ec2/pricing/

When will you incur costs with an Elastic IP address (EIP)? A. When an EIP is allocated. B. When it is allocated and associated with a running instance. C. When it is allocated and associated with a stopped instance. D. Costs are incurred regardless of whether the EIP is associated with a running instance.

D. Primary https://aws.amazon.com/rds/faqs/ Q: What do "primary" and "standby" mean in the context of a Multi-AZ deployment? When you run a DB instance as a Multi-AZ deployment, the "primary" serves database writes and reads. In addition, Amazon RDS provisions and maintains a "standby" behind the scenes, which is an up-to-date replica of the primary. The standby is "promoted" in failover scenarios. After failover, the standby becomes the primary and accepts your database operations. You do not interact directly with the standby (e.g. for read operations) at any point prior to promotion. More about concept: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html

When you run a DB Instance as a Multi-AZ deployment, the "_____" serves database writes and reads A. secondary B. backup C. stand by D. primary

C Explanation: Although you can only access instance metadata and user data from within the instance itself, the data is not protected by cryptographic methods http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instancemetadata.html#instancedata-data-retrieval
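The metadata service answers plain HTTP GETs from inside the instance; no SDK or credentials needed. A sketch:

    import urllib.request

    # reachable only from the instance itself
    base = "http://169.254.169.254/latest/meta-data/"
    for path in ("instance-id", "instance-type", "placement/availability-zone"):
        with urllib.request.urlopen(base + path) as resp:
            print(path, "=", resp.read().decode())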

Which Amazon Elastic Compute Cloud feature can you query from within the instance to access instance properties? A. Instance user data B. Resource tags C. Instance metadata D. Amazon Machine Image

A & C. S3 is secure by default; these add further layers of access restriction.

Which features can be used to restrict access to data in S3? Choose 2 answers A. Set an S3 ACL on the bucket or the object. B. Create a CloudFront distribution for the bucket. C. Set an S3 bucket policy. D. Enable IAM Identity Federation E. Use S3 Virtual Hosting

Answer: B. Decommissioning of storage devices using industry-standard practices Clearly there is no way you could do this, so AWS take care.

Which is an operational process performed by AWS for data security? A. AES-256 encryption of data stored on any shared storage device B. Decommissioning of storage devices using industry-standard practices C. Background virus scans of EBS volumes and EBS snapshots D. Replication of data across multiple AWS Regions E. Secure wiping of EBS data when an EBS volume is unmounted

C Explanation: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-deleting-snapshot.html

Which of the following approaches provides the lowest cost for Amazon Elastic Block Store snapshots while giving you the ability to fully restore data? A. Maintain two snapshots: the original snapshot and the latest incremental snapshot. B. Maintain a volume snapshot; subsequent snapshots will overwrite one another. C. Maintain a single snapshot; the latest snapshot is both incremental and complete. D. Maintain the most current snapshot, archive the original and incremental to Amazon Glacier.

A & C & E Explanation: You can use Auto Scaling or other AWS services to launch the On-Demand instances that use your Reserved Instance benefits. For information about launching On-Demand instances, see Launch Your Instance. For information about launching instances using Auto Scaling, see the Auto Scaling User Guide. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts-on-demand-reserved-instances.html https://forums.aws.amazon.com/thread.jspa?threadID=56501

Which of the following are characteristics of a reserved instance? Choose 3 answers A. It can be migrated across Availability Zones B. It is specific to an Amazon Machine Image (AMI) C. It can be applied to instances launched by Auto Scaling D. It is specific to an instance Type E. It can be used to lower Total Cost of Ownership (TCO) of a system

A & C & E Explanation: A: have a trail with the Apply trail to all regions option enabled. C: have multiple single-region trails. E: Log files from all the regions can be delivered to a single S3 bucket. Global service events are always delivered to trails that have the Apply trail to all regions option enabled. Events are delivered from a single region to the bucket for the trail. This setting cannot be changed. If you have a single-region trail, you should enable the Include global services option. If you have multiple single-region trails, you should enable the Include global services option in only one of the trails. D is incorrect: once enabled, CloudTrail applies to all supported services; individual services cannot be selected.
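Setting that up via boto3, as a sketch (trail and bucket names invented; the bucket needs a policy that lets CloudTrail write to it):

    import boto3

    cloudtrail = boto3.client("cloudtrail")
    cloudtrail.create_trail(
        Name="org-audit-trail",                  # hypothetical
        S3BucketName="central-cloudtrail-logs",  # single aggregation bucket
        IsMultiRegionTrail=True,                 # the "apply trail to all regions" option
        IncludeGlobalServiceEvents=True,         # capture IAM/STS and other global events
    )
    cloudtrail.start_logging(Name="org-audit-trail")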

Which of the following are true regarding AWS CloudTrail? Choose 3 answers A. CloudTrail is enabled globally B. CloudTrail is enabled by default C. CloudTrail is enabled on a per-region basis D. CloudTrail is enabled on a per-service basis. E. Logs can be delivered to a single Amazon S3 bucket for aggregation. F. CloudTrail is enabled for all available services within a region. G. Logs can only be processed and delivered to the region in which they are generated.

A & B

Which of the following are true regarding encrypted Amazon Elastic Block Store (EBS) volumes? Choose 2 answers A. Supported on all Amazon EBS volume types B. Snapshots are automatically encrypted C. Available to all instance types D. Existing volumes can be encrypted E. shared volumes can be encrypted

B & C & D Ideal Usage Patterns Amazon DynamoDB is ideal for existing or new applications that need a flexible NoSQL database with low read and write latencies, and the ability to scale storage and throughput up or down as needed without code changes or downtime. Its use cases require a highly available and scalable database because downtime or performance degradation has an immediate negative impact on an organization's business, e.g. mobile apps, gaming, digital ad serving, live voting and audience interaction for live events, sensor networks, log ingestion, access control for web-based content, metadata storage for Amazon S3 objects, e-commerce shopping carts, and web session management.

Which of the following are use cases for Amazon DynamoDB? Choose 3 answers A. Storing BLOB data. B. Managing web sessions. C. Storing JSON documents. D. Storing metadata for Amazon S3 objects. E. Running relational joins and complex updates. F. Storing large amounts of infrequently accessed data.

A & D Explanation: SNS Supported Endpoints Email Notifications Amazon SNS provides the ability to send Email notifications SMS Notifications Amazon SNS provides the ability to send and receive Short Message Service (SMS) notifications to SMS-enabled mobile phones and smart phones http://docs.aws.amazon.com/sns/latest/dg/welcome.html

Which of the following notification endpoints or clients are supported by Amazon Simple Notification Service? Choose 2 answers A. Email B. CloudFront distribution C. File Transfer Protocol D. Short Message Service E. Simple Network Management Protocol

A memory utilization = RAM Explanation: CloudWatch relies on the information provided by the hypervisor, which can only see the most hardware-sided part of the instance's status, including CPU usage (but not load), total memory size (but not memory usage), number of I/O operations on the hard disks (but not its partition layout and space usage) and network traffic (but not the processes generating it).
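Which is why memory has to be published from inside the instance as a custom metric. A Linux-only sketch (namespace and metric name are invented; MemAvailable needs kernel 3.14+):

    import boto3

    with open("/proc/meminfo") as f:
        info = dict(line.split(":", 1) for line in f)
    total_kb = int(info["MemTotal"].split()[0])
    avail_kb = int(info["MemAvailable"].split()[0])
    used_pct = 100.0 * (total_kb - avail_kb) / total_kb

    boto3.client("cloudwatch").put_metric_data(
        Namespace="Custom/EC2",  # hypothetical namespace
        MetricData=[{"MetricName": "MemoryUtilization",
                     "Unit": "Percent",
                     "Value": used_pct}],
    )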

Which of the following requires a custom CloudWatch metric to monitor? A. Memory Utilization of an EC2 instance B. CPU Utilization of an EC2 instance C. Disk usage activity of an EC2 instance D. Data transfer of an EC2 instance
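
Since the hypervisor cannot see RAM usage, the instance has to publish it itself. A minimal boto3 sketch (the namespace and instance ID are placeholders, and psutil is an assumed third-party helper for reading local memory usage):

import boto3
import psutil  # assumed helper for reading local RAM usage

cloudwatch = boto3.client("cloudwatch")

# Push memory utilization as a custom metric; CloudWatch has no
# built-in equivalent because the hypervisor cannot observe RAM usage.
cloudwatch.put_metric_data(
    Namespace="Custom/System",  # placeholder namespace
    MetricData=[{
        "MetricName": "MemoryUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        "Value": psutil.virtual_memory().percent,
        "Unit": "Percent",
    }],
)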

A & D. Explanation: AWS Storage Gateway and Amazon Glacier encrypt data at rest natively. https://media.amazonwebservices.com/AWS_Securing_Data_at_Rest_with_Encryption.pdf (page 12)

Which of the following services natively encrypts data at rest within an AWS region? Choose 2 answers A. AWS Storage Gateway B. Amazon DynamoDB C. Amazon CloudFront D. Amazon Glacier E. Amazon Simple Queue Service

Answers: B. All data on instance-store devices will be lost (see the storage explanations above). E. The underlying host for the instance is changed. Not a great answer here: you are completely abstracted from underlying hosts, so you have no way of knowing this, but by elimination I picked this.

Which of the following will occur when an EC2 instance in a VPC (Virtual Private Cloud) with an associated Elastic IP is stopped and started? (Choose 2 answers) A. The Elastic IP will be dissociated from the instance B. All data on instance-store devices will be lost C. All data on EBS (Elastic Block Store) devices will be lost D. The ENI (Elastic Network Interface) is detached E. The underlying host for the instance is changed

B Explanation: https://aws.amazon.com/cn/premiumsupport/knowledge-center/snapshot-ebs-raid-array/ To create an "application-consistent" snapshot of your RAID array, stop applications from writing to the RAID array, and flush all caches to disk. Then ensure that the associated EC2 instance is no longer writing to the RAID array by taking steps such as freezing the file system, unmounting the RAID array, or *shutting down the associated EC2 instance*. After completing the steps to halt all I/O, take a snapshot of each EBS volume. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-detaching-volume.html You can detach an Amazon EBS volume from an instance explicitly or by terminating the instance. However, if the instance is running, you must first unmount the volume from the instance.

Which procedure for backing up a relational database on EC2 that is using a set of RAIDed EBS volumes for storage minimizes the time during which the database cannot be written to and results in a consistent backup? A. 1. Detach EBS volumes, 2. Start EBS snapshot of volumes, 3. Re-attach EBS volumes B. 1. Stop the EC2 Instance. 2. Snapshot the EBS volumes C. 1. Suspend disk I/O, 2. Create an image of the EC2 Instance, 3. Resume disk I/O D. 1. Suspend disk I/O, 2. Start EBS snapshot of volumes, 3. Resume disk I/O E. 1. Suspend disk I/O, 2. Start EBS snapshot of volumes, 3. Wait for snapshots to complete, 4. Resume disk I/O
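
A minimal boto3 sketch of answer B above (the instance ID is a placeholder): stop the instance so no writes reach the RAID set, then snapshot every attached volume:

import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"  # placeholder

# Stop the instance so the RAID members stop receiving writes.
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# Snapshot each attached EBS volume for a consistent point-in-time backup.
volumes = ec2.describe_volumes(
    Filters=[{"Name": "attachment.instance-id", "Values": [instance_id]}]
)["Volumes"]
for vol in volumes:
    ec2.create_snapshot(VolumeId=vol["VolumeId"], Description="RAID member backup")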

B. Explanation: Versioning-enabled buckets enable you to recover objects from accidental deletion or overwrite. In addition, you can require that delete operations on versioned data be authorized with multi-factor authentication (MFA Delete). http://media.amazonwebservices.com/AWS_Security_Best_Practices.pdf

Which set of Amazon S3 features helps to prevent and recover from accidental data loss? A. Object lifecycle and service access logging B. Object versioning and Multi-factor authentication C. Access controls and server-side encryption D. Website hosting and Amazon S3 policies
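
A minimal boto3 sketch of answer B (bucket name and MFA device are placeholders); note that MFA Delete can only be enabled with the bucket owner's root credentials:

import boto3

s3 = boto3.client("s3")

# Versioning makes overwritten/deleted objects recoverable; MFA Delete
# additionally requires a one-time code to delete a version permanently.
s3.put_bucket_versioning(
    Bucket="my-bucket",  # placeholder
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
    MFA="arn:aws:iam::123456789012:mfa/root-device 123456",  # device ARN + current code (placeholder)
)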

D. Your DB instance will most likely be created in a VPC. Security groups provide access to the DB instance in the VPC. They act as a firewall for the associated DB instance, controlling both inbound and outbound traffic at the instance level. http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_SettingUp.html#CHAP_SettingUp.SecurityGroup

While creating an Amazon RDS DB, your first task is to set up a DB ______ that controls what IP addresses or EC2 instances have access to your DB Instance. A. Security Pool B. Secure Zone C. Security Token Pool D. Security Group

C. Best practice is to generate reports from an RDS read replica.

You are running a successful multitier web application on AWS and your marketing department has asked you to add a reporting tier to the application. The reporting tier will aggregate and publish status reports every 30 minutes from user-generated information that is being stored in your web application's database. You are currently running a Multi-AZ RDS MySQL instance for the database tier. You also have implemented ElastiCache as a database caching layer between the application tier and database tier. Please select the answer that will allow you to successfully implement the reporting tier with as little impact as possible to your database. A. Continually send transaction logs from your master database to an S3 bucket and generate the reports off the S3 bucket using S3 byte range requests. B. Generate the reports by querying the synchronously replicated standby RDS MySQL instance maintained through Multi-AZ. C. Launch an RDS Read Replica connected to your Multi-AZ master database and generate reports by querying the Read Replica. D. Generate the reports by querying the ElastiCache database caching tier.

A. SSH runs over TCP on port 22, and a /32 mask limits the source to the single corporate IP.

You are tasked with setting up a Linux bastion host for access to Amazon EC2 instances running in your VPC. Only clients connecting from the corporate external public IP address 72.34.51.100 should have SSH access to the host. Which option will meet the customer requirement? A. Security Group Inbound Rule: Protocol -TCP. Port Range -22, Source 72.34.51.100/32 B. Security Group Inbound Rule: Protocol -UDP, Port Range -22, Source 72.34.51.100/32 C. Network ACL Inbound Rule: Protocol -UDP, Port Range -22, Source 72.34.51.100/32 D. Network ACL Inbound Rule: Protocol -TCP, Port Range-22, Source 72.34.51.100/0
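
A minimal boto3 sketch of rule A (the security group ID is a placeholder):

import boto3

ec2 = boto3.client("ec2")

# Allow SSH (TCP port 22) only from the corporate IP; /32 means exactly one address.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": "72.34.51.100/32"}],
    }],
)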

D. Explanation: AWS OpsWorks is a configuration management service built around Chef, so existing Chef recipes can be reused. http://aws.amazon.com/opsworks/

You are working with a customer who is using Chef configuration management in their data center. Which service is designed to let the customer leverage existing Chef recipes in AWS? A. Amazon Simple Workflow Service B. AWS Elastic Beanstalk C. AWS CloudFormation D. AWS OpsWorks

A. Once CloudFront fronts the site, most requests are answered from edge caches and never reach the origin, so the Elastic Beanstalk logs undercount traffic. Enabling CloudFront access logs to S3 and feeding those to the Elastic Map Reduce job restores the full picture. http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/AccessLogs.html

You deployed your company website using Elastic Beanstalk and you enabled log file rotation to S3. An Elastic Map Reduce job periodically analyzes the logs on S3 to build a usage dashboard that you share with your CIO. You recently improved overall performance of the website using CloudFront for dynamic content delivery, with your website as the origin. After this architectural change, the usage dashboard shows that the traffic on your website dropped by an order of magnitude. How do you fix your usage dashboard? A. Enable CloudFront to deliver access logs to S3 and use them as input of the Elastic Map Reduce job. B. Turn on CloudTrail and use trail log files on S3 as input of the Elastic Map Reduce job. C. Change your log collection process to use CloudWatch ELB metrics as input of the Elastic Map Reduce job. D. Use Elastic Beanstalk "Rebuild Environment" option to update log delivery to the Elastic Map Reduce job. E. Use Elastic Beanstalk "Restart App server(s)" option to update log delivery to the Elastic Map Reduce job.

C. Explanation: You can create an Auto Scaling group from an existing instance ID; the group can then launch additional instances to share the load. http://docs.aws.amazon.com/AutoScaling/latest/APIReference/API_CreateAutoScalingGroup.html

You have a content management system running on an Amazon EC2 instance that is approaching 100% CPU utilization. Which option will reduce load on the Amazon EC2 instance? A. Create a load balancer, and register the Amazon EC2 instance with it B. Create a CloudFront distribution, and configure the Amazon EC2 instance as the origin C. Create an Auto Scaling group from the instance using the CreateAutoScalingGroup action D. Create a launch configuration from the instance using the CreateLaunchConfiguration action

A. Spot Instances: the application is designed to recover gracefully from instance failures, so the cheapest, interruptible option fits.

You have a distributed application that periodically processes large volumes of data across multiple Amazon EC2 instances. The application is designed to recover gracefully from Amazon EC2 instance failures. You are required to accomplish this task in the most cost-effective way. Which of the following will meet your requirements? A. Spot Instances B. Reserved Instances C. Dedicated Instances D. On-Demand Instances

A & C. Note: a VGW (virtual private gateway) does exist, but it is the VPN endpoint for connecting a VPC to your own network; it has nothing to do with Internet access to a load balancer, so E is irrelevant here.

You have a load balancer configured for VPC, and all back-end Amazon EC2 instances are in service. However, your web browser times out when connecting to the load balancer's DNS name. Which options are probable causes of this behavior? Choose 2 answers A. The load balancer was not configured to use a public subnet with an Internet gateway configured B. The Amazon EC2 instances do not have a dynamically allocated private IP address C. The security groups or network ACLs are not properly configured for web traffic. D. The load balancer is not configured in a private subnet with a NAT instance. E. The VPC does not have a VGW configured.

B. Explanation: Using multipart upload provides the following advantages:
- Improved throughput: you can upload parts in parallel.
- Quick recovery from network issues: a smaller part size minimizes the impact of restarting a failed upload due to a network error.
- Pause and resume object uploads: you can upload object parts over time. Once you initiate a multipart upload there is no expiry; you must explicitly complete or abort it.
- Begin an upload before you know the final object size: you can upload an object as you are creating it.
http://docs.aws.amazon.com/AmazonS3/latest/dev/uploadobjusingmpu.html

You have an application running on an Amazon Elastic Compute Cloud instance that uploads 5 GB video objects to Amazon Simple Storage Service (S3). Video uploads are taking longer than expected, resulting in poor application performance. Which method will help improve performance of your application? A. Enable enhanced networking B. Use Amazon S3 multipart upload C. Leveraging Amazon CloudFront, use the HTTP POST method to reduce latency. D. Use Amazon Elastic Block Store Provisioned IOPS and use an Amazon EBS-optimized instance
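
A minimal boto3 sketch of answer B; bucket and file names are placeholders. boto3's upload_file switches to multipart automatically once the object exceeds the threshold:

import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Parts upload in parallel, and a failed part is retried on its own
# instead of restarting the whole 5 GB object.
config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,  # use multipart above 64 MB
    multipart_chunksize=64 * 1024 * 1024,  # 5 GB / 64 MB = roughly 80 parts
    max_concurrency=10,                    # parallel part uploads
)
s3.upload_file("video.mp4", "video-bucket", "uploads/video.mp4", Config=config)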

A & C Explanation: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/TroubleshootingInstancesConnecting.html

You try to connect via SSH to a newly created Amazon EC2 instance and get one of the following error messages: "Network error: Connection timed out" or "Error connecting to [instance], reason: -> Connection timed out: connect," You have confirmed that the network and security group rules are configured correctly and the instance is passing status checks. What steps should you take to identify the source of the behavior? Choose 2 answers A. Verify that the private key file corresponds to the Amazon EC2 key pair assigned at launch. B. Verify that your IAM user policy has permission to launch Amazon EC2 instances. C. Verify that you are connecting with the appropriate user name for your AMI. D. Verify that the Amazon EC2 Instance was launched with the proper IAM role. E. Verify that your federation trust to AWS has been established.

B. Assign an Elastic IP (Uncle Jed said so, lol). Explanation: You launched your instance into a public subnet, i.e. a subnet that has a route to an Internet gateway. However, the instance also needs a public IP address to be able to communicate with the Internet. By default, an instance in a nondefault VPC is not assigned a public IP address, so you allocate an Elastic IP address to your account and then associate it with your instance.

You have an environment that consists of a public subnet using Amazon VPC and 3 instances that are running in this subnet. These three instances can successfully communicate with other hosts on the Internet. You launch a fourth instance in the same subnet, using the same AMI and security group configuration you used for the others, but find that this instance cannot be accessed from the Internet. What should you do to enable Internet access? A. Deploy a NAT instance into the public subnet. B. Assign an Elastic IP address to the fourth instance. C. Configure a publicly routable IP address in the host OS of the fourth instance. D. Modify the routing table for the public subnet.
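
A minimal boto3 sketch of answer B (the instance ID is a placeholder):

import boto3

ec2 = boto3.client("ec2")

# Allocate a VPC Elastic IP and attach it to the fourth instance, giving it
# the public address it needs to use the subnet's Internet gateway route.
alloc = ec2.allocate_address(Domain="vpc")
ec2.associate_address(
    InstanceId="i-0123456789abcdef0",  # placeholder
    AllocationId=alloc["AllocationId"],
)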

B. EBS-optimized instances deliver dedicated bandwidth to Amazon EBS, with options between 425 Mbps and 14,000 Mbps depending on the instance type. At 16 KB per operation, 24,000 IOPS works out to roughly 375 MB/s (about 3,000 Mbps), far beyond a 500 Mbps dedicated EBS connection, so the throughput cap, not the provisioned IOPS, is the bottleneck. See: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html#ebs-optimization-support

You have launched an EC2 instance with four (4) 500 GB EBS Provisioned IOPS volumes attached. The EC2 instance is EBS-Optimized and supports 500 Mbps throughput between EC2 and EBS. The four EBS volumes are configured as a single RAID 0 device, and each Provisioned IOPS volume is provisioned with 4,000 IOPS (4,000 16KB reads or writes), for a total of 16,000 random IOPS on the instance. The EC2 instance initially delivers the expected 16,000 IOPS random read and write performance. Sometime later, in order to increase the total random I/O performance of the instance, you add an additional two 500 GB EBS Provisioned IOPS volumes to the RAID. Each volume is provisioned to 4,000 IOPS like the original four, for a total of 24,000 IOPS on the EC2 instance. Monitoring shows that the EC2 instance CPU utilization increased from 50% to 70%, but the total random IOPS measured at the instance level does not increase at all. What is the problem and a valid solution? A. Larger storage volumes support higher Provisioned IOPS rates: increase the provisioned volume storage of each of the 6 EBS volumes to 1TB. B. The EBS-Optimized throughput limits the total IOPS that can be utilized: use an EBS-Optimized instance that provides larger throughput. C. Small block sizes cause performance degradation, limiting the I/O throughput: configure the instance device driver and file system to use 64KB blocks to increase throughput. D. RAID 0 only scales linearly to about 4 devices: use RAID 0 with 4 EBS Provisioned IOPS volumes but increase each Provisioned IOPS EBS volume to 6,000 IOPS. E. The standard EBS instance root volume limits the total IOPS rate: change the instance root volume to also be a 500GB 4,000 Provisioned IOPS volume.

B. DynamoDB absorbs the massive write throughput of 100K sensors without downtime for scaling, and moving old data to a Redshift cluster keeps two years of history available for year-over-year analysis.

You have recently joined a startup company building sensors to measure street noise and air quality in urban areas. The company has been running a pilot deployment of around 100 sensors for 3 months; each sensor uploads 1KB of sensor data every minute to a backend hosted on AWS. During the pilot, you measured a peak of 10 IOPS on the database, and you stored an average of 3GB of sensor data per month in the database. The current deployment consists of a load-balanced, auto scaled ingestion layer using EC2 instances and a PostgreSQL RDS database with 500GB standard storage. The pilot is considered a success and your CEO has managed to get the attention of some potential investors. The business plan requires a deployment of at least 100K sensors, which needs to be supported by the backend. You also need to store sensor data for at least two years to be able to compare year-over-year improvements. To secure funding, you have to make sure that the platform meets these requirements and leaves room for further scaling. Which setup will meet the requirements? A. Add an SQS queue to the ingestion layer to buffer writes to the RDS instance B. Ingest data into a DynamoDB table and move old data to a Redshift cluster C. Replace the RDS instance with a 6 node Redshift cluster with 96TB of storage D. Keep the current architecture but upgrade RDS storage to 3TB and 10K provisioned IOPS

D Disabling Source/Destination Checks Each EC2 instance performs source/destination checks by default. This means that the instance must be the source or destination of any traffic it sends or receives. However, a NAT instance must be able to send and receive traffic when the source or destination is not itself. Therefore, you must disable source/destination checks on the NAT instance. You can disable the SrcDestCheck attribute for a NAT instance that's either running or stopped using the console or the command line. http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_NAT_Instance.html

You manually launch a NAT AMI in a public subnet. The network is properly configured. Security groups and network access control lists are properly configured. Instances in a private subnet can access the NAT. The NAT can access the Internet. However, private instances cannot access the Internet. What additional step is required to allow access from the private instances? A. Enable Source/Destination Check on the private instances. B. Enable Source/Destination Check on the NAT instance. C. Disable Source/Destination Check on the private instances. D. Disable Source/Destination Check on the NAT instance.
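
A minimal boto3 sketch of answer D (the NAT instance ID is a placeholder):

import boto3

ec2 = boto3.client("ec2")

# A NAT instance forwards traffic that is neither from nor to itself,
# so the default source/destination check must be switched off.
ec2.modify_instance_attribute(
    InstanceId="i-0123456789abcdef0",  # the NAT instance (placeholder)
    SourceDestCheck={"Value": False},
)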

B. See https://aws.amazon.com/dynamodb/faqs/ Q: Can a global secondary index key be defined on non-unique attributes? Yes. Unlike the primary key on a table, a GSI does not require the indexed attributes to be unique. Q: Are GSI key attributes required in all items of a DynamoDB table? No. GSIs are sparse indexes: unlike the requirement of having a primary key, an item does not have to contain any of the GSI key attributes. If a GSI key has both hash and range elements, and a table item omits either of them, then that item will not be indexed by the corresponding GSI. In such cases, a sparse GSI can be very useful in efficiently locating items that have an uncommon attribute.

You need persistent and durable storage to trace call activity of an IVR (Interactive Voice Response) system. Call duration is mostly in the 2-3 minute timeframe. Each traced call can be either active or terminated. An external application needs to know, each minute, the list of currently active calls, which are usually a few calls/second. But once per month there is a periodic peak of up to 1000 calls/second for a few hours. The system is open 24/7 and any downtime should be avoided. Historical data is periodically archived to files. Cost saving is a priority for this project. What database implementation would better fit this scenario, keeping costs as low as possible? A. Use RDS Multi-AZ with two tables, one for "Active calls" and one for "Terminated calls". In this way the "Active calls" table is always small and effective to access. B. Use DynamoDB with a "Calls" table and a Global Secondary Index on an "IsActive" attribute that is present for active calls only. In this way the Global Secondary Index is sparse and more effective. C. Use DynamoDB with a "Calls" table and a Global Secondary Index on a "State" attribute that can equal "active" or "terminated". In this way the Global Secondary Index can be used for all items in the table. D. Use RDS Multi-AZ with a "CALLS" table and an indexed "STATE" field that can be equal to "ACTIVE" or "TERMINATED". In this way the SQL query is optimized by the use of the index.
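
A minimal boto3 sketch of the sparse-GSI design in answer B (table and index names are placeholders). Because "IsActive" is set only on active calls, the index contains just those few items:

import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="Calls",  # placeholder
    AttributeDefinitions=[
        {"AttributeName": "CallId", "AttributeType": "S"},
        {"AttributeName": "IsActive", "AttributeType": "S"},  # present on active calls only
    ],
    KeySchema=[{"AttributeName": "CallId", "KeyType": "HASH"}],
    GlobalSecondaryIndexes=[{
        "IndexName": "ActiveCalls",  # sparse: terminated calls omit IsActive, so they are not indexed
        "KeySchema": [{"AttributeName": "IsActive", "KeyType": "HASH"}],
        "Projection": {"ProjectionType": "ALL"},
        "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
    }],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)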

A Explanation: A signed URL includes additional information, for example, an expiration date and time, that gives you more control over access to your content.

You run an ad-supported photo sharing website using S3 to serve photos to visitors of your site. At some point you find out that other sites have been linking to the photos on your site, causing loss to your business. What is an effective method to mitigate this? A. Remove public read access and use signed URLs with expiry dates. B. Use CloudFront distributions for static content. C. Block the IPs of the offending websites in Security Groups. D. Store photos on an EBS volume of the web server.
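
A minimal boto3 sketch of answer A (bucket and key are placeholders): with public read removed, photos are only reachable through short-lived signed URLs that your own pages generate:

import boto3

s3 = boto3.client("s3")

url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "photo-bucket", "Key": "photos/cat.jpg"},  # placeholders
    ExpiresIn=3600,  # URL stops working after one hour, so hotlinks go stale
)
print(url)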

A. A cross-region read replica in the HQ region gives the hourly batch a continuously replicated, locally queryable copy of each regional database, so reports run against fresh data without waiting for snapshot copies to ship between regions.

Your company has HQ in Tokyo and branch offices all over the world, and is using logistics software with a multi-regional deployment on AWS in Japan, Europe and USA. The logistics software has a 3-tier architecture and currently uses MySQL 5.6 for data persistence. Each region has deployed its own database. In the HQ region you run an hourly batch process reading data from every region to compute cross-regional reports that are sent by email to all offices. This batch process must be completed as fast as possible to quickly optimize logistics. How do you build the database architecture in order to meet the requirements? A. For each regional deployment, use RDS MySQL with a master in the region and a read replica in the HQ region B. For each regional deployment, use MySQL on EC2 with a master in the region and send hourly EBS snapshots to the HQ region C. For each regional deployment, use RDS MySQL with a master in the region and send hourly RDS snapshots to the HQ region D. For each regional deployment, use MySQL on EC2 with a master in the region and use S3 to copy data files hourly to the HQ region E. Use Direct Connect to connect all regional MySQL deployments to the HQ region and reduce network latency for the batch process

B. Kinesis is the managed service for collecting streaming data in real time; Kinesis client applications provide durable, elastic, parallel processing, and saving the results to a Redshift cluster persists them for data mining.

Your company is in the process of developing a next generation pet collar that collects biometric information to assist families with promoting healthy lifestyles for their pets. Each collar will push 30KB of biometric data in JSON format every 2 seconds to a collection platform that will process and analyze the data, providing health trending information back to the pet owners and veterinarians via a web portal. Management has tasked you to architect the collection platform ensuring the following requirements are met: provide the ability for real-time analytics of the inbound biometric data; ensure processing of the biometric data is highly durable, elastic and parallel; the results of the analytic processing should be persisted for data mining. Which architecture outlined below will meet the initial requirements for the collection platform? A. Utilize S3 to collect the inbound sensor data, analyze the data from S3 with a daily scheduled Data Pipeline and save the results to a Redshift cluster. B. Utilize Amazon Kinesis to collect the inbound sensor data, analyze the data with Kinesis clients and save the results to a Redshift cluster using EMR. C. Utilize SQS to collect the inbound sensor data, analyze the data from SQS with Amazon Kinesis and save the results to a Microsoft SQL Server RDS instance. D. Utilize EMR to collect the inbound sensor data, analyze the data from EMR with Amazon Kinesis and save the results to DynamoDB.

B. See https://aws.amazon.com/sqs/faqs/: there is no limit on the number of messages that can be pushed onto SQS, and the retention period defaults to 4 days (configurable up to 14 days). Buffering writes in the queue makes sure that none are missed.

Your company plans to host a large donation website on Amazon Web Services (AWS). You anticipate a large and undetermined amount of traffic that will create many database writes. To be certain that you do not drop any writes to a database hosted on AWS, which service should you use? A. Amazon RDS with provisioned IOPS up to the anticipated peak write throughput. B. Amazon Simple Queue Service (SQS) for capturing the writes and draining the queue to write to the database. C. Amazon ElastiCache to store the writes until the writes are committed to the database. D. Amazon DynamoDB with provisioned write throughput up to the anticipated peak write throughput.
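
A minimal boto3 sketch of the SQS write-buffer pattern in answer B (queue name and message shape are placeholders):

import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="donation-writes")["QueueUrl"]  # placeholder name

# The front end enqueues each write instead of hitting the database directly.
sqs.send_message(QueueUrl=queue_url, MessageBody='{"donor": "A", "amount": 25}')

# A worker drains the queue at a rate the database can sustain and deletes
# each message only after the row is safely committed.
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10)
for msg in resp.get("Messages", []):
    # ... write msg["Body"] to the database here, then:
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])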

B. Amazon Kinesis Streams allows for real-time data processing. With Amazon Kinesis Streams, you can continuously collect data as it is generated and promptly react to critical information about your business and operations. https://aws.amazon.com/kinesis/streams/

Your customer wants to consolidate their log streams (access logs, application logs, security logs, etc.) in one single system. Once consolidated, the customer wants to analyze these logs in real time based on heuristics. From time to time, the customer also needs to validate heuristics, which requires going back over data samples extracted from the last 12 hours. What is the best approach to meet your customer's requirements? A. Send all the log events to Amazon SQS. Set up an Auto Scaling group of EC2 servers to consume the logs and apply the heuristics. B. Send all the log events to Amazon Kinesis; develop a client process to apply heuristics on the logs. C. Configure Amazon CloudTrail to receive custom logs, and use EMR to apply heuristics to the logs. D. Set up an Auto Scaling group of EC2 syslogd servers, store the logs on S3, and use EMR to apply heuristics to the logs.
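
A minimal boto3 sketch of the producer side of answer B (stream name and payload are placeholders). The stream's retention window (24 hours by default, extendable) is what lets the customer replay recent samples to validate heuristics:

import boto3

kinesis = boto3.client("kinesis")

# Every log event goes onto the stream; consumer applications apply the
# heuristics in real time and can re-read retained records later.
kinesis.put_record(
    StreamName="log-stream",  # placeholder
    Data=b'{"source": "app", "event": "login failed"}',
    PartitionKey="app",  # spreads records across shards
)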

A. RDS automated backups cover database recovery, AMIs cover whole-server restores, and file-level backups to S3 via traditional enterprise backup software allow individual file restores. Option C fails the two-hour recovery time objective because Glacier retrievals take several hours.

Your customer wishes to deploy an enterprise application to AWS which will consist of several web servers, several application servers and a small (50GB) Oracle database. Information is stored both in the database and the file systems of the various servers. The backup system must support database recovery, whole server and whole disk restores, and individual file restores, with a recovery time of no more than two hours. They have chosen to use RDS Oracle as the database. Which backup architecture will meet these requirements? A. Back up RDS using automated daily DB backups. Back up the EC2 instances using AMIs, and supplement with file-level backups to S3 using traditional enterprise backup software to provide file level restore. B. Back up RDS using a Multi-AZ deployment. Back up the EC2 instances using AMIs, and supplement by copying file system data to S3 to provide file level restore. C. Back up RDS using automated daily DB backups. Back up the EC2 instances using EBS snapshots, and supplement with file-level backups to Amazon Glacier using traditional enterprise backup software to provide file level restore. D. Back up the RDS database to S3 using Oracle RMAN. Back up the EC2 instances using AMIs, and supplement with EBS snapshots for individual volume restore.

C. SQS replaces RabbitMQ for job messages, spot workers auto-scaled on queue depth minimize compute cost, and Glacier is the archival storage class that matches the old tape-and-ship-offsite step.

Your firm has uploaded a large amount of aerial image data to S3. In the past, in your on-premises environment, you used a dedicated group of servers to batch process this data and used RabbitMQ, an open source messaging system, to get job information to the servers. Once processed, the data would go to tape and be shipped offsite. Your manager told you to stay with the current design, and leverage AWS archival storage and messaging services to minimize cost. Which is correct? A. Use SQS for passing job messages; use CloudWatch alarms to terminate EC2 worker instances when they become idle. Once data is processed, change the storage class of the S3 objects to Reduced Redundancy Storage. B. Set up auto-scaled workers triggered by queue depth that use spot instances to process messages in SQS. Once data is processed, change the storage class of the S3 objects to Reduced Redundancy Storage. C. Set up auto-scaled workers triggered by queue depth that use spot instances to process messages in SQS. Once data is processed, change the storage class of the S3 objects to Glacier. D. Use SNS to pass job messages; use CloudWatch alarms to terminate spot worker instances when they become idle. Once data is processed, change the storage class of the S3 objects to Glacier.

C. Creating the RDS instance as part of the Elastic Beanstalk environment ties the database's lifecycle to the environment, which is fine for dev and test but not for production, so A is not correct. As explained here: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo.RDS.html With option D, every host in the application subnets can reach RDS, while with option C only instances placed in the designated client security group can. So C is the correct answer.

Your team has a Tomcat-based Java application you need to deploy into development, test and production environments. After some research, you opt to use Elastic Beanstalk due to its tight integration with your developer tools, and RDS due to its ease of management. Your QA team lead points out that you need to roll a sanitized set of production data into your environment on a nightly basis. Similarly, other software teams in your org want access to that same restored data via their EC2 instances in your VPC. The optimal setup for persistence and security that meets the above requirements would be the following: A. Create your RDS instance as part of your Elastic Beanstalk definition and alter its security group to allow access to it from hosts in your application subnets. B. Create your RDS instance separately and add its IP address to your application's DB connection strings in your code. Alter its security group to allow access to it from hosts within your VPC's IP address block. C. Create your RDS instance separately and pass its DNS name to your app's DB connection string as an environment variable. Create a security group for client machines and add it as a valid source for DB traffic to the security group of the RDS instance itself. D. Create your RDS instance separately and pass its DNS name to your app's DB connection string as an environment variable. Alter its security group to allow access to it from hosts in your application subnets.

Answer: C. The ELB stops sending traffic to the instance that failed its health check.

Your web application front end consists of multiple EC2 instances behind an Elastic Load Balancer. You configured ELB to perform health checks on these EC2 instances. If an instance fails to pass health checks, which statement will be true? A. The instance is replaced automatically by the ELB. B. The instance gets terminated automatically by the ELB. C. The ELB stops sending traffic to the instance that failed its health check. D. The instance gets quarantined by the ELB for root cause analysis.

