AWS Certified Solutions Architect Professional (+ review)

CF: returns the URL of a load balanced web site created in CloudFormation with an AWS::ElasticLoadBalancing::LoadBalancer resource named "ElasticLoadBalancer"

"Fn::Join" : ["". [ "http://", {"Fn::GetAtt" : [ "ElasticLoadBalancer","DNSName"]}]]

DDB: What is the data model of DynamoDB

"Table", a collection of Items; "Items", with Keys and one or more Attribute; and "Attribute", with Name and Value.

SWF: appropriate use cases for SWF with Amazon EC2

(1) Video encoding using Amazon S3 and Amazon EC2: large videos are uploaded to Amazon S3 in chunks, and the application is built as a workflow where each video file is handled as one workflow execution; and (2) processing large product catalogs using Amazon Mechanical Turk: while validating data in large catalogs, the products in the catalog are processed in batches, and different batches can be processed concurrently (orchestrating batching)

IAM: default maximum number of MFA devices in use

1

IAM: limitations

100 groups per AWS account, 5000 IAM users, 250 roles

VPC: a VPC with CIDR 20.0.0.0/24. The user has created a public subnet with CIDR 20.0.0.0/25 and a private subnet with CIDR 20.0.0.128/25. Which IP address cannot be assigned to an instance?

20.0.0.255 can't be assigned to an instance (AWS reserves the first four and the last IP address of every subnet)

VPC: web tier will use an Auto Scaling group across multiple Availability Zones (AZs). The database will use Multi-AZ RDS MySQL and should not be publicly accessible.

4 subnets required (2 public subnets for web instances in multiple AZs and 2 private subnets for RDS Multi-AZ)

EC2: web application where web servers run on EC2 in an Auto Scaling group; monitoring over the last 6 months shows that 6 web servers are necessary to handle the minimum load. During the day up to 12 servers are needed. Five to six days per year, the number of web servers required might go up to 15. Optimize for cost and HA

6 Reserved Instances (heavy utilization), 6 Reserved Instances (medium utilization), rest covered by On-Demand Instances (don't go for Spot as availability is not guaranteed)

VPC: development environment needs a source code repository, a project management system with a MySQL database, resources for performing the builds, and a storage location for QA to pick up builds from; concerns are cost, security, and how to integrate with existing on-premises applications such as their LDAP and email servers, which cannot move off-premises; the goal is to transition to a continuous integration model of development on AWS with multiple builds triggered within the same day

A VPC with a VPN Gateway back to their on-premises servers, Amazon EC2 for the source-code repository with attached Amazon EBS volumes, Amazon EC2 and Amazon RDS MySQL for the project management system, EIPs for the source code repository and project management system, SQS for a build queue, An Auto Scaling group of EC2 instances for performing builds and S3 for the build output (VPN gateway is required for secure connectivity. SQS for build queue and EC2 for builds)

SNS: A user wants to make it so that whenever the CPU utilization of the AWS EC2 instance is above 90%, the red light of his bedroom turns on.

AWS CloudWatch + AWS SNS
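
A minimal boto3 sketch of the CloudWatch-plus-SNS wiring, assuming a hypothetical instance ID and SNS topic ARN (the subscriber that actually switches the red light on is out of scope here):

import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm fires when average CPUUtilization exceeds 90% and notifies an SNS topic
cloudwatch.put_metric_alarm(
    AlarmName="cpu-above-90",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # hypothetical instance
    Statistic="Average",
    Period=300,
    EvaluationPeriods=1,
    Threshold=90.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:redlight-topic"],   # hypothetical topic
)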

IAM: AWS Management Console access to developers, mandates identity federation and role-based access control, roles assigned by groups in Active Directory

AWS Directory Service AD Connector, AWS Identity and Access Management roles

ES: ad-hoc analysis on log data, including searching quickly for specific error codes and reference numbers. Which should you evaluate first

AWS Elasticsearch Service

IAM: Is targeted at organizations with multiple users or systems that use AWS products

AWS Identity and Access Management

IAM: Enables AWS customers to manage users and permissions

AWS Identity and Access Management (IAM)

Kinesis: replicate API calls across two systems in real time

AWS Kinesis (AWS Kinesis is an event stream service. Streams can act as buffers and transport across systems for in-order programmatic events, making it ideal for replicating API calls across systems)

S3: An application is generating a log file every 5 minutes. The log file is not critical but may be required only for verification in case of some major issue. The file should be accessible over the internet whenever required.

AWS S3 RRS

S3: A bucket owner has allowed another account's IAM users to upload or access objects in his bucket. The IAM user of Account A is trying to access an object created by the IAM user of account B

AWS S3 will verify that proper rights have been given to the object by the owner of Account A (the bucket owner) as well as by the IAM user of Account B (the object creator)

SG: Which of the following services natively encrypts data at rest within an AWS region?

AWS Storage Gateway, Amazon Glacier

WAF: questionable log entries and suspect someone is attempting to gain unauthorized access

Add a WAF tier by creating a new ELB and an AutoScaling group of EC2 Instances running a host-based WAF. They would redirect Route 53 to resolve to the new WAF tier ELB. The WAF tier would then pass the traffic to the current web tier. Web tier Security Groups would be updated to only allow traffic from the WAF tier Security Group

OpsWorks: describe how to add a backend database server to an OpsWorks stack

Add a new database layer and then add recipes to the deploy actions of the database and App Server layers. Set up the connection between the app server and the RDS layer by using a custom recipe. The recipe configures the app server as required, typically by creating a configuration file. The recipe gets the connection data, such as the host and database name, from a set of attributes in the stack configuration and deployment JSON that AWS OpsWorks installs on every instance. The variables that characterize the RDS database connection (host, user, and so on) are set using the corresponding values from the deploy JSON's [:deploy][:app_name][:database] attributes

S3: 150 PUT requests per second

Add a random prefix to the key names.
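
One way to implement such a prefix, sketched in Python with a hypothetical key layout, is to derive a short hash from the key itself so writes spread across S3 index partitions:

import hashlib

def prefixed_key(original_key: str) -> str:
    # A short hash prefix distributes sequential key names across index partitions
    prefix = hashlib.md5(original_key.encode()).hexdigest()[:4]
    return f"{prefix}/{original_key}"

print(prefixed_key("logs/2016-05-26-10-12.log"))  # e.g. "a1b2/logs/2016-05-26-10-12.log"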

Kinesis: Your social media monitoring application uses a Python app running on AWS Elastic Beanstalk to inject tweets, Facebook updates and RSS feeds into an Amazon Kinesis stream. A second AWS Elastic Beanstalk app generates key performance indicators into an Amazon DynamoDB table and powers a dashboard application. Prevent any data loss

Add a third AWS Elastic Beanstalk app that uses the Amazon Kinesis S3 connector to archive data from Amazon Kinesis into Amazon S3.

IAM: security team would like to be able to delegate user authorization to the individual development teams but independently apply restrictions to the users' permissions based on factors such as the user's device and location

Add additional IAM policies to the application IAM roles that deny user privileges based on information security policy (a policy with deny rules based on location, device, etc.; the more restrictive rule wins)

ELB & AS: You have an Auto Scaling group associated with an Elastic Load Balancer (ELB). You have noticed that instances launched via the Auto Scaling group are being marked unhealthy due to an ELB health check, but these unhealthy instances are not being terminated.

Add an Elastic Load Balancing health check to your Auto Scaling group
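
A boto3 sketch of that change, assuming a hypothetical Auto Scaling group name; switching the health check type to ELB makes the group replace instances the load balancer marks unhealthy:

import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",   # hypothetical group name
    HealthCheckType="ELB",            # use ELB health checks instead of EC2 status checks only
    HealthCheckGracePeriod=300,       # seconds to wait after launch before checking health
)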

VPC: extend two existing data centers into AWS to support a highly available application that depends on existing, on-premises resources located in multiple data centers and static content from S3, presently dual-tunnel VPN connection between your CGW and VGW

Add another CGW in a different data center and create another dual-tunnel VPN connection

ELB: web-style application with a stateless but CPU- and memory-intensive web tier running on a cc2.8xlarge EC2 instance inside of a VPC. The instance, when under load, is having problems returning requests within the SLA as defined by your business. The application maintains its state in a DynamoDB table, but the data tier is properly provisioned and responses are consistently fast

Add another cc2.8xlarge application instance, and put both behind an Elastic Load Balancer

IAM: Gives the Admins group permission to access all account resources, except your AWS account information

Administrator Access

EC2: Which of the following will occur when an EC2 instance in a VPC (Virtual Private Cloud) with an associated Elastic IP is stopped and started?

All data on instance-store devices will be lost, the underlying host for the instance is changed

VPC: VPC with CIDR 20.0.0.0/16 and a public subnet using CIDR 20.0.1.0/24, host a web server in the public subnet (port 80) and a DB server in the private subnet (port 3306), security groups for the public subnet (WebSecGrp) and the private subnet (DBSecGrp) - rule required in the DB security group (DBSecGrp)

Allow Inbound on port 3306 for Source Web Server Security Group (WebSecGrp)
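
A boto3 sketch of that rule, with hypothetical security group IDs standing in for DBSecGrp and WebSecGrp:

import boto3

ec2 = boto3.client("ec2")

# Allow MySQL (3306) into DBSecGrp only from members of WebSecGrp
ec2.authorize_security_group_ingress(
    GroupId="sg-1111aaaa",            # hypothetical DBSecGrp ID
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": "sg-2222bbbb"}],  # hypothetical WebSecGrp ID
    }],
)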

CW: A company needs to monitor the read and write IOPs metrics for their AWS MySQL RDS instance and send real-time alerts to their operations team.

Amazon CloudWatch, Amazon Simple Notification Service

DDB: Activation data is very important for your company and must be analyzed daily with a MapReduce job. The execution time of the data analysis process must be less than three hours per day. Devices are usually sold evenly during the year, but when a new device model is out, there is a predictable peak in activations, that is, for a few days there are 10 times or even 100 times more activations than on an average day.

Amazon DynamoDB and Amazon Elastic MapReduce with Spot instances

EBS: best for my database-style applications that frequently encounter many random reads and writes across the dataset

Amazon EBS

VPC: VPC with one private subnet and one public subnet with a Network Address Translator (NAT) server. You are creating a group of Amazon Elastic Cloud Compute (EC2) instances that configure themselves at startup via downloading a bootstrapping script from S3 that deploys an application via GIT. Which setup provides the highest level of security?

Amazon EC2 instances in private subnet, no EIPs, route outgoing traffic via the NAT

Services: services allow the customer to retain full administrative privileges of the underlying EC2 instances

Amazon Elastic Map Reduce, AWS Elastic Beanstalk

RDS: company has limited staff and requires high availability

Amazon RDS for MySQL with Multi-AZ

CT: out-of-the-box user configurable automatic backup-as-a-service and backup rotation options

Amazon RDS, Redshift

SQS: How does Amazon SQS allow multiple readers to access the same message queue without losing messages or processing them many times?

Amazon SQS queue has a configurable visibility timeout
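
A short boto3 sketch of the visibility timeout in use, with a hypothetical queue URL; while a received message is in flight, other readers do not see it, and the reader can extend the timeout if processing runs long:

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"  # hypothetical queue

resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
for msg in resp.get("Messages", []):
    # Message is now hidden from other consumers for the visibility timeout
    sqs.change_message_visibility(
        QueueUrl=queue_url,
        ReceiptHandle=msg["ReceiptHandle"],
        VisibilityTimeout=120,        # extend the hiding window to 120 seconds
    )
    # ... process the message, then delete it so it is not redelivered ...
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])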

DX: app requires access to a number of on-premises services and no one who configured the app still works for your company. Even worse there's no documentation for it. What will allow the application running inside the VPC to reach back and access its internal dependencies without being reconfigured

An AWS Direct Connect link between the VPC and the network housing the internal services (VPN or a DX for communication), An IP address space that does not conflict with the one on-premises (IP address cannot conflict), A VM Import of the current virtual machine (VM Import to copy the VM to AWS as there is no documentation it can't be configured from scratch)

EBS: moving a legacy application from a virtual machine running inside your datacenter to an Amazon VPC. Unfortunately this app requires access to a number of on-premises services and no one who configured the app still works for your company. Even worse there's no documentation for it. What will allow the application running inside the VPC to reach back and access its internal dependencies without being reconfigured?

An AWS Direct Connect link between the VPC and the network housing the internal services (VPN or a DX for communication), An IP address space that does not conflict with the one on-premises (IP address cannot conflict), A VM Import of the current virtual machine (VM Import to copy the VM to AWS as there is no documentation it can't be configured from scratch)

Migration: allow the application running inside the VPC to reach back and access its internal dependencies without being reconfigured

An AWS Direct Connect link between the VPC and the network housing the internal services, An IP address space that does not conflict with the one on-premises, A VM Import of the current virtual machine

53: true about Amazon Route 53 resource records

An Alias record can map one DNS name to another Amazon Route 53 DNS name, an Amazon Route 53 CNAME record can point to any DNS record hosted anywhere

EC2: A t2.medium EC2 instance must be launched with what type of AMI?

An Amazon EBS-backed Hardware Virtual Machine AMI

EC2: is leveraging IAM Roles for EC2 for accessing objects stored in S3. Which two of the following IAM policies control access to your S3 objects

An IAM trust policy allows the EC2 instance to assume an EC2 instance role, An IAM access policy allows the EC2 role to access S3 objects

DDB: Global secondary index

An index with a hash and range key that can be different from those on the table
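
A boto3 sketch of a table plus a global secondary index whose hash and range keys differ from the table's own keys (table, attribute, and index names are hypothetical):

import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="GameScores",
    AttributeDefinitions=[
        {"AttributeName": "UserId", "AttributeType": "S"},
        {"AttributeName": "GameTitle", "AttributeType": "S"},
        {"AttributeName": "TopScore", "AttributeType": "N"},
    ],
    KeySchema=[                                   # table keys: UserId / GameTitle
        {"AttributeName": "UserId", "KeyType": "HASH"},
        {"AttributeName": "GameTitle", "KeyType": "RANGE"},
    ],
    GlobalSecondaryIndexes=[{
        "IndexName": "GameTitleIndex",
        "KeySchema": [                            # GSI keys differ: GameTitle / TopScore
            {"AttributeName": "GameTitle", "KeyType": "HASH"},
            {"AttributeName": "TopScore", "KeyType": "RANGE"},
        ],
        "Projection": {"ProjectionType": "ALL"},
        "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
    }],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)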

Consolidated Billing: An organization has added 3 of its AWS accounts to consolidated billing. One of the AWS accounts has purchased a Reserved Instance (RI) of a small instance size in the us-east-1a zone. All other AWS accounts are running instances of a small size in the same zone. What will happen in this case for the RI pricing?

Any single instance from all the three accounts can get the benefit of AWS RI pricing if they are running in the same zone and are of the same size

ECS: You need a solution to distribute traffic evenly across all of the containers for a task running on Amazon ECS. Your task definitions define dynamic host port mapping for your containers. What AWS feature provides this functionally?

Application Load Balancers support dynamic host port mapping

IAM: access to various AWS services

Assign an IAM role to the Amazon EC2 instance

ELB: a multi-platform web application for AWS. The application will run on EC2 instances and will be accessed from PCs, tablets and smart phones. Separate sticky session and SSL certificate setups are required for different platform types.

Assign multiple ELBs to an EC2 instance or group of EC2 instances running the common components of the web application, one ELB for each platform type. Session stickiness and SSL termination are done at the ELBs. (Session stickiness requires an HTTPS listener with SSL termination on the ELB, and an ELB does not support multiple SSL certificates, so one ELB is required per platform type)

IAM: company wants their EC2 instances in the new region to have the same privileges

Assign the existing IAM role to the Amazon EC2 instances in the new region

RDS: an application in US-West (Northern California) region and wants to setup disaster recovery failover to the Asian Pacific (Singapore) region. The customer is interested in achieving a low Recovery Point Objective (RPO) for an Amazon RDS multi-AZ MySQL database instance

Asynchronous replication

ELB & AS: a lot of EC2 instances but may need to add more when the average utilization of your Amazon EC2 fleet is high and conversely remove them when CPU utilization is low

Auto Scaling, Amazon CloudWatch and Elastic Load Balancing

EBS: What AWS products and features can be deployed by Elastic Beanstalk

Auto scaling groups, Elastic Load Balancers, RDS Instances

ES: She complains that she has no idea what is going on in the complex, service-oriented architecture, because the developers just log to disk, and it's very hard to find errors in logs on so many services.

Begin using CloudWatch Logs on every service. Stream all Log Groups into an AWS Elasticsearch Service Domain running Kibana 4 and perform log analysis on a search cluster. (AWS Elasticsearch with Kibana stack is designed specifically for real-time, ad-hoc log analysis and aggregation)

CW: You are hired as the new head of operations for a SaaS company. Your CTO has asked you to make debugging any part of your entire operation simpler and as fast as possible. She complains that she has no idea what is going on in the complex, service-oriented architecture, because the developers just log to disk, and it's very hard to find errors in logs on so many services.

Begin using CloudWatch Logs on every service. Stream all Log Groups into an AWS Elasticsearch Service Domain running Kibana 4 and perform log analysis on a search cluster. (ELK - Elasticsearch, Kibana stack is designed specifically for real-time, ad-hoc log analysis and aggregation)

ELB: A user has configured an HTTPS listener on an ELB. The user has not configured any security policy which can help to negotiate SSL between the client and ELB. What will ELB do in this scenario?

By default ELB will select the latest version of the policy

IAM: policy evaluation logic

By default, all requests are denied, An explicit allow overrides default deny

53: How can the domain's zone apex, for example "myzoneapexdomain.com", be pointed towards an Elastic Load Balancer

By using an Amazon Route 53 Alias record

VPC: a VPC with CIDR 20.0.0.0/16; the user has created one subnet with CIDR 20.0.0.0/16 in this VPC and is trying to create another subnet in the same VPC with CIDR 20.0.0.1/24

CIDR overlaps error

SNS: to send direct notification messages to individual devices, each device registration identifier or token needs to be registered with SNS

Call the CreatePlatformEndpoint API function to register multiple device tokens.
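
A boto3 sketch of that registration, assuming a hypothetical platform application ARN and device token; the returned endpoint ARN is what you publish to:

import boto3

sns = boto3.client("sns")

endpoint = sns.create_platform_endpoint(
    PlatformApplicationArn="arn:aws:sns:us-east-1:123456789012:app/GCM/MyMobileApp",  # hypothetical
    Token="device-registration-token-from-the-mobile-app",                            # hypothetical
)

# Send a direct notification to that single device
sns.publish(TargetArn=endpoint["EndpointArn"], Message="Hello from SNS")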

S3: is a valid ID to grant permission to other AWS accounts

Canonical user ID

EMR: A customer's nightly EMR job processes a single 2-TB data file stored on Amazon Simple Storage Service (S3). The Amazon Elastic Map Reduce (EMR) job runs on two On-Demand core nodes and three On-Demand task nodes. Which of the following may help reduce the EMR job completion time?

Change the input split size in the MapReduce job configuration, Adjust the number of simultaneous mapper tasks.

ELB: Your application currently leverages AWS Auto Scaling to grow and shrink as load Increases/ decreases and has been performing well. Your marketing team expects a steady ramp up in traffic to follow an upcoming campaign that will result in a 20x growth in traffic over 4 weeks. Your forecast for the approximate number of Amazon EC2 instances necessary to meet the peak demand is 175. What should you do to avoid potential service disruptions during the ramp up in traffic

Check the service limits in Trusted Advisor and adjust as necessary so the forecasted count remains within limits.

SQS: Distribute static web content to end users with low latency across multiple countries

CloudFront + S3

SQS: Automate the process of sending an email notification to administrators when the CPU utilization reaches 70% on production servers (Amazon EC2 instances)

CloudWatch + SNS + SES

CW: You have a high security requirement for your AWS accounts. What is the most rapid and sophisticated setup you can use to react to AWS API calls to your account?

CloudWatch Events Rules, which trigger based on all AWS API calls, submitting all events to an AWS Kinesis Stream for arbitrary downstream analysis. (CloudWatch Events allow subscription to AWS API calls, and direction of these events into Kinesis Streams. This allows a unified, near real-time stream for all API calls, which can be analyzed with any tool(s))

AS: Auto Scaling with ELB on the EC2 instances. The user wants to configure that whenever the CPU utilization is below 10%, Auto Scaling should remove one instance

Configure CloudWatch to send a notification to the Auto Scaling group when the CPU Utilization is less than 10% and configure the Auto Scaling policy to remove the instance

VPC: public and private subnets using the VPC wizard, CIDR 20.0.0.0/16, public subnet CIDR 20.0.1.0/24, host a web server in the public subnet (port 80), a DB server in the private subnet (port 3306), security group for the public subnet (WebSecGrp) and the private subnet (DBSecGrp) - required in the web server security group (WebSecGrp)

Configure Destination as DB Security group ID (DbSecGrp) for port 3306 Outbound

ELB: You are designing an SSL/TLS solution that requires HTTPS clients to be authenticated by the Web server using client certificate authentication. The solution must be resilient. Which of the following options would you consider for configuring the web server infrastructure?

Configure ELB with TCP listeners on TCP/443 and place the Web servers behind it (terminate SSL on the instance using the client-side certificate), Configure your Web servers with EIPs, place the Web servers in a Route53 Record Set, and configure health checks against all Web servers (remove the ELB and use the Web servers directly with Route 53)

IAM: Either use frequently rotated passwords or one-time access credentials in addition to username/password

Configure MFA on the root account and for privileged IAM users, Assign IAM users and groups configured with policies granting least privilege access

IAM: preparing for a security assessment of your use of AWS

Configure MFA on the root account and for privileged IAM users, Assign IAM users and groups configured with policies granting least privilege access

DX: You are designing the network infrastructure for an application server in Amazon VPC. Users will access all the application instances from the Internet as well as from an on-premises network The on-premises network is connected to your VPC over an AWS Direct Connect link. How would you design routing to meet the above requirements

Configure a single routing table with a default route via the internet gateway. Propagate specific routes for the on-premises networks via BGP on the AWS Direct Connect customer router. Associate the routing table with all VPC subnets.

VPC: designing a data leak prevention solution, VPC Instances to be able to access software depots and distributions on the Internet for product updates, depots and distributions are accessible via third party CDNs by their URLs and you want to explicitly deny any other outbound connections from your VPC instances to hosts on the Internet

Configure a web proxy server in your VPC and enforce URL-based rules for outbound access and remove default routes. (Security groups and NACLs cannot have URLs in their rules, nor can route tables)

VPC: for business travelers who must be able to connect to it from their hotel rooms, cafes, public Wi-Fi hotspots, and elsewhere on the Internet, without exposing the application itself to the Internet

Configure an SSL VPN solution in a public subnet of your VPC, then install and configure SSL VPN client software on all user computers. Create a private subnet in your VPC and place your application servers in it

IDS/IPS: implement an intrusion detection and prevention system into their deployed VPC. This platform should have the ability to scale to thousands of instances running inside of the VPC.

Configure each host with an agent that collects all network traffic and sends that traffic to the IDS/IPS platform for inspection

VPC: in a VPC, establish separate security zones and enforce network traffic rules across the different zones to limit instance communications...

Configure instances to use pre-set IP addresses, with an IP address range for every security zone. Configure NACLs to explicitly allow or deny communication between the different IP address ranges, as required for interzone communication

IAM: Auto Scaling group of EC2 Instances; the customer's security policy requires that every outbound connection from these instances to any other service within the customer's Virtual Private Cloud must be authenticated using a unique x.509 certificate that contains the specific instance-id, and x.509 certificates must be signed by the customer's Key management service in order to be trusted for authentication

Configure the Auto Scaling group to send an SNS notification of the launch of a new instance to the trusted key management service. Have the Key management service generate a signed certificate and send it directly to the newly launched instance

KMS: that is composed of an Auto Scaling group of EC2 Instances. The customer's security policy requires that every outbound connection from these instances to any other service within the customer's Virtual Private Cloud must be authenticated using a unique x.509 certificate that contains the specific instance-id. In addition, the x.509 certificates must be signed by the customer's Key management service in order to be trusted for authentication

Configure the Auto Scaling group to send an SNS notification of the launch of a new instance to the trusted key management service. Have the Key management service generate a signed certificate and send it directly to the newly launched instance.

VPC: A user wants to access RDS from an EC2 instance using IP addresses. Both RDS and EC2 are in the same region, but different AZs.

Configure the Private IP of the Instance in RDS security group

S3: While testing the new web fonts, Company ABCD recognized the web fonts are being blocked by the browser

Configure the abcdfonts bucket to allow cross-origin requests by creating a CORS configuration
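
A boto3 sketch of such a CORS configuration on the abcdfonts bucket; the allowed origin is a hypothetical site domain:

import boto3

s3 = boto3.client("s3")

s3.put_bucket_cors(
    Bucket="abcdfonts",
    CORSConfiguration={
        "CORSRules": [{
            "AllowedOrigins": ["https://www.example.com"],  # hypothetical site serving the pages
            "AllowedMethods": ["GET"],
            "AllowedHeaders": ["*"],
            "MaxAgeSeconds": 3000,
        }]
    },
)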

VPC: highly available bastion host ...

Configure the bastion instance in an Auto Scaling group Specify the Auto Scaling group to include multiple AZs but have a min-size of 1 and max-size of 1

VPC: configure instances in the same subnet to communicate with each other

Configure the security group itself as the source and allow traffic on all the protocols and ports

ELB: A user has configured a website and launched it using the Apache web server on port 80. The user is using ELB with the EC2 instances for Load Balancing. What should the user do to ensure that the EC2 instances accept requests only from ELB

Configure the security group of EC2, which allows access to the ELB source security group

Authentication: An AWS customer is deploying a web application that is composed of a front-end running on Amazon EC2 and of confidential data that is stored on Amazon S3. The customer's security policy requires that all access operations to this sensitive data must be authenticated and authorized by a centralized access management system that is operated by a separate security team. In addition, the web application team that owns and administers the EC2 web front-end instances is prohibited from having any ability to access the data in a way that circumvents this centralized access management system.

Configure the web application to authenticate end-users against the centralized access management system. Have the web application provision trusted users with STS tokens entitling the download of approved data directly from Amazon S3 (controlled access, and admins cannot access the data as it requires authentication)

VPC: VPC consisting of an Elastic Load Balancer (ELB), web servers, application servers and a database, only accept traffic from predefined customer IP addresses

Configure your web servers to filter traffic based on the ELB's "X-Forwarded-For" header, Configure ELB security groups to allow traffic from your customers' IPs and implicitly deny all other traffic (note: a deny-all doesn't work for stateless NACLs)

IAM: AWS API access credentials

Console passwords, Access keys & Signing certificates

SQS: a batch processing solution using Simple Queue Service (SQS) to set up a message queue between EC2 instances, which are used as batch processors. Cloud Watch monitors the number of Job requests (queued messages) and an Auto Scaling group adds or deletes batch servers automatically based on parameters set in Cloud Watch alarms.

Coordinate the number of EC2 instances with the number of job requests automatically, thus improving cost effectiveness

IAM: set up AWS access for each department

Create IAM groups based on the permission and assign IAM users to the groups

IAM: migrate your Development (Dev) and Test environments to AWS using separate accounts; you will link each account's bill to a Master AWS account using Consolidated Billing; administrators in the Master account need access to stop, delete and/or terminate resources in both the Dev and Test accounts

Create IAM users in the Master account. Create cross-account roles in the Dev and Test accounts that have full Admin permissions and grant the Master account access

Consolidated Billing: You are looking to migrate your Development (Dev) and Test environments to AWS. You have decided to use separate AWS accounts to host each environment. You plan to link each accounts bill to a Master AWS account using Consolidated Billing. To make sure you keep within budget you would like to implement a way for administrators in the Master account to have access to stop, delete and/or terminate resources in both the Dev and Test accounts. Identify which option will allow you to achieve this goal.

Create IAM users in the Master account. Create cross-account roles in the Dev and Test accounts that have full Admin permissions and grant the Master account access.

RDS: Large analytics jobs on the database are likely to cause other applications to not be able to get the query results they need to, before time out. Also, as your data grows, these analytics jobs will start to take more time, increasing the negative effect on the other applications

Create RDS Read-Replicas for the analytics work

VPC: migration of an ecommerce platform to a 3-tier VPC - IGW ig-2d8bc445; NACL acl-2080c448; Subnets: web servers subnet-258bc44d, application servers subnet-248DC44c, database servers subnet-9189c6f9; Route Tables: rtb-2i8bc449, rtb-238bc44b; Associations: subnet-258bc44d: rtb-2i8bc449, subnet-248DC44c: rtb-238bc44b, subnet-9189c6f9: rtb-238bc44b. Web servers must have direct access to the internet; application and database servers cannot have direct access to the internet. Choose the configuration that allows you to remotely administer your application and database servers, as well as allow these servers to retrieve updates from the Internet...

Create a Bastion and NAT instance in subnet-258bc44d and add a route from rtb-238bc44b to the NAT instance. (Bastion and NAT should be in the public subnet. As Web Server has direct access to Internet, the subnet subnet-258bc44d should be public and Route rtb-2i8bc449 pointing to IGW. Route rtb-238bc44b for private subnets should point to NAT for outgoing internet access)

CF: deploy an AWS stack in a repeatable manner across multiple environments. You have selected CloudFormation as the right tool to accomplish this, but have found that there is a resource type you need to create and model, but is unsupported by CloudFormation

Create a CloudFormation Custom Resource Type by implementing create, update, and delete functionality, either by subscribing a Custom Resource Provider to an SNS topic, or by implementing the logic in AWS Lambda

CF: Your application consists of 10% writes and 90% reads. You currently service all requests through a Route53 Alias Record directed towards an AWS ELB, which sits in front of an EC2 Auto Scaling Group. Your system is getting very expensive when there are large traffic spikes during certain news events, during which many more people request to read similar data all at the same time.

Create a CloudFront Distribution and direct Route53 to the Distribution. Use the ELB as an Origin and specify Cache Behaviors to proxy cache requests, which can be served late. (CloudFront can serve request from cache and multiple cache behavior can be defined based on rules for a given URL pattern based on file extensions, file names, or any portion of a URL. Each cache behavior can include the CloudFront configuration values: origin server name, viewer connection protocol, minimum expiration period, query string parameters, cookies, and trusted signers for private content.)

CF: users upload two million blog entries a month. The average blog entry size is 200 KB. The access rate to blog entries drops to negligible 6 months after publication and users rarely access a blog entry 1 year after publication. Additionally, blog entries have a high update rate during the first 3 months following publication; this drops to no updates after 6 months

Create a CloudFront distribution with S3 access restricted only to the CloudFront identity and partition the blog entry's location in S3 according to the month it was uploaded to be used with CloudFront behaviors

Migration: You must architect the migration of a web application to AWS. The application consists of Linux web servers running a custom web server. You are required to save the logs generated from the application to a durable location.

Create a Dockerfile for the application. Create an AWS Elastic Beanstalk application using the Docker platform and the Dockerfile. Enable logging in the Docker configuration to automatically publish the application logs. Enable log file rotation to Amazon S3. (Use Docker configuration with awslogs and EB with Docker), Use VM Import/Export to import a virtual machine image of the server into AWS as an AMI. Create an Amazon Elastic Compute Cloud (EC2) instance from the AMI, and install and configure the Amazon CloudWatch Logs agent. Create a new AMI from the instance. Create an AWS Elastic Beanstalk application using the AMI platform and the new AMI. (Use VM Import/Export to create the AMI and the CloudWatch Logs agent to log)

EBS: migration of a web application to AWS. The application consists of Linux web servers running a custom web server. You are required to save the logs generated from the application to a durable location.

Create a Dockerfile for the application. Create an AWS Elastic Beanstalk application using the Docker platform and the Dockerfile. Enable logging in the Docker configuration to automatically publish the application logs. Enable log file rotation to Amazon S3. (Use Docker configuration with awslogs and EB with Docker), Use VM Import/Export to import a virtual machine image of the server into AWS as an AMI. Create an Amazon Elastic Compute Cloud (EC2) instance from the AMI, and install and configure the Amazon CloudWatch Logs agent. Create a new AMI from the instance. Create an AWS Elastic Beanstalk application using the AMI platform and the new AMI. (Use VM Import/Export to create the AMI and the CloudWatch Logs agent to log)

IAM: each of 10 IAM users, all in the same group, needs access to a separate DynamoDB table

Create a DynamoDB table with the same name as the IAM user name and define the policy rule which grants access based on the DynamoDB ARN using a variable
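
A sketch of such a policy attached to the group via boto3; the group name, account ID, and region are hypothetical, and ${aws:username} is resolved per caller at evaluation time:

import boto3
import json

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "dynamodb:*",
        # Each user only reaches the table whose name matches their IAM user name
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/${aws:username}",
    }],
}

iam.put_group_policy(
    GroupName="developers",                 # hypothetical group
    PolicyName="per-user-dynamodb-table",
    PolicyDocument=json.dumps(policy),
)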

53: Your API requires the ability to stay online during AWS regional failures. Your API does not store any state, it only aggregates data from other sources - you do not have a database. What is a simple but effective way to achieve this uptime goal?

Create a Route53 Latency Based Routing Record with Failover and point it to two identical deployments of your stateless API in two different regions. Make sure both regions use Auto Scaling Groups behind ELBs.

EC2: organization wants to implement two separate SSL certificates for the separate modules, although it is already using a VPC

Create a VPC instance, which will have multiple network interfaces with multiple elastic IP addresses

S3: prevent an IP address block from accessing public objects in an S3 bucket

Create a bucket policy and apply it to the bucket
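
A boto3 sketch of such a bucket policy; the bucket name and blocked CIDR range are hypothetical:

import boto3
import json

s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyBlockedRange",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-public-bucket/*",              # hypothetical bucket
        "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}},  # range to block
    }],
}

s3.put_bucket_policy(Bucket="example-public-bucket", Policy=json.dumps(policy))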

CT: make sure that you know what all users of your AWS account are doing to change resources at all times. She wants a report of who is doing what over time, reported to her once per week, for as broad a resource type group as possible

Create a global AWS CloudTrail Trail. Configure a script to aggregate the log data delivered to S3 once per week and deliver this to the CTO.

S3: object is stored in the Standard S3 storage class and you want to move it to Glacier

Create a lifecycle policy that will migrate it after a minimum of 30 days. (Any object uploaded to S3 must first be placed into either the Standard, Reduced Redundancy, or Infrequent Access storage class. Once in S3 the only way to move the object to glacier is through a lifecycle policy)
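
A boto3 sketch of such a lifecycle rule (bucket name hypothetical) transitioning objects to Glacier after 30 days:

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-archive-bucket",        # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "to-glacier-after-30-days",
            "Filter": {"Prefix": ""},       # apply to every object in the bucket
            "Status": "Enabled",
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
        }]
    },
)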

CT: a reliable and durable logging solution to track changes made to your EC2, IAM and RDS resources. The solution must ensure the integrity and confidentiality of your log data. Which of these solutions would you recommend

Create a new CloudTrail trail with one new S3 bucket to store the logs and with the global services option selected. Use IAM roles, S3 bucket policies and Multi Factor Authentication (MFA) Delete on the S3 bucket that stores your logs. (Single New bucket with global services option for IAM and MFA delete for confidentiality)

EC2: You launch an Amazon EC2 instance without an assigned AWS identity and Access Management (IAM) role. Later, you decide that the instance should be running with an IAM role

Create a new IAM role with the same permissions as an existing IAM role, and assign it to the running instance. (As per AWS latest enhancement, this is possible now)

AS: a launch configuration for Auto Scaling where CloudWatch detailed monitoring is disabled

Create a new Launch Configuration with detailed monitoring enabled and update the Auto Scaling group

VPC: A company has an AWS account that contains three VPCs (Dev, Test, and Prod) in the same region. Test is peered to both Prod and Dev. All VPCs have non-overlapping CIDR blocks.

Create a new peering connection Between Prod and Dev along with appropriate routes

OpsWorks: You decide to write a script to be run as soon as a new Amazon Linux AMI is released.

Create a new stack and layers with identical configuration, add instances with the latest Amazon Linux AMI specified as a custom AMI to the new layer, switch DNS to the new stack, and tear down the old stack. (Blue-Green Deployment), Add new instances with the latest Amazon Linux AMI specified as a custom AMI to all AWS OpsWorks layers of your stack, and terminate the old ones.

OpsWorks: migration of a highly trafficked node.js application to AWS. In order to comply with organizational standards Chef recipes must be used to configure the application servers that host this application and to support application lifecycle events

Create a new stack within OpsWorks, add the appropriate layers to the stack, and deploy the application

DX: implementing AWS Direct Connect. You intend to use AWS public service end points such as Amazon S3, across the AWS Direct Connect link. You want other Internet traffic to use your existing link to an Internet Service Provider. What is the correct way to configure AWS Direct Connect for access to services such as Amazon S3?

Create a public interface on your AWS Direct Connect link. Redistribute BGP routes into your existing routing infrastructure and advertise specific routes for your network to AWS

IAM: creating a new IAM user, which of the following must be done before they can successfully make API calls

Create a set of Access Keys for the user

RDS: user will not use the DB for the next 3 months

Create a snapshot of RDS to launch in the future and terminate the instance now

EBS: How can an EBS volume that is currently attached to an EC2 instance be migrated from one Availability Zone to another?

Create a snapshot of the volume, and create a new volume from the snapshot in the other AZ
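
A boto3 sketch of that snapshot-and-recreate flow, with a hypothetical volume ID and target AZ:

import boto3

ec2 = boto3.client("ec2")

# Snapshot the attached volume, wait for it, then recreate the volume in another AZ
snap = ec2.create_snapshot(VolumeId="vol-0123456789abcdef0")   # hypothetical volume
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

new_volume = ec2.create_volume(
    SnapshotId=snap["SnapshotId"],
    AvailabilityZone="us-east-1b",                             # target AZ
)
print(new_volume["VolumeId"])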

53: A customer is hosting their company website on a cluster of web servers that are behind a public-facing load balancer. The customer also uses Amazon Route 53 to manage their public DNS. How should the customer configure the DNS zone apex record to point to the load balancer

Create an A record aliased to the load balancer DNS name
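
A boto3 sketch of creating that alias A record; the hosted zone ID, ELB DNS name, and the ELB's canonical hosted zone ID are placeholder values:

import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z1111111111111",                  # placeholder: your public hosted zone
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "example.com",              # zone apex
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "Z2222222222222",   # placeholder: the ELB's hosted zone ID
                    "DNSName": "my-elb-1234567890.us-east-1.elb.amazonaws.com",  # placeholder
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)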

EC2: A user has launched a large EBS backed EC2 instance in the US-East-1a region. The user wants to achieve Disaster Recovery (DR) for that instance by creating another small instance in Europe. How can the user achieve DR

Create an AMI of the instance and copy the AMI to the EU region. Then launch the instance from the EU AMI

EC2: A user has launched two EBS backed EC2 instances in the US-East-1a region, how to change the zone of one of the instances.

Create an AMI of the running instance and launch the instance in a separate AZ

IAM: an application deployed on an EC2 instance to write data to a DynamoDB table

Create an IAM Role that allows write access to the DynamoDB table, Add an IAM Role to a running EC2 instance

EC2: items are required to allow an application deployed on an EC2 instance to write data to a DynamoDB table...

Create an IAM Role that allows write access to the DynamoDB table, Add an IAM Role to a running EC2 instance. (As per AWS latest enhancement, this is possible now)

IAM: each IAM user accesses the IAM console only within the organization and not from outside

Create an IAM policy with a condition which denies access when the IP address range is not from the organization

EC2: an application running on an EC2 Instance, which will allow users to download files from a private S3 bucket using a pre-signed URL. Before generating the URL the application should verify the existence of the file in S3

Create an IAM role for EC2 that allows list access to objects in the S3 bucket. Launch the instance with the role, and retrieve the role's credentials from the EC2 Instance metadata

IAM: Before generating the URL the application should verify the existence of the file in S3

Create an IAM role for EC2 that allows list access to objects in the S3 bucket. Launch the instance with the role, and retrieve the role's credentials from the EC2 Instance metadata
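
A boto3 sketch of that flow running on the instance (credentials come from the instance role automatically); the bucket and key are hypothetical:

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")                       # picks up the role credentials from instance metadata
bucket, key = "private-downloads", "reports/q1.pdf"   # hypothetical object

try:
    s3.head_object(Bucket=bucket, Key=key)    # verify the object exists before handing out a URL
except ClientError:
    raise SystemExit("object not found")

url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": bucket, "Key": key},
    ExpiresIn=3600,                           # pre-signed URL valid for one hour
)
print(url)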

IAM: third-party SaaS application needs access

Create an IAM role for cross-account access that allows the SaaS provider's account to assume the role, and assign it a policy that allows only the actions required by the SaaS application

IAM: an Auto Scaling group whose Instances need to insert a custom metric into CloudWatch

Create an IAM role with the PutMetricData permission and modify the Auto Scaling launch configuration to launch instances with that role

IAM: CloudFormation to deploy a three tier web application, DynamoDB for storage, allow the application instance access to the DynamoDB tables without exposing API credentials

Create an Identity and Access Management Role that has the required permissions to read and write from the required DynamoDB table and reference the Role in the instance profile property of the application instance

CF: to distribute confidential training videos to employees

Create an Origin Access Identity (OAI) for CloudFront and grant access to the objects in your S3 bucket to that OAI.

RDS: running a batch analysis every hour on their main transactional DB running on an RDS MySQL instance to populate their central Data Warehouse running on Redshift. During the execution of the batch their transactional applications are very slow. When the batch completes they need to update the top management dashboard with the new data. The dashboard is produced by another system running on-premises that is currently started when a manually-sent email notifies that an update is required

Create an RDS Read Replica for the batch analysis and use SNS to notify the on-premises system to update the dashboard

SNS: running a batch analysis every hour on their main transactional DB running on an RDS MySQL instance to populate their central Data Warehouse running on Redshift. During the execution of the batch their transactional applications are very slow. When the batch completes they need to update the top management dashboard with the new data. The dashboard is produced by another system running on-premises that is currently started when a manually-sent email notifies that an update is required The on-premises system cannot be modified because is managed by another team.

Create an RDS Read Replica for the batch analysis and use SNS to notify the on-premises system to update the dashboard

EBS: An existing application stores sensitive information on a non-boot Amazon EBS data volume attached to an Amazon Elastic Compute Cloud instance. Which of the following approaches would protect the sensitive data on an Amazon EBS volume?

Create and mount a new, encrypted Amazon EBS volume. Move the data to the new volume. Delete the old Amazon EBS volume.

OpsWorks: with an Elastic Load Balancer, an Auto-Scaling group of Java/Tomcat application-servers, and DynamoDB as data store, a new chat feature has been implemented in node.js and waits to be integrated in the architecture

Create one AWS OpsWorks stack, create two AWS OpsWorks layers, create one custom recipe (single environment stack; two layers for the Java and node.js applications using built-in recipes, and a custom recipe only for DynamoDB connectivity, as the other configuration is handled by the built-in recipes)

RDS: operating at 10% writes and 90% reads, based on your logging

Create read replicas for RDS since the load is mostly reads

Consolidated Billing: An organization has 10 departments. The organization wants to track the AWS usage of each department. Which of the below mentioned options meets the requirement?

Create separate accounts for each department, but use consolidated billing for payment and tracking

EBS: QA team lead points out that you need to roll a sanitized set of production data into your environment on a nightly basis

Create your RDS instance separately and pass its DNS name to your app's DB connection string as an environment variable. Create a security group for client machines and add it as a valid source for DB traffic to the security group of the RDS instance itself. (Security group allows instances to access the RDS with new instances launched without any changes)

CT: automate many routine systems administrator backup and recovery activities. Your current plan is to leverage AWS-managed solutions as much as possible and automate the rest with the AWS CLI and scripts

Creating daily EBS snapshots with a monthly rotation of snapshots

DX: a 50-Mbps dedicated and private connection to their VPC

AWS Direct Connect (DX)

VPC: establishing IPSec tunnels over the internet, using VPN gateways and terminating the IPsec tunnels on AWS-supported customer gateways

Data encryption across the Internet but not end-to-end.

EC2: When an EC2 instance that is backed by an S3-based AMI Is terminated, what happens to the data on the root volume?

Data is automatically deleted

EC2: When an EC2 EBS-backed instance is stopped, what happens to the data on any ephemeral store volumes?

Data will be deleted and will no longer be accessible

Tags: An organization has launched 5 instances: 2 for production and 3 for testing. The organization wants one particular group of IAM users to access only the test instances and not the production ones

Define the tags on the test and production servers and add a condition to the IAM policy which allows access to specific tags (possible using ResourceTag condition)

DDB: You are inserting 1000 new items every second in a DynamoDB table. Once an hour these items are analyzed and then are no longer needed. You need to minimize provisioned throughput, storage, and API calls. Given these requirements, what is the most efficient way to manage these Items after the analysis

Delete the table and create a new table per hour

EBS: EBS volumes that are created and attached to an instance at launch are deleted when that instance is terminated. You can modify this behavior by changing the value of the flag_____ to false when you launch the instance

DeleteOnTermination

ELB: a web application running on six Amazon EC2 instances, consuming about 45% of resources on each instance. You are using auto-scaling to make sure that six instances are running at all times. The number of requests this application processes is consistent and does not experience spikes. The application is critical to your business and you want high availability at all times. You want the load to be distributed evenly between all instances. You also want to use the same Amazon Machine Image (AMI) for all instances

Deploy 3 EC2 instances in one availability zone and 3 in another availability zone and use Amazon Elastic Load Balancer

EC: website is running on EC2 instances deployed across multiple Availability Zones with a Multi-AZ RDS MySQL Extra Large DB Instance. The site performs a high number of small reads and writes per second and relies on an eventual consistency model, read contention

Deploy ElastiCache in-memory cache running in each availability zone, Add an RDS MySQL read replica in each availability zone

RDS: a major public announcement of a social media site on AWS. The website is running on EC2 instances deployed across multiple Availability Zones with a Multi-AZ RDS MySQL Extra Large DB Instance. The site performs a high number of small reads and writes per second and relies on an eventual consistency model, read contention

Deploy ElastiCache in-memory cache running in each availability zone, Add an RDS MySQL read replica in each availability zone

VPC: running a multi-tier web application farm (VPC) not connected to their corporate network, presently connecting to the VPC over the Internet to manage all of their Amazon EC2 instances running in both the public and private subnets using bastion-security-group with Microsoft Remote Desktop Protocol (RDP), but wants to further limit administrative access to all of the instances in the VPC...

Deploy a Windows Bastion host with an auto-assigned Public IP address in the public subnet, and allow RDP access to the bastion from only the corporate public IP addresses

AS: critical two tier web app currently deployed in two availability zones in a single region, using Elastic Load Balancing and Auto Scaling, synchronous replication (very low latency connectivity) at the database layer, application needs to remain fully available even if one application Availability Zone goes off-line, and Auto scaling cannot launch new instances in the remaining Availability Zones

Deploy in three AZs, with Auto Scaling minimum set to handle 50% peak load per zone

VPC: a VPC with CIDR 20.0.0.0/16 using the wizard, a public subnet CIDR (20.0.0.0/24) and VPN only subnets CIDR (20.0.1.0/24) along with the VPN gateway (vgw-12345) to connect to the user's data centre, data centre has CIDR 172.28.0.0/12 and a NAT instance (i-123456) to allow traffic to the internet from the VPN subnet

Destination: 0.0.0.0/0 and Target: i-123456, Destination: 172.28.0.0/12 and Target: vgw-12345, Destination: 20.0.0.0/16 and Target: local (not Destination: 20.0.1.0/24 and Target: i-123456)

VPC: a VPC with public and private subnets using the VPC wizard, CIDR 20.0.0.0/16, private subnet CIDR 20.0.0.0/24, NAT instance ID is i-a12345, entries required in the main route table attached with the private subnet to allow instances to connect with the internet

Destination: 0.0.0.0/0 and Target: i-a12345

VPC: a VPC with CIDR 20.0.0.0/16 using the wizard, public subnet CIDR (20.0.0.0/24) and VPN only subnets CIDR (20.0.1.0/24) along with the VPN gateway (vgw-12345) to connect to the user's data centre

Destination: 0.0.0.0/0 and Target: vgw-12345 is valid for main route table

CF: outages that occur because of accidental inconsistencies between Staging and Production, which sometimes cause unexpected behaviors in Production even when Staging tests pass, using Docker to get high consistency between Staging and Production for the application environment on your EC2 instances, there are many service components you may use beyond EC2 virtual machines

Develop models of your entire cloud system in CloudFormation. Use this model in Staging and Production to achieve greater parity. (Only CloudFormation's JSON Templates allow declarative version control of repeatedly deployable models of entire AWS clouds)

DX: operational concern should drive an organization to consider switching from an Internet-based VPN connection to AWS Direct Connect

Direct Connect provides greater bandwidth than an Internet-based VPN connection.

SWF: A collection of related Workflows

Domain

DDB: shared data store with durability and low latency

DynamoDB

RDS: run a database in an Amazon instance, recommended Amazon storage option

EBS

EBS: launched an EC2 instance with four (4) 500 GB EBS Provisioned IOPS volumes attached. The EC2 Instance is EBS-Optimized and supports 500 Mbps throughput between EC2 and EBS. The four EBS volumes are configured as a single RAID 0 device, and each Provisioned IOPS volume is provisioned with 4,000 IOPS (4,000 16KB reads or writes) for a total of 16,000 random IOPS on the instance. The EC2 Instance initially delivers the expected 16,000 IOPS random read and write performance. Sometime later, in order to increase the total random I/O performance of the instance, you add an additional two 500 GB EBS Provisioned IOPS volumes to the RAID. Each volume is provisioned to 4,000 IOPS like the original four, for a total of 24,000 IOPS on the EC2 instance. Monitoring shows that the EC2 instance CPU utilization increased from 50% to 70%, but the total random IOPS measured at the instance level does not increase at all. What is the problem and a valid solution?

EBS-Optimized throughput limits the total IOPS that can be utilized; use an EBS-Optimized instance that provides larger throughput. (EC2 instance types have a limit on max throughput, and a larger instance type would be required to deliver 24,000 IOPS)

EC2: when an EBS volume is attached to a Windows instance, it may show up as any drive letter on the instance. You can change the settings of the _____ Service to set the drive letters of the EBS volumes per your specifications.

EC2Config Service

ELB: A customer has an online store that uses the cookie-based sessions to track logged-in customers. It is deployed on AWS using ELB and autoscaling. When the load increases, Auto scaling automatically launches new web servers, but the load on the web servers do not decrease. This causes the customers a poor experience.

ELB is configured to send requests with previously established sessions

ELB: A customer has a web application that uses cookie based sessions to track logged in users. It is deployed on AWS using ELB and Auto Scaling. The customer observes that when load increases Auto Scaling launches new Instances but the load on the existing Instances does not decrease, causing all existing users to have a sluggish experience.

ELB's behavior when sticky sessions are enabled causes ELB to send requests in the same session to the same backend, the web application uses long polling such as comet or websockets. Thereby keeping a connection open to a web server for a long time

Services: services provide root access

Elastic Beanstalk, EC2, OpsWorks

ELB & AS: leverage to enable an elastic and scalable web tier

Elastic Load Balancing, Amazon EC2, and Auto Scaling

CF: serving on-demand training videos to your workforce. Videos are uploaded monthly in high resolution MP4 format. Your workforce is distributed globally often on the move and using company-provided tablets that require the HTTP Live Streaming (HLS) protocol to watch a video

Elastic Transcoder to transcode original high-resolution MP4 videos to HLS. S3 to host videos with Lifecycle Management to archive original files to Glacier after a few days. CloudFront to serve HLS transcoded videos from S3

ET: Videos are uploaded monthly in high resolution MP4 format. Your workforce is distributed globally often on the move and using company-provided tablets that require the HTTP Live Streaming (HLS) protocol to watch a video.

Elastic Transcoder to transcode original high-resolution MP4 videos to HLS. S3 to host videos with Lifecycle Management to archive original files to Glacier after a few days. CloudFront to serve HLS transcoded videos from S3

IAM: Admin control of resources

Enable IAM cross-account access for all corporate IT administrators in each child account, Use AWS Consolidated Billing to link the divisions' accounts to a parent corporate account

Consolidated Billing: A customer needs corporate IT governance and cost oversight of all AWS resources consumed by its divisions. The divisions want to maintain administrative control of the discrete AWS resources they consume and keep those resources separate from the resources of other divisions.

Enable IAM cross-account access for all corporate IT administrators in each child account (provides IT governance), use AWS Consolidated Billing to link the divisions' accounts to a parent corporate account (will provide cost oversight)

CF: to deliver high-definition raw video for preproduction and dubbing to customers all around the world, and they require the ability to limit downloads per customer and video file to a configurable number

Enable URL parameter forwarding, let the authentication backend count the number of downloads per customer in RDS, and return the content S3 URL unless the download limit is reached; configure a list of trusted signers, let the authentication backend count the number of download requests per customer in RDS, and return a dynamically signed URL unless the download limit is reached.

ELB: customer needs to capture all client connection information from their load balancer every five minutes

Enable access logs on the load balancer

DX: A customer has established an AWS Direct Connect connection to AWS. The link is up and routes are being advertised from the customer's end, however the customer is unable to connect from EC2 instances inside its VPC to servers residing in its datacenter.

Enable route propagation to the Virtual Private Gateway (VGW), Modify the instances' VPC subnet route table by adding a route back to the customer's on-premises environment

S3: A customer wants to track access to their Amazon Simple Storage Service (S3) buckets and also use this information for their internal security and access audits

Enable server access logging for all required Amazon S3 buckets

S3: features that help prevent and recover from accidental data loss

Enable versioning on your S3 buckets, configure your S3 buckets with MFA Delete

EC2: to ensure the highest network performance (packets per second), lowest latency, and lowest jitter

Enhanced networking (provides higher packet-per-second performance, lower latency, and lower jitter), Amazon VPC (enhanced networking works only in a VPC)

RDS: The easiest way to find out if an error occurred is to look for an __________ node in the response from the Amazon RDS API.

Error

DS: heavily dependent on low latency connectivity to LDAP for authentication. Your security policy requires minimal changes to the company's existing application user management processes.

Establish a VPN connection between your data center and AWS, create an LDAP replica on AWS, and configure your application to use the LDAP replica for authentication (RODC gives low latency with minimal setup changes)

DX: A company has configured and peered two VPCs: VPC-1 and VPC-2. VPC-1 contains only private subnets, and VPC-2 contains only public subnets. The company uses a single AWS Direct Connect connection and private virtual interface to connect their on-premises network with VPC-1. Which two methods increase the fault tolerance of the connection to VPC-1?

Establish a hardware VPN over the internet between VPC-1 and the on-premises network, • Establish a new AWS Direct Connect connection and private virtual interface in the same AWS region as VPC-1

SQS: You are building an online store on AWS that uses SQS to process your customer orders. Your backend system needs those messages in the same sequence the customer orders have been put in.

FIFO
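
As a minimal sketch (assuming boto3 and a hypothetical queue named orders.fifo), a FIFO queue preserves ordering for messages that share a MessageGroupId:

    import boto3

    sqs = boto3.client("sqs", region_name="us-east-1")

    # FIFO queue names must end in ".fifo"; content-based deduplication lets
    # SQS derive the deduplication ID from the message body.
    queue = sqs.create_queue(
        QueueName="orders.fifo",
        Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "true"},
    )

    # Messages with the same MessageGroupId are delivered strictly in order.
    for order_id in ("order-1", "order-2", "order-3"):
        sqs.send_message(
            QueueUrl=queue["QueueUrl"],
            MessageBody=order_id,
            MessageGroupId="customer-42",
        )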

ELB: migrating a legacy client-server application, specific DNS domain, a 2-tier architecture, multiple application servers and a database server, remote clients use TCP to connect to the application servers. The application servers need to know the IP address of the clients in order to function properly and are currently taking that information from the TCP socket. A Multi-AZ RDS MySQL instance will be used for the database.

File a change request to implement Proxy Protocol support in the application. Use an ELB with a TCP Listener and Proxy Protocol enabled to distribute load on two application servers in different AZs. (ELB with TCP listener and proxy protocol will allow IP to be passed )

Glacier: Each drug trial test may generate up to several thousands of files, with compressed file sizes ranging from 1 byte to 100MB. Once archived, data rarely needs to be restored, and on the rare occasion when restoration is needed, the company has 24 hours to restore specific files that match certain metadata. Searches must be possible by numeric file ID, drug name, participant names, date ranges, and other metadata. Which is the most cost-effective architectural approach that can meet the requirements

First, compress and then concatenate all files for a completed drug trial test into a single Amazon Glacier archive. Store the associated byte ranges for the compressed files along with other search metadata in an Amazon RDS database with regular snapshotting. When restoring data, query the database for files that match the search criteria, and create restored files from the retrieved byte ranges.

CF: Intrinsic Functions

Fn::Base64, Fn::And, Fn::Equals, Fn::If, Fn::Not, Fn::Or, Fn::FindInMap, Fn::GetAtt, Fn::GetAZs, Fn::Join, Fn::Select
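
For illustration only (resource and mapping names are hypothetical), a template fragment expressed as a Python dict that combines several of these functions: Fn::FindInMap to pick an AMI per region, Fn::GetAZs with Fn::Select to pick an Availability Zone, and Fn::If driven by a condition:

    import json

    template = {
        "Parameters": {"Environment": {"Type": "String"}},
        "Mappings": {
            "RegionAMI": {
                "us-east-1": {"ami": "ami-11111111"},
                "us-west-2": {"ami": "ami-22222222"},
            }
        },
        "Conditions": {
            "IsProd": {"Fn::Equals": [{"Ref": "Environment"}, "production"]}
        },
        "Resources": {
            "WebServer": {
                "Type": "AWS::EC2::Instance",
                "Properties": {
                    # Region-specific AMI lookup
                    "ImageId": {"Fn::FindInMap": ["RegionAMI", {"Ref": "AWS::Region"}, "ami"]},
                    # Bigger instance type only in production
                    "InstanceType": {"Fn::If": ["IsProd", "m4.large", "t2.micro"]},
                    # First AZ of the current region
                    "AvailabilityZone": {"Fn::Select": ["0", {"Fn::GetAZs": ""}]},
                },
            }
        },
    }

    print(json.dumps(template, indent=2))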

VPC: a user created a VPC with public and private subnets; the private subnet uses CIDR 20.0.1.0/24 and the public subnet uses CIDR 20.0.0.0/24. A web server is hosted in the public subnet (port 80) and a DB server in the private subnet (port 3306), with a NAT instance. For the NAT instance's security group use

For Inbound allow Source: 20.0.1.0/24 on port 80, For Outbound allow Destination: 0.0.0.0/0 on port 80, For Outbound allow Destination: 0.0.0.0/0 on port 443 (not Inbound allow Source: 20.0.0.0/24 on port 80)

VPC: CIDR 20.0.0.0/16 (public and private subnets), to host a web server in the public subnet on port 80 (CIDR 20.0.0.0/24) and a DB server in the private subnet on port 3306 (CIDR 20.0.1.0/24); to configure the security group of the NAT instance...

For Outbound allow Destination: 0.0.0.0/0 on port 80 & 443, For Inbound allow Source: 20.0.1.0/24 on port 80 (allow inbound HTTP traffic from servers in the private subnet)

ELB: AWS Elastic Load Balancer supports SSL termination

For all regions

RDS: a multi-regional deployment, a 3-tier architecture and currently uses MySQL 5.6 for data persistence, each region has deployed its own database, In the HQ region you run an hourly batch process reading data from every region to compute cross-regional reports that are sent by email to all offices this batch process must be completed as fast as possible to quickly optimize logistics

For each regional deployment, use RDS MySQL with a master in the region and a read replica in the HQ region

RDS: In the Amazon CloudWatch, which metric should I be checking to ensure that your DB Instance has enough free storage space?

FreeStorageSpace

SQS: How long can you keep your Amazon SQS messages in Amazon SQS queues?

From 60 secs up to 2 weeks

DDB: store each user's highest score for each game, with many games, all of which have relatively similar usage levels and numbers of players. You need to be able to look up the highest score for any game.

GameID as the hash key, HighestScore as the range key. (hash (partition) key should be the GameID, and there should be a range key for ordering HighestScore. Refer link)
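
A minimal boto3 sketch under that answer's assumptions (table and attribute names are hypothetical): GameID as the hash (partition) key and HighestScore as the range (sort) key, with a descending query returning the top score for one game:

    import boto3

    dynamodb = boto3.client("dynamodb", region_name="us-east-1")

    # Scores for one game are stored sorted by the range key (HighestScore).
    dynamodb.create_table(
        TableName="GameScores",
        AttributeDefinitions=[
            {"AttributeName": "GameID", "AttributeType": "S"},
            {"AttributeName": "HighestScore", "AttributeType": "N"},
        ],
        KeySchema=[
            {"AttributeName": "GameID", "KeyType": "HASH"},
            {"AttributeName": "HighestScore", "KeyType": "RANGE"},
        ],
        ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
    )

    # Highest score for a game: query the partition, sort descending, take one item.
    top = dynamodb.query(
        TableName="GameScores",
        KeyConditionExpression="GameID = :g",
        ExpressionAttributeValues={":g": {"S": "pacman"}},
        ScanIndexForward=False,  # descending by the range key
        Limit=1,
    )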

SG: A customer has a single 3-TB volume on-premises that is used to hold a large repository of images and print layout files. This repository is growing at 500 GB a year and must be presented as a single logical volume. The customer is constrained by their local storage capacity and wants an off-site backup of this data, while maintaining low-latency access to their frequently accessed data

Gateway-Cached volumes with snapshots scheduled to Amazon S3

EC2: instance types are available as Amazon EBS-backed

General purpose (lowest cost) T2, Compute-optimized C4

AS: determine total number of instances in an Auto Scaling group including pending, terminating and running instances

GroupTotalInstances

SNS: valid SNS delivery transports

HTTP, SMS

EC2: database requires random read IO disk performance of up to 100,000 IOPS at 4KB block size per node

High I/O Quadruple Extra Large (hi1.4xlarge) using instance storage

ELB: A user has configured ELB with a TCP listener at ELB as well as on the back-end instances. The user wants to enable a proxy protocol to capture the source and destination IP information in the header.

If the end user is requesting behind a proxy server then the user should not enable a proxy protocol on ELB (double proxy)

RDS: When should I choose Provisioned IOPS over Standard RDS storage

If you use production online transaction processing (OLTP) workloads

IDS/IPS:

Implement IDS/IPS agents on each instance running in the VPC, implement a reverse proxy layer in front of web servers and configure IDS/IPS agents on each reverse proxy server

EBS: encryption of sensitive data at rest.

Implement third party volume encryption tools, Encrypt data inside your applications before storing it on EBS, Encrypt data using native data encryption drivers at the file system level

EC2: web front-end utilizes an Elastic Load Balancer and Auto scaling across 3 availability zones. During peak load web servers operate at 90% utilization and leverage a combination of heavy utilization reserved instances for steady state load and on-demand and spot instances for peak load, create a cost effective architecture to allow the application to recover quickly in the event that an availability zone is unavailable during peak load

Increase Auto Scaling capacity and scaling thresholds to allow the web front-end to cost-effectively scale across all Availability Zones to lower aggregate utilization levels, so that an Availability Zone can fail during peak load without affecting the application's availability. (Ideal for HA to reduce and distribute load)

AS: A sys admin is maintaining an application on AWS. The application is installed on EC2 and user has configured ELB and Auto Scaling. Considering future load increase, the user is planning to launch new servers proactively so that they get registered with ELB

Increase the desired capacity of the Auto Scaling group

EC: using ElastiCache Memcached to store session state and cache database queries in your infrastructure. You notice in CloudWatch that Evictions and Get Misses are both very high

Increase the number of nodes in your cluster, Increase the size of the nodes in the cluster

DDB: at least 100K sensors, which needs to be supported by the backend. You also need to store sensor data for at least two years to be able to compare year over year Improvements. To secure funding, you have to make sure that the platform meets these requirements and leaves room for further scaling

Ingest data into a DynamoDB table and move old data to a Redshift cluster (Handle 10K IOPS ingestion and store data into Redshift for analysis)

EMR: a pilot deployment of around 100 sensors for 3 months. Each sensor uploads 1KB of sensor data every minute to a backend hosted on AWS. During the pilot, you measured a peak of 10 IOPS on the database, and you stored an average of 3GB of sensor data per month in the database. The current deployment consists of a load-balanced, auto-scaled ingestion layer using EC2 instances and a PostgreSQL RDS database with 500GB standard storage. The pilot is considered a success and your CEO has managed to get the attention of some potential investors. The business plan requires a deployment of at least 100K sensors, which needs to be supported by the backend. You also need to store sensor data for at least two years

Ingest data into a DynamoDB table and move old data to a Redshift cluster (Handle 10K IOPS ingestion and store data into Redshift for analysis)

RDS: Read Replicas require a transactional storage engine and are only supported for the _________ storage engine

InnoDB

ELB: A user has launched an ELB which has 5 instances registered with it. The user deletes the ELB by mistake. What will happen to the instances?

Instances will keep running

EC2: application requires disk performance of at least 100,000 IOPS in addition; the storage layer must be able to survive the loss of an individual disk, EC2 instance, or Availability Zone without any data loss. The volume you provide must have a capacity of at least 3TB...

Instantiate an i2.8xlarge instance in us-east-1a. Create a RAID 0 volume using the four 800GB SSD ephemeral disks provided with the instance. Configure synchronous block-level replication to an identically configured instance in us-east-1b (i.e., another AZ).

SG: What does the AWS Storage Gateway provide

It allows to integrate on-premises IT environments with Cloud Storage

VPC: VPC with CIDR 20.0.0.0/16. created one subnet with CIDR 20.0.0.0/16 by mistake, trying to create another subnet of CIDR 20.0.0.1/24

It is not possible to create a second subnet as one subnet with the same CIDR as the VPC has been created

VPC: a VPC with CIDR 20.0.0.0/16, a private subnet (20.0.1.0/24) and a public subnet (20.0.0.0/24), data centre has CIDR of 20.0.54.0/24 and 20.1.0.0/24, private subnet wants to communicate with the data centre

It will allow traffic with data centre on CIDR 20.1.0.0/24 but does not allow on 20.0.54.0/24 (CIDR block is overlapping)

AS: an Auto Scaling group with default configurations from CLI. The user wants to setup the CloudWatch alarm on the EC2 instances, which are launched by the Auto Scaling group. The user has setup an alarm to monitor the CPU utilization every minute

It will fetch the data every minute, as detailed monitoring on EC2 is enabled by the default launch configuration of Auto Scaling

VPC: A user has created a public subnet with VPC and launched an EC2 instance within it

It will not allow the user to delete the subnet until the instances are terminated

ELB: A user is configuring the HTTPS protocol on a front end ELB and the SSL protocol for the back-end listener in ELB

It will not allow you to create this configuration (Will give error "Load Balancer protocol is an application layer protocol, but instance protocol is not. Both the Load Balancer protocol and the instance protocol should be at the same layer. Please fix.")

Kinesis: real-time processing of these coordinates from multiple consumers

Kinesis

Kinesis: perform ad-hoc business analytics queries on well-structured data. Data comes in constantly at a high velocity. Your business intelligence team can understand SQL. What AWS service(s) should you look to first

Kinesis Firehose + RedShift (Kinesis Firehose provides a managed service for aggregating streaming data and inserting it into RedShift. RedShift also supports ad-hoc queries over well-structured data using a SQL-compliant wire protocol, so the business team should be able to adopt this system easily. Refer link)

RDS: multi-tier web application on AWS; add a reporting tier to the application. The reporting tier will aggregate and publish status reports every 30 minutes from user-generated information that is being stored in your web application's database. You have a Multi-AZ RDS MySQL instance for the database tier and ElastiCache as a database caching layer between the application tier and database tier.

Launch a RDS Read Replica connected to your Multi-AZ master database and generate reports by querying the Read Replica.

EC2: host a web server as well as an app server on a single EC2 instance, which is a part of the public subnet of a VPC, setup to have two separate public IPs and separate security groups for both the application as well as the web server

Launch a VPC instance with two network interfaces. Assign a separate security group and elastic IP to them (AWS cannot assign public IPs for instance with multiple ENIs)

SG: A customer implemented AWS Storage Gateway with a gateway-cached volume at their main office. An event takes the link between the main and branch office offline. Which methods will enable the branch office to access their data

Launch a new AWS Storage Gateway instance AMI in Amazon EC2, and restore from a gateway snapshot, Create an Amazon EBS volume from a gateway snapshot, and mount it to an Amazon EC2 instance, Launch an AWS Storage Gateway virtual iSCSI device at the branch office, and restore from a gateway snapshot

AS: leverage Amazon VPC EC2 and SQS to implement an application that submits and receives millions of messages per second to a message queue. You want to ensure your application has sufficient bandwidth between your EC2 instances and SQS

Launch application instances in private subnets with an Auto Scaling group and Auto Scaling triggers configured to watch the SQS queue size

EBS: A user has deployed an application on an EBS backed EC2 instance. For a better performance of application, it requires dedicated EC2 to EBS traffic. How can the user achieve this?

Launch the EC2 instance as EBS optimized with PIOPS EBS

Tags: administrator mistakenly terminated several production EC2 instances

Leverage resource based tagging along with an IAM user, which can prevent specific users from terminating production EC2 resources. (Identify production resources using tags and add explicit deny)
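
A sketch of such an explicit deny (the tag key/value Environment=production are assumptions), shown here as a Python dict holding the IAM policy document:

    import json

    # Denies terminating any EC2 instance that carries the production tag,
    # regardless of what other policies allow.
    deny_prod_termination = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Deny",
                "Action": "ec2:TerminateInstances",
                "Resource": "*",
                "Condition": {
                    "StringEquals": {"ec2:ResourceTag/Environment": "production"}
                },
            }
        ],
    }

    print(json.dumps(deny_prod_termination, indent=2))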

EBS: lowest cost for Amazon Elastic Block Store snapshots while giving you the ability to fully restore data

Maintain a single snapshot; the latest snapshot is both incremental and complete

AS: one web application where they have an Elastic Load Balancer in front of web instances in an Auto Scaling Group. When you check the metrics for the ELB in CloudWatch you see four healthy instances in Availability Zone (AZ) A and zero in AZ B. There are zero unhealthy instances

Make sure Auto Scaling is configured to launch in both AZs

SWF: use cases where Simple Workflow Service (SWF) and Amazon EC2 are appropriate solutions

Managing a multi-step and multi-decision checkout process of an e-commerce website, Orchestrating the execution of distributed and auditable business processes

DDB: use cases for Amazon DynamoDB

Managing web sessions, Storing JSON documents, Storing metadata for Amazon S3 objects, massive amount of "hot" data and require very low latency, a rapid ingestion of clickstream in order to collect data about user behavior

CF: You need to run a very large batch data processing job one time per day. The source data exists entirely in S3, and the output of the processing job should also be written to S3 when finished. If you need to version control this processing job and all setup and teardown logic for the system, what approach should you use

Model an AWS EMR job in AWS CloudFormation. (EMR cluster can be modeled using CloudFormation)

AS: your application is scaling up and down multiple times in the same hour

Modify the Amazon CloudWatch alarm period that triggers your auto scaling scale down policy, Modify the Auto scaling group cool down timers

VPC: numerous port scans coming in from a specific IP address block

Modify the Network ACLs associated with all public subnets in the VPC to deny access from the IP address block

EBS: running a database on an EC2 instance, with the data stored on Elastic Block Store (EBS) for persistence. At times throughout the day, you are seeing large variance in the response times of the database queries. Looking into the instance with the iostat command you see a lot of wait time on the disk volume that the database's data is stored on.

Move the database to an EBS-Optimized Instance, Use Provisioned IOPs EBS

RDS: running a database on an EC2 instance, data stored on Elastic Block Store (EBS) for persistence, large variance in the response times of the database queries; with the iostat command on the instance you see a lot of wait time on the disk volume that the database's data is stored on

Move the database to an EBS-Optimized Instance, Use Provisioned IOPs EBS

CF: Both IAM groups are attached to IAM policies that grant rights to perform the necessary task of each group as well as the creation, update and deletion of CloudFormation stacks

Network stack updates will fail upon attempts to delete a subnet with EC2 instances (Subnets cannot be deleted with instances in them), Restricting the launch of EC2 instances into VPCs requires resource level permissions in the IAM policy of the application group (IAM permissions need to be given explicitly to launch instances )

EC2: move a Reserved Instance from one Region to another

No

IAM: Every user you create in the IAM system starts with

No permissions

EC2: George has shared an EC2 AMI created in the US East region from his AWS account with Stefano. George copies the same AMI to the US West region. Can Stefano access the copied AMI of George's account from the US West region

No, copying an AMI does not copy its launch permissions

EC: Which statement best describes ElastiCache

Offload the read traffic from your database in order to reduce latency caused by read-heavy workload

CF: a large burst in web traffic, quickly improve your infrastructure's ability to handle it

Offload traffic from the on-premises environment: set up a CloudFront distribution and configure CloudFront to cache objects from a custom origin. Choose to customize your object cache behavior, and select a TTL that objects should exist in cache.

VPC: can the user change the size of the VPC

Old Answer - It is not possible to change the size of the VPC once it has been created (NOTE - You can now increase the VPC size)

DDB: stores data coming from more than 10,000 sensors. Each sensor has a unique ID and will send a datapoint (approximately 1KB) every 10 minutes throughout the day. Each datapoint contains the information coming from the sensor as well as a timestamp. This company would like to query information coming from a particular sensor for the past week very rapidly and want to delete all the data that is older than 4 weeks. Using Amazon DynamoDB

One table for each week, with a hash key that is the sensor ID and a range key that is the timestamp (a composite key of sensor ID and timestamp would help for faster queries)

EC2: In order to optimize performance for a compute cluster that requires low inter-node latency, which feature in the following list should you use

Placement Groups

S3: best suited to allow access to the log bucket

Provide ACL for the logging group

DX: Create a public interface on your AWS Direct Connect link. Redistribute BGP routes into your existing routing infrastructure advertise specific routes for your network to AWS

Provision a VPN connection between a VPC and existing on -premises equipment, submit a DirectConnect partner request to provision cross connects between your data center and the DirectConnect location, then cut over from the VPN connection to one or more DirectConnect connections as needed.

EC2: software needs to be online continuously during the day every day of the week, and has a very static requirement for compute resources. You also have other, unrelated batch jobs that need to run once per day at any time of your choosing, minimize cost

Purchase a Heavy Utilization Reserved Instance to run the accounting software. Turn it off after hours. Run the batch jobs with the same instance class, so the Reserved Instance credits are also applied to the batch jobs. (Because the instance will always be online during the day, in a predictable manner, and there are sequences of batch jobs to perform at any time, we should run the batch jobs when the accounting software is off. We can achieve Heavy Utilization by alternating these times, so we should purchase the reservation as such, as this represents the lowest cost. There is no such thing as a "Full" utilization level purchase on EC2.)

DDB: a secondary index is a data structure that contains a subset of attributes from a table, along with an alternate key to support ____ operations

Query

EC2: How can software determine the public and private IP addresses of the Amazon EC2 instance that it is running on?

Query the local instance metadata. The base URI for all requests for instance metadata is http://169.254.169.254/latest/
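
A minimal sketch, runnable only from inside an EC2 instance (IMDSv1 shown for brevity):

    import urllib.request

    BASE = "http://169.254.169.254/latest/meta-data/"

    def metadata(path):
        # The metadata service is link-local; no credentials are needed.
        with urllib.request.urlopen(BASE + path, timeout=2) as resp:
            return resp.read().decode()

    private_ip = metadata("local-ipv4")
    public_ip = metadata("public-ipv4")  # only present if a public IP is assigned
    print(private_ip, public_ip)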

RDS: Which Amazon RDS feature will allow you to reliably restore your database to within 5 minutes

RDS automated backup

Redshift: With which AWS services can CloudHSM be used

RDS, Amazon Redshift

EC: suitable for storing session state data

RDS, DynamoDB, ElastiCache

CF: a failure state in AWS CloudFormation

ROLLBACK_IN_PROGRESS means an UpdateStack operation failed and the stack is in the process of trying to return to the valid, pre-update state

S3: the web application allows users to upload large files while resuming and pausing the upload as needed. Files are uploaded to your PHP front end backed by Elastic Load Balancing and an Auto Scaling fleet of Amazon Elastic Compute Cloud (EC2) instances that scale upon the average of bytes received (NetworkIn). After a file has been uploaded, it is copied to Amazon Simple Storage Service (S3). Amazon EC2 instances use an AWS Identity and Access Management (IAM) role that allows Amazon S3 uploads. Scale has increased significantly, forcing you to increase the Auto Scaling group's Max parameter; optimize

Re-architect your ingest pattern: have the app authenticate against your identity provider as a broker fetching temporary AWS credentials from AWS Security Token Service (GetFederationToken). Securely pass the credentials and S3 endpoint/prefix to your app. Implement client-side logic that uses the S3 multipart upload API to directly upload the file to Amazon S3 using the given credentials and S3 prefix. (Multipart allows one to start uploading directly to S3 before the actual size is known or the complete data is available.)

ELB: ensure that load-testing HTTP requests are evenly distributed across the four web servers

Re-configure the load-testing software to re-resolve DNS for each web request, Use a third-party load-testing service which offers globally distributed test clients

RDS: A user is planning to set up the Multi-AZ feature of RDS. Which of the below mentioned conditions won't take advantage of the Multi-AZ feature

Region outage

S3: You run an ad-supported photo sharing website using S3 to serve photos to visitors of your site. At some point you find out that other sites have been linking to the photos on your site, causing loss to your business.

Remove public read access and use signed URLs with expiry dates

EC2: What does the following command do with respect to the Amazon EC2 security groups? ec2-revoke RevokeSecurityGroupIngress

Removes one or more rules from a security group

RDS: It is advised that you watch the Amazon CloudWatch "_____" metric (available via the AWS Management Console or Amazon Cloud Watch APIs) carefully and recreate the Read Replica should it fall behind due to replication errors.

Replica Lag

VPC: a Linux bastion host for access to Amazon EC2 instances running in your VPC. Only clients connecting from the corporate external public IP address 72.34.51.100 should have SSH access to the host.

Requirements can be met by using Security Group Inbound Rule: Protocol - TCP. Port Range - 22, Source 72.34.51.100/32
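
A boto3 sketch of that rule (the security group ID is hypothetical):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Allow SSH only from the corporate public IP, expressed as a /32.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",  # hypothetical bastion security group
        IpProtocol="tcp",
        FromPort=22,
        ToPort=22,
        CidrIp="72.34.51.100/32",
    )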

SQS: When a Simple Queue Service message triggers a task that takes 5 minutes to complete, which process below will result in successful processing of the message and remove it from the queue while minimizing the chances of duplicate processing?

Retrieve the message with an increased visibility timeout, process the message, delete the message from the queue
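
A minimal boto3 sketch of that receive/process/delete cycle (queue URL and the process() helper are hypothetical); the visibility timeout is set longer than the 5-minute task so no other consumer sees the message while it is in flight:

    import boto3

    sqs = boto3.client("sqs", region_name="us-east-1")
    queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/tasks"  # hypothetical

    def process(body):
        pass  # placeholder for the 5-minute task

    # Hide the message for 10 minutes while it is being processed.
    resp = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=1,
        VisibilityTimeout=600,
    )

    for msg in resp.get("Messages", []):
        process(msg["Body"])
        sqs.delete_message(            # delete only after successful processing
            QueueUrl=queue_url,
            ReceiptHandle=msg["ReceiptHandle"],
        )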

IAM: The organization wants each user to be able to change their password but not their access keys.

Root account owner can set the policy from the IAM console under the password policy screen

VPC: EC2 instances are behind a public-facing ELB with Auto Scaling; the application runs 2 instances in the Auto Scaling group but at peak it can scale to 3x in size. It needs to communicate with the payment service over the Internet, which requires whitelisting of all public IP addresses (4 whitelisted IP addresses are allowed at a time and can be added through an API).

Route payment requests through two NAT instances setup for High Availability and whitelist the Elastic IP addresses attached to the NAT instances (NAT used not just for patching)

53: You need to create a simple, holistic check for your system's general availability and uptime. Your system presents itself as an HTTP-speaking API. What is the simplest tool on AWS to achieve this with

Route53 Health Checks

EBS: There is a very serious outage at AWS. EC2 is not affected, but your EC2 instance deployment scripts stopped working in the region with the outage. What might be the issue?

S3 is unavailable, so you can't create EBS volumes from a snapshot you use to deploy new volumes. (EBS volume snapshots are stored in S3. If S3 is unavailable, snapshots are unavailable)

S3: enabled server side encryption with S3

S3 manages encryption and decryption automatically

SNS: a transport to which SNS is unable to send a notification

SES

RDS: services will help the admin setup notifications

SNS

SQS: required to send the data to a NoSQL database. The user wants to decouple the data sending such that the application keeps processing and sending data but does not wait for an acknowledgement of DB

SQS

SQS: service can help design architecture to persist in-flight transactions

SQS

SQS: a workflow that sends video files from their on-premise system to AWS for transcoding. They use EC2 worker instances that pull transcoding jobs from SQS

SQS helps to facilitate horizontal scaling of encoding tasks

SQS: A user has created a photo editing software and hosted it on EC2. The software accepts requests from the user about the photo format and resolution and sends a message to S3 to enhance the picture accordingly.

SQS makes process scalable

SQS: Create a video transcoding website where multiple components need to communicate with each other, but can't all process the same amount of work simultaneously (SQS provides loose coupling)

SQS use case

SQS: Coordinate work across distributed web services to process employee's expense reports

SWF - Steps in order and might need manual steps

AS: highest on Thursday and Friday between 8 AM to 6 PM

Schedule Auto Scaling to scale up by 8 AM Thursday and scale down after 6 PM on Friday
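
A boto3 sketch of two scheduled actions (group name, sizes and UTC cron expressions are assumptions):

    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")

    # Scale out before the Thursday 8 AM peak (cron fields: min hour dom month dow).
    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName="web-asg",
        ScheduledActionName="thursday-scale-up",
        Recurrence="0 8 * * 4",
        MinSize=4, MaxSize=12, DesiredCapacity=8,
    )

    # Scale back in after the Friday 6 PM close.
    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName="web-asg",
        ScheduledActionName="friday-scale-down",
        Recurrence="0 18 * * 5",
        MinSize=2, MaxSize=4, DesiredCapacity=2,
    )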

RDS: in a public subnet

Security risk...Making RDS accessible to the public internet in a public subnet poses a security risk, by making your database directly addressable and spammable. DB instances deployed within a VPC can be configured to be accessible from the Internet or from EC2 instances outside the VPC. If a VPC security group specifies a port access such as TCP port 22, you would not be able to access the DB instance because the firewall for the DB instance provides access only via the IP addresses specified by the DB security groups the instance is a member of and the port defined when the DB instance was created.

Kinesis: consolidate their log streams (access logs, application logs, security logs etc.) in one single system. Once consolidated, the customer wants to analyze these logs in real time based on heuristics

Send all the log events to Amazon Kinesis develop a client process to apply heuristics on the logs (Can perform real time analysis and stores data for 24 hours which can be extended to 7 days)

CW: you also need to watch the watcher - the monitoring instance itself - and be notified if it becomes unhealthy.

Set a CloudWatch alarm based on EC2 system and instance status checks and have the alarm notify your operations team of any detected problem with the monitoring instance.

S3: restrict access to data

Set an S3 ACL on the bucket or the object, Set an S3 bucket policy

S3: protection against accidental loss of data stored in Amazon S3

Set bucket policies to restrict deletes, and also enable versioning

S3: serve static assets for your public-facing web application

Set permissions on the object to public read during upload, Configure the bucket policy to set all objects to public read

S3: user wants to make the objects public

Set the AWS bucket policy which marks all objects as public

ELB & AS: You are responsible for a web application that consists of an Elastic Load Balancing (ELB) load balancer in front of an Auto Scaling group of Amazon Elastic Compute Cloud (EC2) instances. For a recent deployment of a new version of the application, a new Amazon Machine Image (AMI) was created, and the Auto Scaling group was updated with a new launch configuration that refers to this new AMI. During the deployment, you received complaints from users that the website was responding with errors. All instances passed the ELB health checks.

Set the Elastic Load Balancing health check configuration to target a part of the application that fully tests application health and returns an error if the tests fail, Create a new launch configuration that refers to the new AMI, and associate it with the group. Double the size of the group, wait for the new instances to become healthy, and reduce back to the original size. If new instances do not become healthy, associate the previous launch configuration.

SQS: How do you configure SQS to support longer message retention

Set the MessageRetentionPeriod attribute using the SetQueueAttributes method
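
A one-call boto3 sketch (queue URL is hypothetical); the attribute is given in seconds, and 1209600 seconds is the 14-day maximum:

    import boto3

    sqs = boto3.client("sqs", region_name="us-east-1")
    queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"  # hypothetical

    sqs.set_queue_attributes(
        QueueUrl=queue_url,
        Attributes={"MessageRetentionPeriod": "1209600"},  # 14 days, in seconds
    )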

CW: You have been asked to make sure your AWS Infrastructure costs do not exceed the budget set per project for each month.

Set up CloudWatch billing alerts for all AWS resources used by each account, with email notifications when it hits 50%, 80% and 90% of its budgeted monthly spend

VPC: a VPC with CIDR 20.0.0.0/16, public and VPN-only subnets along with hardware VPN access to connect to the user's datacenter; make it so that all traffic coming to the public subnet follows the organization's proxy policy

Setting the route table and security group of the public subnet which receives traffic from a virtual private gateway

S3: a large amount of aerial image data uploaded to S3; you have used a dedicated group of servers to often process this data and used RabbitMQ, an open source messaging system, to get job information to the servers. Once processed, the data would go to tape and be shipped offsite; minimize cost

Setup Auto-Scaled workers triggered by queue depth that use spot instances to process messages in SQS. Once data is processed, change the storage class of the S3 objects to Glacier (Glacier suitable for Tape backup)

EC2: running a batch process on EBS-backed EC2 instances. The batch process starts a few instances to process Hadoop MapReduce jobs, which can run between 50-600 minutes or sometimes more. The user wants the instance to be terminated only when the process is completed. How can the user configure this with CloudWatch?

Setup the CloudWatch action to terminate the instance when the CPU utilization is less than 5%
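
A boto3 sketch of such an alarm (instance ID, thresholds and region are assumptions); the arn:aws:automate:...:ec2:terminate action terminates the instance when the alarm fires:

    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    # Terminate the batch instance once average CPU stays below 5% for 15 minutes,
    # i.e. the Hadoop job has finished.
    cloudwatch.put_metric_alarm(
        AlarmName="terminate-when-idle",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=3,
        Threshold=5.0,
        ComparisonOperator="LessThanThreshold",
        AlarmActions=["arn:aws:automate:us-east-1:ec2:terminate"],
    )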

SQS: an asynchronous processing application using an Auto Scaling Group and an SQS Queue. The Auto Scaling Group scales according to the depth of the job queue. The completion velocity of the jobs has gone down, the Auto Scaling Group size has maxed out, but the inbound job velocity did not increase.

Some of the new jobs coming in are malformed and unprocessable. (As other options would cause the job to stop processing completely, the only reasonable option seems that some of the recent messages must be malformed and unprocessable)

OpsWorks: OpsWorks, which of the following is not an instance type you can allocate in a stack layer

Spot instances (Does not support spot instance directly but can be used with auto scaling)

CF: Set of AWS resources that are created and managed as a single unit

Stack

EC: read only news reporting site with a combined web and application tier and a database tier that receives large and unpredictable traffic demands must be able to respond to these traffic fluctuations automatically.

Stateless instances for the web and application tier synchronized using ElastiCache Memcached in an autoscaling group monitored with CloudWatch and RDS with read replicas.

EC2: an internal audit and has been determined to require dedicated hardware for one instance (move this instance to single-tenant hardware)...

Stop the instance, create an AMI, launch a new instance with tenancy=dedicated, and terminate the old instance

DDB: a file-sharing service. This service will have millions of files in it. Revenue for the service will come from fees based on how much storage a user is using. You also want to store metadata on each file, such as title, description and whether the object is public or private.

Store all files in Amazon S3. Create Amazon DynamoDB tables for the corresponding key-value pairs on the associated metadata, when objects are uploaded.

EC2: one-time launch of an Elastic MapReduce cluster, minimize the costs, ingest 200TB of genomics data with a total of 100 Amazon EC2 instances and run for around four hours resulting data set must be stored temporarily until archived into an Amazon RDS Oracle instance

Store ingest and output files in Amazon S3. Deploy on-demand for the master and core nodes and spot for the task nodes.

EMR: the one-time launch of an Elastic MapReduce cluster, minimize the costs, designed to ingest 200TB of genomics data with a total of 100 Amazon EC2 instances and is expected to run for around four hours. The resulting data set must be stored temporarily until archived into an Amazon RDS Oracle instance

Store ingest and output files in Amazon S3. Deploy on-demand for the master and core nodes and spot for the task nodes.

S3: the one-time launch of an Elastic MapReduce cluster, minimize the costs; the cluster ingests 200TB of genomics data with a total of 100 Amazon EC2 instances and is expected to run for around four hours. The resulting data set must be stored temporarily until archived into an Amazon RDS Oracle instance

Store ingest and output files in Amazon S3. Deploy on-demand for the master and core nodes and spot for the task nodes.

CF: a video on-demand streaming platform

Store the video contents to Amazon S3 as an origin server. Configure the Amazon CloudFront distribution with a download option to stream the video contents

EBS: Which procedure for backing up a relational database on EC2 that is using a set of RAIDed EBS volumes for storage minimizes the time during which the database cannot be written to and results in a consistent backup?

1. Suspend disk I/O, 2. Start EBS snapshot of volumes, 3. Wait for snapshots to complete, 4. Resume disk I/O
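
A sketch of that sequence on Linux (mount point and volume IDs are hypothetical; fsfreeze needs root), assuming boto3:

    import subprocess
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    volume_ids = ["vol-0aaa1111bbb2222cc", "vol-0ddd3333eee4444ff"]  # RAID members

    # 1. Suspend disk I/O by freezing the filesystem on the RAID device.
    subprocess.run(["fsfreeze", "-f", "/data"], check=True)
    try:
        # 2. Start a snapshot of every volume in the array.
        snapshots = [ec2.create_snapshot(VolumeId=v, Description="raid backup")
                     for v in volume_ids]
        # 3. Wait for the snapshots to complete.
        ec2.get_waiter("snapshot_completed").wait(
            SnapshotIds=[s["SnapshotId"] for s in snapshots]
        )
    finally:
        # 4. Resume disk I/O.
        subprocess.run(["fsfreeze", "-u", "/data"], check=True)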

AS: a memory issue in the application which is causing CPU utilization to go above 90%. The higher CPU usage triggers an event for Auto Scaling as per the scaling policy. If the user wants to find the root cause inside the application without triggering a scaling activity, how can he achieve this

Suspend the scaling process until research is completed

53: Amazon Route 53 can perform health checks and failovers to a backup site in the event of the primary site failure

TRUE

CW: CloudTrail is a batch API call collection service, CloudWatch Events enables real-time monitoring of calls through the Rules object interface

TRUE

EBS: Access the EBS snapshot through Amazon EC2 APIs?

TRUE

EBS: The user will be charged for the volume even if the instance is stopped

TRUE

EBS: Use encrypted EBS volumes so that the snapshot will be encrypted by AWS

TRUE

EBS: restart does not charge for an extra hour, while every stop/start it will be charged as a separate hour

TRUE

EC2: In Amazon VPC, an instance retains its private IP addresses when the instance is stopped

TRUE

EC2: When you view the block device mapping for your instance, you can see only the EBS volumes, not the instance store volumes.

TRUE

OpsWorks: AWS Config tools does not directly support AWS OpsWorks, for monitoring your stacks

TRUE

OpsWorks: Stacks have many layers, layers have many instances

TRUE

S3: Both the object and bucket can have ACL but folders cannot have ACL

TRUE

SWF: SWF tasks are assigned once and never duplicated, SWF workflow executions can last up to a year, SWF uses deciders and workers to complete tasks

TRUE

EBS: a server with a 500GB Amazon EBS data volume. The volume is 80% full. You need to back up the volume at regular intervals and be able to re-create the volume in a new Availability Zone in the shortest time possible. All applications using the volume can be paused for a period of a few minutes with no discernible user impact. Which of the following backup methods will best fulfill your requirements?

Take periodic snapshots of the EBS volume

VPC: deployed a three-tier web application in a VPC with a CIDR block of 10.0.0.0/28; initially deploy two web servers, two application servers, two database servers and one NAT instance for a total of seven EC2 instances. Application and database servers are deployed across two Availability Zones (AZs). You also deploy an ELB in front of the two web servers, and use Route53 for DNS. Web traffic gradually increases in the first few days following the deployment, so you attempt to double the number of instances in each tier of the application to handle the new load; unfortunately some of these new instances fail to launch.

The ELB has scaled up, adding more instances to handle the traffic and reducing the number of available private IP addresses for new instance launches; AWS reserves the first four and the last IP address in each subnet's CIDR block, so you do not have enough addresses left to launch all of the new EC2 instances

S3: Your customer needs to create an application to allow contractors to upload videos to Amazon Simple Storage Service (S3) so they can be transcoded into a different format. She creates AWS Identity and Access Management (IAM) users for her application developers, and in just one week, they have the application hosted on a fleet of Amazon Elastic Compute Cloud (EC2) instances. The attached IAM role is assigned to the instances. As expected, a contractor who authenticates to the application is given a pre-signed URL that points to the location for video upload. However, contractors are reporting that they cannot upload their videos { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "s3:*", "Resource": "*" } ] }

The application is not using valid security credentials to generate the pre-signed URL, The pre-signed URL has expired

RDS: to an RDS (Relational Database Service) multi-Availability Zone deployment if the primary DB instance fails

The canonical name record (CNAME) is changed from primary to standby

EC2: A user has launched an EC2 instance from an instance store backed AMI. If the user restarts the instance, what will happen to the ephemeral storage data?

The data is preserved

ELB & AS: A user has configured ELB with Auto Scaling. The user suspended the Auto Scaling AddToLoadBalancer process (which adds instances to the load balancer) for a while. What will happen to the instances launched during the suspension period

The instances will not be registered with ELB and the user has to manually register when the process is resumed

VPC: A user has created a VPC with a subnet and a security group. The user has launched an instance in that subnet and attached a public IP. The user is still unable to connect to the instance. The internet gateway has also been created.

The internet gateway is not configured with the route table

RDS: three CloudWatch RDS metrics will allow you to identify if the database is the bottleneck

The number of outstanding IOs waiting to access the disk, The amount of write latency, The amount of time a Read Replica DB Instance lags behind the source DB Instance

RDS: promote one of them, what happens to the rest of the Read Replicas

The remaining Read Replicas will still replicate from the older master DB Instance

ELB: A user has enabled session stickiness with ELB. The user does not want ELB to manage the cookie; instead he wants the application to manage the cookie. What will happen when the server instance, which is bound to a cookie, crashes?

The session will not be sticky until a new cookie is inserted

EBS: Why are more frequent snapshots of EBS Volumes faster?

The snapshots are incremental so that only the blocks on the device that have changed after your last snapshot are saved in the new snapshot.

CF: The user wants the stack creation of ELB and AutoScaling to wait until the EC2 instance is launched and configured properly

The user can use the WaitCondition resource to hold the creation of the other dependent resources
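
A fragment (expressed as a Python dict, resource names hypothetical; the EC2 instance and the full ELB/Auto Scaling properties are omitted) showing the WaitConditionHandle/WaitCondition pair and a dependent resource:

    import json

    wait_resources = {
        "WebServerHandle": {"Type": "AWS::CloudFormation::WaitConditionHandle"},
        "WebServerReady": {
            "Type": "AWS::CloudFormation::WaitCondition",
            "DependsOn": "WebServer",           # the instance signals when configured
            "Properties": {
                "Handle": {"Ref": "WebServerHandle"},  # pre-signed URL the instance hits
                "Timeout": "1800",                     # seconds to wait for the signal
                "Count": 1,
            },
        },
        # Dependent resources wait on the condition, so they are only created
        # after the instance reports success.
        "ElasticLoadBalancer": {
            "Type": "AWS::ElasticLoadBalancing::LoadBalancer",
            "DependsOn": "WebServerReady",
        },
    }

    print(json.dumps(wait_resources, indent=2))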

EC2: an application, which will be hosted on EC2. The application makes calls to DynamoDB to fetch certain data

The user should attach an IAM role with DynamoDB access to the EC2 instance

IAM: application on EC2, makes calls to DynamoDB to fetch certain data, uses DynamoDB SDK to connect with from the EC2 instance

The user should attach an IAM role with DynamoDB access to the EC2 instance

EC2: user is running one instance for only 3 hours every day. The user wants to save some cost with the instance...

The user should not use RI; instead only go with the on-demand pricing (seems question before the introduction of the Scheduled Reserved instances in Jan 2016, which can be used in this case)

SG: You have a proprietary data store on-premises that must be backed up daily by dumping the data store contents to a single compressed 50GB file and sending the file to AWS. Your SLAs state that any dump file backed up within the past 7 days can be retrieved within 2 hours. Your compliance department has stated that all data must be held indefinitely. The time required to restore the data store from a backup is approximately 1 hour. Your on-premise network connection is capable of sustaining 1gbps to AWS. Which backup methods to AWS would be most cost-effective while still meeting all of your requirements

Transfer the daily backup files to S3 and use appropriate bucket lifecycle policies to send to Glacier (Store in S3 for seven days and then archive to Glacier)

S3: a proprietary data store on-premises that must be backed up daily by dumping the data store contents to a single compressed 50GB file and sending the file to AWS. Your SLAs state that any dump file backed up within the past 7 days can be retrieved within 2 hours, and all data must be held indefinitely. The time required to restore the data store from a backup is approximately 1 hour, and the on-premises network connection is capable of sustaining 1gbps to AWS; cost effective

Transfer the daily backup files to S3 and use appropriate bucket lifecycle policies to send to Glacier (Store in S3 for seven days and then archive to Glacier). Note: Storage Gateway with Gateway-Cached Volumes isn't cost effective.

S3: S3 data is automatically replicated between multiple facilities within an AWS Region

True

IAM: policy used for cross account access

Trust policy, Permissions Policy

VPC: design a VPC for a web application consisting of an ELB, a fleet of web/application servers, and an RDS DB. The entire infrastructure must be distributed over 2 AZs

Two public subnets for the ELB, two private subnets for the web servers, and two private subnets for RDS

VPC: a VPC for a web-application consisting of an Elastic Load Balancer (ELB), a fleet of web/application servers, and an RDS database The entire Infrastructure must be distributed over 2 availability zones.

Two public subnets for ELB, two private subnets for the web servers, and two private subnets for RDS

DX: migrating from a dynamically routed VPN, you provisioned a Direct Connect connection and would like to start using the new connection. After configuring Direct Connect settings in the AWS Console

Update your VPC route tables to point to the Direct Connect connection, configure your Direct Connect router with the appropriate settings, verify network traffic is leveraging Direct Connect, and then delete the VPN connection

S3: A media company produces new video files on-premises every day with a total size of around 100GB after compression. All files have a size of 1-2 GB and need to be uploaded to Amazon S3 every night in a fixed time window between 3am and 5am. The current upload takes almost 3 hours, although less than half of the available bandwidth is used

Upload the files in parallel to S3 using multipart upload
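
A boto3 sketch (bucket, key and part sizes are assumptions) using the managed transfer layer, which performs a parallel multipart upload under the hood:

    import boto3
    from boto3.s3.transfer import TransferConfig

    s3 = boto3.client("s3")

    # Split each 1-2 GB file into 64 MB parts and upload 10 parts in parallel;
    # several files can also be uploaded concurrently from a thread pool.
    config = TransferConfig(
        multipart_threshold=64 * 1024 * 1024,
        multipart_chunksize=64 * 1024 * 1024,
        max_concurrency=10,
    )

    s3.upload_file(
        "/videos/show-2024-05-01.mp4",  # hypothetical local file
        "nightly-video-drop",           # hypothetical bucket
        "2024/05/01/show.mp4",
        Config=config,
    )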

CF: To enable end-to-end HTTPS connections from the user's browser to the origin via CloudFront, which of the following options are valid

Use 3rd-party CA certificate in the origin and CloudFront default certificate in CloudFront, Use 3rd-party CA certificate in both origin and CloudFront (Origin cannot be self signed, CloudFront cert cannot be applied to origin)

EC2: You need the absolute highest possible network performance for a cluster computing application. You already selected homogeneous instance types supporting 10 gigabit enhanced networking, made sure that your workload was network bound, and put the instances in a placement group.

Use 9001 MTU instead of 1500 for Jumbo Frames, to raise packet body to packet overhead ratios. (For instances that are collocated inside a placement group, jumbo frames help to achieve the maximum network throughput possible, and they are recommended in this case)

CF: want to be able to deploy exact copies of different versions of your infrastructure, stage changes into different environments, revert back to previous versions, and identify what versions are running at any particular time

Use AWS CloudFormation and a version control system like GIT to deploy and manage your infrastructure.

Recovery Time Objective of 2 hours and a Recovery Point Objective of 24 hours. They should synchronize their data on a regular basis and be able to provision the web application rapidly using CloudFormation. The objective is to minimize changes to the existing web application, control the throughput of DynamoDB used for the synchronization of data and synchronize only the modified elements. Which design would you choose to meet these requirements

Use AWS data Pipeline to schedule a DynamoDB cross region copy once a day. Create a 'Lastupdated' attribute in your DynamoDB table that would represent the timestamp of the last update and use it as a filter.

S3: data is encrypted at rest

Use Amazon S3 server-side encryption with AWS Key Management Service managed keys, Encrypt the data on the client-side before ingesting to Amazon S3 using their own master key

SWF: data rich and complex, including assessments to ensure that the custom electronics and materials used to assemble the helmets are to the highest standards. Assessments are a mixture of human and automated assessments; you need to add a new set of assessments to model the failure modes of the custom electronics using GPUs with CUDA across a cluster of servers with low-latency networking

Use Amazon Simple Workflow (SWF) to manage assessments, movement of data & meta-data. Use an autoscaling group of G2 instances in a placement group. (Human and automated assessments with GPU and low latency networking)

CF: automate 3 layers of a large cloud deployment. You want to be able to track this deployment's evolution as it changes over time, and carefully control any alterations

Use CloudFormation Nested Stack Templates, with three child stacks to represent the three logical layers of your cloud. (CloudFormation allows source controlled, declarative templates as the basis for stack automation and Nested Stacks help achieve clean separation of layers while simultaneously providing a method to control all layers at once when needed)

EC: to support a 24-hour flash sale, which one of the following methods best describes a strategy to lower the latency while keeping up with unusually heavy traffic

Use ElastiCache as in-memory storage on top of DynamoDB to store user sessions (scalable, faster read/writes and in memory storage)

ECS: Your security team requires each Amazon ECS task to have an IAM policy that limits the task's privileges to only those required for its use of AWS services. How can you achieve this?

Use IAM roles for Amazon ECS tasks to associate a specific IAM role with each ECS task definition
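
A boto3 sketch of registering a task definition with its own task role (family, role ARN and image are hypothetical):

    import boto3

    ecs = boto3.client("ecs", region_name="us-east-1")

    # taskRoleArn scopes the task's AWS permissions; containers pick up the
    # temporary credentials from the task metadata endpoint automatically.
    ecs.register_task_definition(
        family="video-transcoder",
        taskRoleArn="arn:aws:iam::123456789012:role/TranscoderTaskRole",
        containerDefinitions=[
            {
                "name": "worker",
                "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/worker:latest",
                "memory": 512,
                "essential": True,
            }
        ],
    )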

VPC: patch updates, instances in a private subnet need to connect to the internet

Use NAT with an elastic IP

53: Your company is moving towards tracking web page users with a small tracking image loaded on each page. Currently you are serving this image out of us-east, but are starting to get concerned about the time it takes to load the image for users on the west coast. What are the two best ways to speed up serving this image?

Use Route 53's Latency Based Routing and serve the image out of us-west-2 as well as us-east-1, Serve the image out through CloudFront

CS: A newspaper organization has an on-premises application which allows the public to search its back catalogue and retrieve individual newspaper pages via a website written in Java. They have scanned the old newspapers into JPEGs (approx. 17TB) and used Optical Character Recognition (OCR) to populate a commercial search product. The hosting platform and software is now end of life and the organization wants to migrate its archive to AWS, produce a cost-efficient architecture, and still be designed for availability and durability.

Use S3 with standard redundancy to store and serve the scanned files, use CloudSearch for query processing, and use Elastic Beanstalk to host the website across multiple availability zones. (Cost effective S3 storage, CloudSearch for Search and Highly available and durable web application)

SQS: A user has created photo editing software and hosted it on EC2. The software accepts requests from the user about the photo format and resolution and sends a message to S3 to enhance the picture accordingly.

Use SQS to make the process scalable

AS: order fulfillment process for selling a personalized gadget that needs an average of 3-4 days to produce, with some orders taking up to 6 months. You expect 10 orders per day on your first day, 1,000 orders per day after 6 months and 10,000 orders after 12 months. Orders are checked for consistency, then dispatched to your manufacturing plant for production, quality control, packaging, shipment and payment processing. If the product does not meet the quality standards at any stage of the process, employees may force the process to repeat a step. Customers are notified via email about order status and any critical issues with their orders such as payment failure. Elastic Beanstalk for your website with an RDS MySQL

Use SWF with an Auto Scaling group of activity workers and a decider instance in another Auto Scaling group with min/max=1 use SES to send emails to customers.

SES: implement an order fulfillment process for selling a personalized gadget that needs an average of 3-4 days to produce, with some orders taking up to 6 months. You expect 10 orders per day on your first day, 1,000 orders per day after 6 months and 10,000 orders after 12 months. Orders coming in are checked for consistency, then dispatched to your manufacturing plant for production, quality control, packaging, shipment and payment processing. If the product does not meet the quality standards at any stage of the process, employees may force the process to repeat a step. Customers are notified via email about order status

Use SWF with an Auto Scaling group of activity workers and a decider instance in another Auto Scaling group with min/max=1 use SES to send emails to customers.

SG: running an application on-premises due to its dependency on non-x86 hardware and want to use AWS for data backup. Your backup application is only able to write to POSIX-compatible block-based storage. You have 140TB of data and would like to mount it as a single folder on your file server. Users must be able to access portions of this data while the backups are taking place.

Use Storage Gateway and configure it to use Gateway Stored volumes (data is hosted on the on-premises server as well; the 140TB requirement applies to the on-premises file server, mentioned to confuse, and is not stored in AWS. Only a backup solution is needed, hence stored rather than cached volumes)

ELB: App contains protected health information, must use encryption at rest and in transit, a three-tier architecture where data flows through the load balancer and is stored on Amazon EBS volumes for processing, and the results are stored in Amazon S3 using the AWS SDK,

Use TCP load balancing on the load balancer, SSL termination on the Amazon EC2 instances, OS-level disk encryption on the Amazon EBS volumes, and Amazon S3 with server-side encryption, Use SSL termination on the load balancer, an SSL listener on the Amazon EC2 instances, Amazon EBS encryption on EBS volumes containing PHI, and Amazon S3 with server-side encryption.

CF: You need to create a Route53 record automatically in CloudFormation when not running in production during all launches of a Template.

Use a <code>Parameter</code> for <code>environment</code>, and add a <code>Condition</code> on the Route53 <code>Resource</code> in the template to create the record only when <code>environment</code> is not <code>production</code>. (Best way to do this is with one template, and a Condition on the resource. Route53 does not allow null strings for records. Refer link)
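
A sketch of that pattern (record and zone names are hypothetical), expressed as a Python dict: the record set carries a Condition, so it is only created when the environment parameter is not production:

    import json

    template = {
        "Parameters": {
            "environment": {
                "Type": "String",
                "AllowedValues": ["production", "staging", "dev"],
            }
        },
        "Conditions": {
            "NotProduction": {
                "Fn::Not": [{"Fn::Equals": [{"Ref": "environment"}, "production"]}]
            }
        },
        "Resources": {
            "TestDnsRecord": {
                "Type": "AWS::Route53::RecordSet",
                "Condition": "NotProduction",  # skipped entirely in production
                "Properties": {
                    "HostedZoneName": "example.com.",
                    "Name": "test.example.com.",
                    "Type": "CNAME",
                    "TTL": "300",
                    "ResourceRecords": ["app.internal.example.com"],
                },
            }
        },
    }

    print(json.dumps(template, indent=2))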

run a 2000-engineer organization. You are about to begin using AWS at a large scale for the first time. You want to integrate with your existing identity management system running on Microsoft Active Directory, because your organization is a power-user of Active Directory

Use a large AWS Directory Service AD Connector. (AD Connector is the right fit for a power user of Microsoft Active Directory; Simple AD only supports a subset of AD functionality.)

EC2: how do you achieve 10 gigabit network throughput on EC2? You already selected cluster-compute, 10 gigabit instances with enhanced networking, and your workload is already network-bound, but you are not seeing 10 gigabit speeds.

Use a placement group for your instances so the instances are physically near each other in the same Availability Zone. (You are not guaranteed 10 gigabit performance, except within a placement group. Using placement groups enables applications to participate in a low-latency, 10 Gbps network)

ELB: the product grows even more popular and you need additional capacity. As a result, your company purchases two c3.2xlarge medium utilization RIs. You register the two c3.2xlarge instances with your ELB and quickly find that the m1.large instances are at 100% of capacity and the c3.2xlarge instances have significant capacity that's unused.

Use a separate ELB for each instance type and distribute load to ELBs with Route 53 weighted round robin

CF: Anywhere in the world, your users can see local news on topics they choose.

Use an Amazon CloudFront distribution for uploading the content to a central Amazon Simple Storage Service (S3) bucket and for content delivery.

IAM: Move FTP servers serving 250 customers to AWS so customers can upload and download large graphic files; the solution must be scalable, keep cost low, and maintain customer privacy.

Use an S3 client instead of an FTP client. Create a single S3 bucket. Create an IAM user for each customer. Put the IAM Users in a Group that has an IAM policy that permits access to subdirectories within the bucket via use of the 'username' Policy variable.
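
A minimal sketch of such a policy, assuming a hypothetical bucket named customer-graphics and a hypothetical IAM group named ftp-customers; the ${aws:username} policy variable resolves to each caller's IAM user name, confining every customer to their own prefix:

import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ListOwnPrefixOnly",
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::customer-graphics",
            "Condition": {"StringLike": {"s3:prefix": ["${aws:username}/*"]}},
        },
        {
            "Sid": "ReadWriteOwnObjects",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
            "Resource": "arn:aws:s3:::customer-graphics/${aws:username}/*",
        },
    ],
}

iam = boto3.client("iam")
iam.put_group_policy(
    GroupName="ftp-customers",          # hypothetical group holding the 250 customer users
    PolicyName="per-user-home-prefix",
    PolicyDocument=json.dumps(policy),
)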

SQS: You need to process long-running jobs once and only once

Use an SQS queue and set the visibility timeout to a value long enough for jobs to finish processing.
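
A hedged boto3 sketch of the idea, using a hypothetical long-jobs queue: the visibility timeout keeps an in-flight message hidden from other consumers, and the message is deleted only after successful processing.

import boto3

sqs = boto3.client("sqs")

# Size the visibility timeout (seconds) to the longest expected job.
queue_url = sqs.create_queue(
    QueueName="long-jobs",
    Attributes={"VisibilityTimeout": "3600"},
)["QueueUrl"]

# While a worker holds a message, other workers cannot receive it.
messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1).get("Messages", [])
for msg in messages:
    # ... process the job exactly once ...
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])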

EC2: write throughput to the database needs to be increased

Use an array of EBS volumes (Striping to increase throughput)

EBS: How can you secure data at rest on an EBS volume?

Use an encrypted file system on top of the EBS volume

SQS: You are getting a lot of empty receive requests when using Amazon SQS. This is making a lot of unnecessary network load on your instances. What can you do to reduce this load?

Use long polling with as long a wait time as possible, instead of short polling.
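
A small boto3 sketch, assuming a hypothetical queue named work-queue: long polling can be enabled on the queue itself or per receive call, so empty responses stop coming back immediately.

import boto3

sqs = boto3.client("sqs")
queue_url = sqs.get_queue_url(QueueName="work-queue")["QueueUrl"]

# Enable long polling on the queue (20 seconds is the maximum wait time) ...
sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={"ReceiveMessageWaitTimeSeconds": "20"},
)

# ... or per request: the call blocks up to 20 s until a message arrives,
# instead of returning an empty response right away as short polling does.
response = sqs.receive_message(QueueUrl=queue_url, WaitTimeSeconds=20, MaxNumberOfMessages=10)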

S3: Your department creates regular analytics reports from your company's log files. All log data is collected in Amazon S3 and processed by daily Amazon Elastic MapReduce (EMR) jobs that generate daily PDF reports and aggregated tables in CSV format for an Amazon Redshift data warehouse; optimize costs...

Use reduced redundancy storage (RRS) for all data in S3. Use a combination of Spot Instances and Reserved Instances for Amazon EMR jobs. Use Reserved Instances for Amazon Redshift. (The combination of Spot and Reserved Instances guarantees performance and helps reduce cost. RRS also reduces cost while still guaranteeing data integrity, which is different from data durability.)

EMR: creates regular analytics reports from your company's log files. All log data is collected in Amazon S3 and processed by daily Amazon Elastic MapReduce (EMR) jobs that generate daily PDF reports and aggregated tables in CSV format for an Amazon Redshift data warehouse. Your CFO requests that you optimize the cost structure for this system.

Use reduced redundancy storage (RRS) for all data in S3. Use a combination of Spot Instances and Reserved Instances for Amazon EMR jobs. Use Reserved Instances for Amazon Redshift. (The combination of Spot and Reserved Instances guarantees performance and helps reduce cost. RRS also reduces cost while still guaranteeing data integrity, which is different from data durability.)

EC2: regular analytics reports from your company's log files; log data is collected in Amazon S3 and processed by daily Amazon Elastic MapReduce (EMR) jobs that generate daily PDF reports and aggregated tables in CSV format for an Amazon Redshift data warehouse.

Use reduced redundancy storage (RRS) for all data in S3. Use a combination of Spot Instances and Reserved Instances for Amazon EMR jobs. Use Reserved Instances for Amazon Redshift. (The combination of Spot and Reserved Instances guarantees performance and helps reduce cost, as opposed to using just Spot for EMR. RRS also reduces cost while still guaranteeing data integrity, which is different from data durability.)

EMR: you process a large amount of data stored on Amazon S3 using Amazon Elastic MapReduce. You are using the cc2.8xlarge instance type, whose CPUs are mostly idle during processing.

Use smaller instances that have higher aggregate I/O performance

CW: EC2 instances and DynamoDB tables deployed to several AWS Regions. In order to monitor the performance of the application globally, you would like to see two graphs 1) Avg CPU Utilization across all EC2 instances and 2) Number of Throttled Requests for all DynamoDB tables.

Use the CloudWatch CLI tools to pull the respective metrics from each regional endpoint. Aggregate the data offline & store it for graphing in CloudWatch
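
One possible shape of that flow using the SDK rather than the CLI (the region list, Auto Scaling group dimension, and custom namespace below are assumptions): pull the regional datapoints, aggregate offline, and push a single global metric back for graphing.

import boto3
from datetime import datetime, timedelta

REGIONS = ["us-east-1", "eu-west-1", "ap-southeast-2"]       # regions the app runs in
end, start = datetime.utcnow(), datetime.utcnow() - timedelta(hours=1)

datapoints = []
for region in REGIONS:
    cw = boto3.client("cloudwatch", region_name=region)      # one client per regional endpoint
    stats = cw.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],  # hypothetical ASG
        StartTime=start,
        EndTime=end,
        Period=300,
        Statistics=["Average"],
    )
    datapoints.extend(stats["Datapoints"])

# Aggregate offline, then store the result as a custom metric for a single global graph.
if datapoints:
    global_avg = sum(d["Average"] for d in datapoints) / len(datapoints)
    boto3.client("cloudwatch", region_name="us-east-1").put_metric_data(
        Namespace="Global/App",
        MetricData=[{"MetricName": "GlobalAvgCPU", "Value": global_avg, "Unit": "Percent"}],
    )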

EBS: Migrate a legacy application. The VM's single 10GB VMDK is almost full. The virtual network interface still uses the 10Mbps driver, which leaves your 100Mbps WAN connection completely underutilized. The application is currently running on a highly customized Windows VM within a VMware environment, and you do not have the installation media. This is a mission-critical application with an RTO (Recovery Time Objective) of 8 hours and an RPO (Recovery Point Objective) of 1 hour. How could you best migrate this application to AWS while meeting your business continuity requirements?

Use the EC2 VM Import Connector for vCenter to import the VM into EC2

S3: A user wants to upload a complete folder to AWS S3 using the S3 Management console

Use the Enable Enhanced Uploader option from the S3 console while uploading objects. (Note: this option is no longer supported by AWS.)

IAM: A new policy is needed that will change the access of an IAM user.

Use IAM groups: add users to groups according to their roles and apply the policy to the group.

CF: you need to instantiate new tracking systems in any region without any manual intervention and have therefore adopted AWS CloudFormation.

Use the built-in functions of AWS CloudFormation to set the AvailabilityZones attribute of the ELB resource, and use the built-in Mappings and Fn::FindInMap functions of AWS CloudFormation to refer to the AMI ID set in the ImageId attribute of the AWS::AutoScaling::LaunchConfiguration resource.
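
A sketch of the Mappings/Fn::FindInMap pattern, written as a Python dict (the map name, keys, and AMI IDs are placeholders, not real images):

import json

template = {
    "Mappings": {
        "RegionAMI": {
            "us-east-1": {"HVM64": "ami-11111111"},
            "eu-west-1": {"HVM64": "ami-22222222"},
        }
    },
    "Resources": {
        "LaunchConfig": {
            "Type": "AWS::AutoScaling::LaunchConfiguration",
            "Properties": {
                # Fn::FindInMap resolves the right AMI for whichever region the stack runs in.
                "ImageId": {"Fn::FindInMap": ["RegionAMI", {"Ref": "AWS::Region"}, "HVM64"]},
                "InstanceType": "t2.micro",
            },
        }
    },
}
print(json.dumps(template, indent=2))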

SQS: Files submitted by your premium customers must be transformed with the highest priority

Use two SQS queues, one for high priority messages, and the other for default priority. Transformation instances first poll the high priority queue; if there is no message, they poll the default priority queue
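
A minimal polling loop illustrating the two-queue priority scheme (the queue names are hypothetical): the worker drains the premium queue first and only falls back to the default queue when it is empty.

import boto3

sqs = boto3.client("sqs")
HIGH = sqs.get_queue_url(QueueName="transform-high")["QueueUrl"]
DEFAULT = sqs.get_queue_url(QueueName="transform-default")["QueueUrl"]

def next_job():
    """Return a message from the high priority queue if any, otherwise from the default queue."""
    for url in (HIGH, DEFAULT):
        msgs = sqs.receive_message(QueueUrl=url, MaxNumberOfMessages=1, WaitTimeSeconds=1)
        if msgs.get("Messages"):
            return url, msgs["Messages"][0]
    return None, None

queue_url, message = next_job()
if message:
    # ... transform the file, then remove the message ...
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])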

DDB: Which of the following is an example of a good DynamoDB hash key schema for provisioned throughput efficiency

User ID, where the application has many different users.

EC2: You need to pass a custom script to new Amazon Linux instances created in your Auto Scaling group. Which feature allows you to accomplish this

User data

Tags: find the separate cost for the production and development instances

User should use Cost Allocation Tags and AWS billing reports

IAM: two permission types used by AWS

User-based and Resource-based

AD: they want to make their internal Microsoft Active Directory available to any applications running on AWS

Using a VPC, they could create an extension to their data center and make use of resilient hardware IPsec tunnels; they could then have two domain controller instances that are joined to their existing domain and reside within different subnets in different Availability Zones. (Highly available across 2 AZs, secure with the VPN connection, and minimal changes.)

Kinesis: sensors send 30KB of biometric data in JSON format every 2 seconds to a collection platform that will process and analyze the data, providing health trending information back to the pet owners and veterinarians via a web portal. Management has tasked you to architect the collection platform, ensuring the following requirements are met.

Utilize Amazon Kinesis to collect the inbound sensor data, analyze the data with Kinesis clients and save the results to a Redshift cluster using EMR

S3: business model to support both free tier and premium tier users. The premium tier users will be allowed to store up to 200GB of data and free tier customers will be allowed to store only 5GB. The customer expects that billions of files will be stored. All users need to be alerted when approaching 75 percent quota utilization and again at 90 percent quota use.

Utilize an Amazon Simple Workflow Service activity worker that updates the user's data counter in Amazon DynamoDB. The activity worker will use Simple Email Service to send an email if the counter increases above the appropriate thresholds. (LIST operations on S3 are not feasible with billions of objects, and RDS would not scale as well for so many objects.)

VPC: public and private subnets using the VPC wizard

The VPC wizard associates the main route table with the private subnet and a custom route table with the public subnet.

EC2: trying to connect via SSH to a newly created Amazon EC2 instance, you get one of the following error messages: "Network error: Connection timed out" or "Error connecting to [instance], reason: -> Connection timed out: connect". You have confirmed that the network and security group rules are configured correctly and the instance is passing status checks.

Verify that the private key file corresponds to the Amazon EC2 key pair assigned at launch, and verify that you are connecting with the appropriate user name for your AMI.

VPC: When attached to an Amazon VPC which two components provide connectivity with external networks

Virtual Private Gateway (VGW) and Internet Gateway (IGW)

AD: needs to deploy virtual desktops to its customers in a virtual private cloud, leveraging existing security controls. Which set of AWS services and features will meet the company's requirements

Virtual Private Network connection. AWS Directory Services, and Amazon Workspaces (WorkSpaces for Virtual desktops, and AWS Directory Services to authenticate to an existing on-premises AD through VPN)

CF: circular dependency in AWS CloudFormation

When resources form a DependsOn loop. (Refer link; to resolve a dependency error, add a DependsOn attribute to resources that depend on other resources in the template. In some cases, e.g. an EIP in a VPC with an IGW, the dependency of the EIP on the gateway attachment must be declared explicitly so the resources are created in the correct order.)

ELB: header received at the EC2 instance identifies the port used by the client while requesting ELB

X-Forwarded-Port

DX: Does AWS Direct Connect allow you access to all Availabilities Zones within a Region?

Yes

RDS: Can I encrypt connections between my application and my DB Instance using SSL

Yes

VPC: in a VPC, can you configure the security groups for these instances to only allow the ICMP ping to pass from the monitoring instance to the application instance and nothing else...

Yes. The security group for the monitoring instance needs to allow outbound ICMP and the application instance's security group needs to allow inbound ICMP. (Security groups are stateful, so just allow outbound ICMP from the monitoring instance and inbound ICMP on the monitored instance.)

SQS: the consumer of a queue is down for 3 days and then becomes available.

Yes, since SQS by default stores messages for 4 days.

EBS: A user is trying to create a PIOPS EBS volume with 8 GB size and 200 IOPS. Will AWS create the volume?

Yes, since the ratio of IOPS to volume size (200/8 = 25) is less than the 30:1 limit.

EC2: A user has launched an EC2 instance from an instance store backed AMI. The user has attached an additional instance store volume to the instance. The user wants to create an AMI from the running instance. Will the AMI have the additional instance store volume data?

Yes, the block device mapping will have information about the additional instance store volume

EC2: some machines are failing to successfully download some of their software updates, but not all of them, within the maintenance window. The download URLs used for these updates are correctly listed in the proxy's whitelist configuration, and you are able to access them manually using a web browser on the instances.

Either you are running the proxy on an undersized EC2 instance type, so network throughput is not sufficient for all instances to download their updates in time; or you are running the proxy on a sufficiently sized EC2 instance in a private subnet and its network throughput is being throttled by a NAT running on an undersized EC2 instance.

VPC: In regards to VPC

You can associate multiple subnets with the same Route Table

Lambda: Your serverless architecture using AWS API Gateway, AWS Lambda, and AWS DynamoDB experienced a large increase in traffic to a sustained 400 requests per second, and failure rates dramatically increased. Your requests, during normal operation, last 500 milliseconds on average. Your DynamoDB table did not exceed 50% of provisioned throughput, and table primary keys are designed correctly. What is the most likely issue?

You did not request a limit increase on concurrent Lambda function executions. (AWS API Gateway by default throttles at 500 requests per second steady-state, and 1000 requests per second at spike. Lambda, by default, throttles at 100 concurrent requests for safety. At 500 milliseconds (half of a second) per request, you can expect to support 200 requests per second at 100 concurrency. This is less than the 400 requests per second your system now requires. Make a limit increase request via the AWS Support Console.)

S3: an aggressive marketing plan is expected to double the current installation base every six months. Due to the nature of their business, they are expecting sudden and large increases in traffic to and from S3.

You must find out the total number of requests per second at peak usage. (The number of customers doesn't matter; object size does not relate to the key namespace design, but the request count does. S3 provides unlimited storage, so the key namespace design depends on the request rate.)

EC: database CPU is often above 80% usage and 90% of I/O operations on the database are reads. To improve performance you recently added a single-node Memcached ElastiCache cluster to cache frequent DB query results. In the next weeks the overall workload is expected to grow by 30%; high availability is required.

You should deploy two Memcached ElastiCache Clusters in different AZs because the RDS Instance will not be able to handle the load if the cache node fails.

DDB: ProvisionedThroughputExceededException

You're exceeding your capacity on a particular Hash Key (Hash key determines the partition and hence the performance)

IAM: a group is

a collection of users

DDB: Item attributes consist of individual name, office identifier, and cumulative daily hours. Managers run reports for ranges of names working in their office. One query is: "Return all Items in this office for names starting with A through E".

a range index on the name attribute, and a hash index on the office identifier
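
For illustration, a boto3 query against a hypothetical Timesheets table with office_id as the hash (partition) key and employee_name as the range (sort) key; the between condition covers names starting with A through E.

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("Timesheets")   # hypothetical table name

response = table.query(
    KeyConditionExpression=Key("office_id").eq("office-42")
    & Key("employee_name").between("A", "F"),            # A through E, exclusive of names past "F"
)
for item in response["Items"]:
    print(item["employee_name"], item.get("daily_hours"))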

IAM: Groups can't

be nested at all

KMS: regulatory requirements that all data needs to be encrypted before being uploaded to the cloud.

Manage encryption keys in Amazon Key Management Service (KMS), upload to Amazon Simple Storage Service (S3) with client-side encryption using a KMS customer master key ID, and configure Amazon S3 lifecycle policies to store each object using the Amazon Glacier storage tier. (With CSE-KMS the encryption happens on the client side before the object is uploaded to S3, and KMS is cost effective as well.)
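
A rough envelope-encryption and lifecycle sketch in boto3 (the key alias, bucket name, and 0-day Glacier transition are assumptions; SDKs that ship a dedicated S3 encryption client wrap the data-key handling for you):

import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

# Generate a data key under the customer master key and encrypt locally,
# so plaintext never leaves the client.
resp = kms.generate_data_key(KeyId="alias/archive-cmk", KeySpec="AES_256")
plaintext_key, encrypted_key = resp["Plaintext"], resp["CiphertextBlob"]
# ... encrypt the file with plaintext_key (e.g. AES-GCM), upload the ciphertext
#     plus encrypted_key as object metadata, then discard plaintext_key ...

# Lifecycle rule transitioning archived objects to the Glacier storage class.
s3.put_bucket_lifecycle_configuration(
    Bucket="employee-archive",                 # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "to-glacier",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "Transitions": [{"Days": 0, "StorageClass": "GLACIER"}],
        }]
    },
)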

ELB: even distribution of traffic to Amazon EC2 instances in multiple Availability Zones registered with a load balancer

cross-zone load balancing

SWF: the coordination logic in a workflow is contained in a software program called

decider

RDS: My Read Replica appears "stuck" after a Multi-AZ failover and is unable to obtain or apply updates from the source DB Instance

delete the Read Replica and create a new one to replace it

CF: A test run usually takes between 15 and 30 minutes. Once the tests are done, the AWS CloudFormation stacks are torn down immediately. The test results written to the Amazon RDS database must remain accessible for visualization and analysis.

Use a deletion policy of type Retain or Snapshot on the Amazon RDS resource so the test results remain accessible after the stack is torn down.
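
A template fragment showing the DeletionPolicy on the RDS resource (logical ID and DB properties are placeholders): "Snapshot" keeps a final DB snapshot when the test stack is deleted, while "Retain" keeps the instance itself.

import json

template = {
    "Resources": {
        "TestResultsDB": {
            "Type": "AWS::RDS::DBInstance",
            "DeletionPolicy": "Snapshot",     # or "Retain" to keep the running instance
            "Properties": {
                "Engine": "mysql",
                "DBInstanceClass": "db.t2.micro",
                "AllocatedStorage": "20",
                "MasterUsername": "admin",
                "MasterUserPassword": "change-me",   # placeholder only
            },
        }
    }
}
print(json.dumps(template, indent=2))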

SQS: what is the queue URL for a queue named "queue2" in the US-East region with AWS SQS?

http://sqs.us-east-1.amazonaws.com/123456789012/queue2

EBS: Which EBS volume type is best for high performance NoSQL cluster deployments?

io1 (io1 volumes, or Provisioned IOPS (PIOPS) SSDs, are best for: Critical business applications that require sustained IOPS performance, or more than 10,000 IOPS or 160 MiB/s of throughput per volume, like large database workloads, such as MongoDB.)

EC2: network throughput bottleneck on your m1.small EC2 instance when uploading data into Amazon S3 in the same region.

Use a larger instance.

IAM: EC2 instances best practices

Keep the OS patched to the latest level, disable password-based login (use key pairs), and revoke an individual user's access rights when they no longer need to connect to EC2.

CF: what is required as a part of the template?

Resources

EBS: Your data management software system requires mountable disks and a real filesystem, so you cannot use S3 for storage. You need persistence, so you will be using AWS EBS Volumes for your system. The system needs as low-cost storage as possible, and access is not frequent or high throughput, and is mostly sequential reads.

standard (Standard or Magnetic volumes are suited for cold workloads where data is infrequently accessed, or scenarios where the lowest storage cost is important)

S3: user has not enabled versioning on an S3 bucket

version ID will be null

S3: When uploading an object, what request header can be explicitly specified in a request to Amazon S3 to encrypt object data when saved on the server side

x-amz-server-side-encryption
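
In boto3, passing ServerSideEncryption sets that header on the PUT request; "AES256" asks for SSE with S3-managed keys (use "aws:kms" for SSE-KMS). The bucket and key below are hypothetical.

import boto3

s3 = boto3.client("s3")
s3.put_object(
    Bucket="phi-results",                      # hypothetical bucket
    Key="reports/report.pdf",
    Body=b"...report bytes...",
    ServerSideEncryption="AES256",             # emits x-amz-server-side-encryption: AES256
)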

Kinesis: votes must be collected into a durable, scalable, and highly available data store for real-time public tabulation

• Amazon DynamoDB

S3: You are designing a personal document-archiving solution for your global enterprise with thousands of employees. Each employee has potentially gigabytes of data to be backed up in this archiving solution. The solution will be exposed to the employees as an application, where they can just drag and drop their files to the archiving system. Employees can retrieve their archives through a web interface. The corporate network has high-bandwidth AWS Direct Connect connectivity to AWS. You have regulatory requirements that all data needs to be encrypted before being uploaded to the cloud. How do you implement this in a highly available and cost-efficient way?

• Manage encryption keys in Amazon Key Management Service (KMS), upload to Amazon Simple Storage Service (S3) with client-side encryption using a KMS customer master key ID, and configure Amazon S3 lifecycle policies to store each object using the Amazon Glacier storage tier. (With CSE-KMS the encryption happens on the client side before the object is uploaded to S3, and KMS is cost effective as well.)

Kinesis: You require the ability to analyze a customer's clickstream data on a website so they can do behavioral analysis. Your customer needs to know what sequence of pages and ads their customer clicked on. This data will be used in real time to modify the page layouts as customers click through the site to increase stickiness and advertising click-through

• Push web clicks by session to Amazon Kinesis and analyze behavior using Kinesis workers

EBS vs Instance Store: provides the fastest storage medium

• SSD Instance (ephemeral) store (SSD Instance Storage provides 100,000 IOPS on some instance types, much faster than any network-attached storage)

EC: application currently uses multicast to share session state between web servers

• Store session state in Amazon ElastiCache for Redis (scalable and makes the web applications stateless)

CF: You are designing a service that aggregates clickstream data in batch and delivers reports to subscribers via email only once per week. Data is extremely spikey, geographically distributed, high-scale, and unpredictable

• Use a CloudFront distribution with access log delivery to S3. Clicks should be recorded as query string GETs to the distribution. Reports are built and sent by periodically running EMR jobs over the access logs in S3. (CloudFront is a gigabit-scale HTTP(S) global request distribution service and works fine with peaks higher than 10 Gbps or 15,000 RPS. It can handle scale, geo-spread, spikes, and unpredictability. Access logs will contain the GET data and work just fine for batch analysis and email using EMR. The streaming options are more expensive and not required, since the need is batch analysis.)

CF: to deploy exact copies of different versions of your infrastructure, stage changes into different environments, revert back to previous versions, and identify what versions are running at any particular time

• Use AWS CloudFormation and a version control system like Git to deploy and manage your infrastructure.

