SA Pro - Test 3

A user has created a VPC with the public and private subnets using the VPC wizard. The VPC has CIDR 20.0.0.0/16. The public subnet uses CIDR 20.0.1.0/24. The user is planning to host a web server in the public subnet with port 80 and a Database server in the private subnet with port 3306. The user is configuring a security group for the public subnet (WebSecGrp) and the private subnet (DBSecGrp). Which of the below-mentioned entries is required in the private subnet database security group DBSecGrp? A. Allow Inbound on port 3306 for the source Web Server Security Group WebSecGrp. B. Allow Inbound on port 3306 from source 20.0.0.0/16. C. Allow Outbound on port 3306 for destination Web Server Security Group WebSecGrp. D. Allow Outbound on port 80 for destination NAT instance IP.

A. Allow Inbound on port 3306 for the source Web Server Security Group WebSecGrp. Answer - A The important point in this question is to allow incoming traffic to the database on port 3306 only from the web servers in the public subnet. Option A is CORRECT because (a) it allows inbound traffic only on the required port 3306, and (b) it allows that traffic only from the instances in the web server security group (WebSecGrp). Option B is incorrect because it allows inbound traffic from all instances in the VPC, which is broader than the requirement. Option C is incorrect because an outbound rule does not permit incoming traffic from the public subnet. Also, since security groups are stateful, you only need to define the inbound rule from WebSecGrp; the response traffic is allowed automatically. Option D is incorrect because port 80 does not need to be opened on the database security group. More information on Web Server and DB Server Security Group settings: Since the web server needs to talk to the database server on port 3306, the database server's security group must allow incoming traffic on port 3306 from WebSecGrp; the linked AWS documentation (Scenario 2) shows how the security groups should be set up. For more information on security groups please visit the below link http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario2.html
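As an aside for readers implementing this, a minimal boto3 sketch of adding such a rule to DBSecGrp is shown below. The security group IDs are placeholders, and this is only one illustrative way to create the rule, not part of the original question.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Allow MySQL (3306) into DBSecGrp only from members of WebSecGrp.
# Both group IDs below are placeholders.
ec2.authorize_security_group_ingress(
    GroupId="sg-0aaaa1111bbbb2222",              # DBSecGrp
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [
            {"GroupId": "sg-0cccc3333dddd4444"}  # WebSecGrp as the source
        ],
    }],
)
```

Because security groups are stateful, no matching outbound rule is needed on DBSecGrp for the responses.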

A company has a requirement to analyze the clickstreams from a web application in real time. Which of the below AWS services will fulfill this requirement? A. Amazon Kinesis B. Amazon SQS C. Amazon Redshift D. AWS IoT

A. Amazon Kinesis Answer - A Kinesis Data Streams is extremely useful for rapid and continuous data intake and aggregation. The type of data used includes IT infrastructure log data, application logs, social media, market data feeds, and web clickstream data. Because the response time for the data intake and processing is in real time, the processing is typically lightweight. Option A is CORRECT because Amazon Kinesis Data Streams is very useful for processing website clickstreams in real time and then analyzing them using multiple Kinesis Data Streams applications running in parallel. Option B is incorrect because SQS is used for storing messages/work items for asynchronous processing in the application, not for real-time processing of clickstream data. Option C is incorrect because Redshift is a data warehouse solution used for Online Analytical Processing, where complex analytic queries are run against petabytes of structured data. It is not used for real-time processing of clickstream data. Option D is incorrect because AWS IoT is a platform that enables you to connect devices to AWS services and other devices, secure data and interactions, and process and act upon device data. It does not do real-time processing of clickstream data itself, although it can leverage Amazon Kinesis Analytics to do it. For more information on Kinesis, please visit the below link http://docs.aws.amazon.com/streams/latest/dev/introduction.html
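For illustration only, a minimal boto3 sketch of a web tier pushing one clickstream event into a Kinesis data stream; the stream name and event fields are hypothetical.

```python
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

# A single clickstream event as the web tier might capture it.
event = {"user_id": "u-123", "page": "/products/42", "action": "click"}

# Partitioning by user spreads load across shards while keeping
# one user's events ordered within a shard.
kinesis.put_record(
    StreamName="clickstream",                    # placeholder stream name
    Data=json.dumps(event).encode("utf-8"),
    PartitionKey=event["user_id"],
)
```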

A customer is deploying an SSL-enabled web application on AWS and would like to implement a separation of roles between the EC2 service administrators, who are entitled to log in to instances as well as make API calls, and the security officers, who will maintain and have exclusive access to the application's X.509 certificate that contains the private key. Which configuration option would satisfy the above requirements? A. Configure IAM policies authorizing access to the certificate store only to the security officers and terminate SSL on the ELB. B. Configure system permissions on the web servers to restrict access to the certificate only to the authorized security officers. C. Upload the certificate to an S3 bucket owned by the security officers and accessible only by the EC2 role of the web servers. D. Configure the web servers to retrieve the certificate upon boot from a CloudHSM that is managed by the security officers.

A. Configure IAM policies authorizing access to the certificate store only to the security officers and terminate SSL on the ELB. Answer - A Option A is CORRECT because (a) only the security officers have access to the certificate store, and (b) the certificate is not stored on the EC2 instances, hence avoiding giving access to it to the EC2 service administrators. Option B is incorrect because it would still involve storing the certificate on the EC2 instances, plus unnecessary configuration overhead to restrict access to the security officers. Options C and D are both incorrect because giving EC2 instances access to the certificate should be avoided. It is better to let the ELB manage the SSL certificate, instead of the EC2 web servers. For more information please refer to the links given below: http://docs.aws.amazon.com/IAM/latest/APIReference/API_UploadServerCertificate.html https://aws.amazon.com/blogs/aws/elastic-load-balancer-support-for-ssl-termination/

An AWS customer is deploying a web application that is composed of a front end running on Amazon EC2 and confidential data that is stored on Amazon S3. The customer's security policy requires that all access operations to this sensitive data must be authenticated and authorized by a centralized access management system that is operated by a separate security team. In addition, the web application team that owns and administers the EC2 web front-end instances is prohibited from having any ability to access the data in a way that circumvents this centralized access management system. Which of the following configurations will support these requirements? A. Configure the web application to authenticate end users against the centralized access management system. Have the web application provision trusted users STS tokens entitling the download of approved data directly from Amazon S3. B. Encrypt the data on Amazon S3 using a CloudHSM that is operated by the separate security team. Configure the web application to integrate with the CloudHSM for decrypting approved data access operations for trusted end users. C. Configure the web application to authenticate end users against the centralized access management system using SAML. Have the end users authenticate to IAM using their SAML token and download the approved data directly from Amazon S3. D. Have the separate security team create an IAM Role that is entitled to access the data on Amazon S3. Have the web application team provision their instances with this Role while denying their IAM users access to the data on Amazon S3.

A. Configure the web application to authenticate end users against the centralized access management system. Have the web application provision trusted users STS tokens entitling the download of approved data directly from Amazon S3. Answer - A Option A is CORRECT because access to the sensitive data on Amazon S3 is only given to users authenticated and authorized by the centralized access management system, via temporary STS credentials. Option B is incorrect because S3 doesn't integrate directly with CloudHSM, and there is no control by the centralized access management system. Option C is incorrect because this is an incorrect workflow for the use of SAML, and it is not stated that the centralized access management system is SAML compliant. Option D is incorrect because with this configuration the web team would have access to the sensitive data on S3 through the instance role. For more information on STS, please refer to the URL: http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html
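As a rough sketch of the pattern in option A (not the only possible implementation), the web application could mint temporary, scoped-down credentials with STS after the centralized system has authorized the user; the bucket, prefix, and user name below are placeholders, and an assume-role flow with a session policy would work similarly.

```python
import json
import boto3

sts = boto3.client("sts")

# Policy limiting the temporary credentials to the data approved for this user.
scoped_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::confidential-data/approved/user-123/*",
    }],
}

# Called only after the centralized access management system has
# authenticated and authorized the end user.
creds = sts.get_federation_token(
    Name="user-123",
    Policy=json.dumps(scoped_policy),
    DurationSeconds=900,
)["Credentials"]
```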

A public archives organization is about to move a pilot application they are running on AWS into production. You have been hired to analyze their application architecture and give cost-saving recommendations. The application displays scanned historical documents. Each document is split into individual image tiles at multiple zoom levels to improve responsiveness and ease of use for the end users. At maximum zoom level the average document will be 8000 x 6000 pixels in size, split into multiple 40px x 40px image tiles. The tiles are batch processed by Amazon Elastic Compute Cloud (EC2) instances and put into an Amazon Simple Storage Service (S3) bucket. A browser-based JavaScript viewer fetches tiles from the Amazon S3 bucket and displays them to users as they zoom and pan around each document. The average storage size of all zoom levels for a document is approximately 30MB of JPEG tiles. Originals of each document are archived in Amazon Glacier. The company expects to process and host over 500,000 scanned documents in the first year. What are your recommendations? Choose 3 options from the below: A. Deploy an Amazon CloudFront distribution in front of the Amazon S3 tiles bucket. B. Increase the size (width/height) of the individual tiles at the maximum zoom level. C. Use Amazon S3 Reduced Redundancy Storage for each zoom level. D. Decrease the size (width/height) of the individual tiles at the maximum zoom level. E. Store the maximum zoom level in the low-cost Amazon S3 Glacier option and only retrieve the most frequently accessed tiles as they are requested by users.

A. Deploy an Amazon CloudFront distribution in front of the Amazon S3 tiles bucket. B. Increase the size (width/height) of the individual tiles at the maximum zoom level. C. Use Amazon S3 Reduced Redundancy Storage for each zoom level. Answer - A, B, and C Option A is CORRECT because CloudFront caches the tiles at its edge locations, which reduces the load (and the number of requests) on the origin. Option B is CORRECT because increasing the tile size reduces the number of objects and therefore the number of GET/PUT requests against the origin, lowering the cost. Option C is CORRECT because RRS is a lower-cost storage option and helps keep the overall cost down; the tiles can be regenerated from the originals archived in Glacier if they are ever lost. Option D is incorrect because decreasing the tile size would produce more objects and more requests, increasing the overall cost. Option E is incorrect because Glacier is an archival solution and is not suitable for serving frequently accessed tiles.

A user has created a VPC with CIDR 20.0.0.0/16 using the wizard. The user has created a public subnet with CIDR 20.0.0.0/24 and a VPN-only subnet with CIDR 20.0.1.0/24, along with the VPN gateway (vgw-12345) to connect to the user's data center. The user's data center has CIDR 172.28.0.0/12. The user has also set up a NAT instance (i-12345) to allow traffic to the internet from the VPN subnet. Which of the below-mentioned options is not a valid entry for the main route table in this scenario? A. Destination: 20.0.1.0/24 and Target: i-12345 B. Destination: 0.0.0.0/0 and Target: i-12345 C. Destination: 172.28.0.0/12 and Target: vgw-12345 D. Destination: 20.0.0.0/16 and Target: local

A. Destination: 20.0.1.0/24 and Target: i-12345 Answer - A Option A is CORRECT because 20.0.1.0/24 lies inside the VPC and is already covered by the default local route; routing it to the NAT instance is not a valid entry. Option B is incorrect (i.e. it is a valid entry) because it is needed for the VPN subnet to reach the internet via the NAT instance (e.g. for patch updates). Option C is incorrect because this entry is needed to communicate with the customer's data center via the virtual private gateway. Option D is incorrect because this entry is present by default to allow the resources in the VPC to communicate with each other. So Option A is the right answer. For more information on a VPC with the VPN option, please visit the link http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario3.html
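To make the valid entries concrete, here is a minimal boto3 sketch that adds the internet-bound and data-center routes to the main route table; the route table ID is a placeholder, while the NAT instance and gateway IDs come from the question. The local route for 20.0.0.0/16 exists by default and is not added manually.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
route_table_id = "rtb-0aaaa1111bbbb2222"   # placeholder for the main route table

# Internet-bound traffic from the VPN-only subnet goes via the NAT instance.
ec2.create_route(
    RouteTableId=route_table_id,
    DestinationCidrBlock="0.0.0.0/0",
    InstanceId="i-12345",                   # NAT instance
)

# Traffic for the data center CIDR goes via the virtual private gateway.
ec2.create_route(
    RouteTableId=route_table_id,
    DestinationCidrBlock="172.28.0.0/12",
    GatewayId="vgw-12345",                  # virtual private gateway
)
```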

While hosting a static website with Amazon S3, your static JavaScript code attempts to include resources from another S3 bucket but permission is denied. How might you solve the problem? Choose the correct option from the below: A. Enable CORS Configuration B. Disable Public Object Permissions C. Move the object to the main bucket D. None of the above

A. Enable CORS Configuration Answer - A Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. With CORS support in Amazon S3, you can build rich client-side web applications with Amazon S3 and selectively allow cross-origin access to your Amazon S3 resources. For more information on S3 CORS configuration, please visit the link http://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html
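A minimal sketch of such a CORS configuration applied with boto3 is shown below; the bucket name and allowed origin are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Allow the static site served from https://example.com to GET objects
# from this (placeholder) bucket.
s3.put_bucket_cors(
    Bucket="shared-assets-bucket",
    CORSConfiguration={
        "CORSRules": [{
            "AllowedOrigins": ["https://example.com"],
            "AllowedMethods": ["GET"],
            "AllowedHeaders": ["*"],
            "MaxAgeSeconds": 3000,
        }]
    },
)
```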

Which of the following HTTP methods are supported by Amazon CloudFront? Choose 3 options from the below: A. GET B. POST C. DELETE D. UPDATE

A. GET B. POST C. DELETE Answer - A, B, and C Amazon CloudFront supports the following HTTP methods: GET, HEAD, POST, PUT, DELETE, OPTIONS, and PATCH. This means you can improve the performance of dynamic websites that have web forms, comment, and login boxes, "add to cart" buttons or other features that upload data from end users. For more information on CloudFront Dynamic content, please refer to the below URL: https://aws.amazon.com/cloudfront/dynamic-content/

A user has created a VPC with CIDR 20.0.0.0/16. The user has created one subnet with CIDR 20.0.0.0/16 in this VPC. The user is trying to create another subnet in the same VPC with CIDR 20.0.0.1/24. What will happen in this scenario? A. It will throw a CIDR overlap error B. It is not possible to create a subnet with the same CIDR as the VPC C. The second subnet will be created D. The VPC will modify the first subnet to allow this IP range

A. It will throw a CIDR overlap error Answer - A Since the first subnet already occupies the entire VPC range, the CIDR of the new subnet overlaps with it and AWS displays a CIDR conflict error. For more information on VPC subnets, please refer to the below link http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html
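A small boto3 sketch that reproduces the scenario follows; the VPC ID is a placeholder, and the error code named in the comment is the one normally returned for overlapping subnet CIDRs.

```python
import boto3
from botocore.exceptions import ClientError

ec2 = boto3.client("ec2", region_name="us-east-1")
vpc_id = "vpc-0aaaa1111bbbb2222"   # placeholder for the 20.0.0.0/16 VPC

# The first subnet already consumes the entire VPC range, so any further
# subnet request overlaps with it and is rejected.
try:
    ec2.create_subnet(VpcId=vpc_id, CidrBlock="20.0.1.0/24")
except ClientError as err:
    # Typically InvalidSubnet.Conflict for an overlapping CIDR
    print(err.response["Error"]["Code"])
```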

A company has hired you to assist with the migration of an interactive website that allows registered users to rate local restaurants. Updates to the ratings are displayed on the home page and ratings are updated in real time. Although the website is not very popular today, the company anticipates that it will grow over the next few weeks. They also want to ensure that the website remains highly available. The current architecture consists of a single Windows Server 2008 R2 web server and a MySQL database on Linux. Both reside on an on-premises hypervisor. What would be the most efficient way to transfer the application to AWS, ensuring high performance and availability? A. Launch one Windows Server 2008 R2 instance in us-west-1b and one in us-west-1a and configure auto-scaling. Copy the web files from the on-premises web server to each Amazon EC2 web server, using Amazon S3 as the repository. Launch a multi-AZ MySQL Amazon RDS instance in us-west-1a. Import the data into Amazon RDS from the latest MySQL backup. Create an elastic load balancer (ELB) to front your web servers. Use Route 53 and create an alias record pointing to the ELB. B. Export web files to an Amazon S3 bucket in us-west-1. Run the website directly out of Amazon S3. Launch a multi-AZ MySQL Amazon RDS instance in us-west-1a. Import the data into Amazon RDS from the latest MySQL backup. Use Route 53 and create an alias record pointing to the elastic load balancer. C. Use AWS VM Import/Export to create an Amazon EC2 AMI of the web server. Configure auto-scaling to launch one web server in us-west-1a and one in us-west-1b. Launch a multi-AZ MySQL Amazon RDS instance in us-west-1a. Import the data into Amazon RDS from the latest MySQL backup. Create an elastic load balancer (ELB) in front of your web servers. Use Amazon Route 53 and create an A record pointing to the ELB. D. Use AWS VM Import/Export to create an Amazon EC2 AMI of the web server. Configure auto-scaling to launch one web server in us-west-1a and one in us-west-1b. Launch a Multi-AZ MySQL Amazon RDS instance in us-west-1b. Import the data into Amazon RDS from the latest MySQL backup. Use Amazon Route 53 to create a hosted zone and point an A record to the elastic load balancer.

A. Launch one Windows Server 2008 R2 instance in us-west-1b and one in us-west-1a and configure auto-scaling. Copy the web files from the on-premises web server to each Amazon EC2 web server, using Amazon S3 as the repository. Launch a multi-AZ MySQL Amazon RDS instance in us-west-1a. Import the data into Amazon RDS from the latest MySQL backup. Create an elastic load balancer (ELB) to front your web servers. Use Route 53 and create an alias record pointing to the ELB. Answer - A The main consideration in the question is that the architecture should be highly available with high performance. Option A is CORRECT because (a) the EC2 servers can pull the web files from S3, and (b) auto-scaling of the web servers across two AZs, the Multi-AZ RDS instance, and the Route 53 alias record pointing to the ELB together provide high availability. Option B is incorrect because this is an interactive website and S3 static website hosting is only suitable for static content. Option C is incorrect because Route 53 should use an alias record to reach the ELB, not a plain A record, since an ELB has no fixed IP address. Option D is incorrect because, even though it sets up a Route 53 record for the ELB, it never actually creates an ELB. For more information, please refer to the below URL http://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html

You are moving an existing traditional system to AWS. During migration, you discover that the master server is the single point of failure. Having examined the implementation of the master server, you realize that there is not enough time during migration to re-engineer it to be highly available. You also discover that it stores its state in a local MySQL database. In order to minimize downtime, you select RDS to replace the local database and configure the master to use it. What steps would best allow you to create a self-healing architecture? A. Migrate the local database into a Multi-AZ database. Place the master node into a multi-AZ auto-scaling group with a minimum of one and maximum of one with health checks. B. Migrate the local database into a Multi-AZ database. Place the master node into a Cross Zone ELB with a minimum of one and maximum of one with health checks. C. Replicate the local database into an RDS Read Replica. Place the master node into a Cross Zone ELB with a minimum of one and maximum of one with health checks. D. Replicate the local database into an RDS Read Replica. Place the master node into a multi-AZ auto-scaling group with a minimum of one and maximum of one with health checks.

A. Migrate the local database into a Multi-AZ database. Place the master node into a multi-AZ auto-scaling group with a minimum of one and maximum of one with health checks. Answer - A Option A is CORRECT because (i) for the database, the Multi-AZ architecture provides high availability and can meet the shortest RTO and RPO requirements in case of failure, since it uses synchronous replication and maintains a standby instance that gets promoted to primary, and (ii) for the master server, the auto-scaling group ensures that exactly one healthy server is always running. Option B is incorrect because an ELB cannot enforce a minimum or maximum number of running instances. Option C is incorrect because (i) read replicas do not provide high availability, and (ii) an ELB cannot enforce a minimum or maximum number of running instances. Option D is incorrect because read replicas do not provide high availability. More information on Multi-AZ RDS architecture: Multi-AZ is used for a highly available architecture. If a failover happens, the secondary DB, which is a synchronous replica, will have the data, and it is just the CNAME that changes. A read replica is primarily used for distributing read workloads. For more information on Multi-AZ RDS, please refer to the below link https://aws.amazon.com/rds/details/multi-az/
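A minimal boto3 sketch of the min-1/max-1 auto-scaling group for the master node follows; the launch template and subnet IDs are placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-west-2")

# Keep exactly one master node running; if it fails its health check,
# Auto Scaling replaces it in one of the two subnets (different AZs).
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="master-node",
    LaunchTemplate={"LaunchTemplateId": "lt-0aaaa1111bbbb2222"},
    MinSize=1,
    MaxSize=1,
    DesiredCapacity=1,
    VPCZoneIdentifier="subnet-0aaa1111,subnet-0bbb2222",
    HealthCheckType="EC2",
    HealthCheckGracePeriod=300,
)
```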

Which of the following benefits does adding Multi-AZ deployment in RDS provide? Choose 2 answers from the options given below: A. Multi-AZ deployed database can tolerate an Availability Zone failure. B. Decrease latencies if app servers accessing database are in multiple Availability zones. C. Make database access times faster for all app servers. D. Make database more available during maintenance tasks.

A. Multi-AZ deployed database can tolerate an Availability Zone failure. D. Make database more available during maintenance tasks. Answer - A and D Option A is CORRECT because in a Multi-AZ deployment, if an Availability Zone (AZ) goes down, automatic failover occurs and the DB instance CNAME is pointed to the synchronously updated standby instance in another AZ. Option B is incorrect because Multi-AZ deployment does not affect the latency of the application's DB access. Option C is incorrect because DB access times are not improved by Multi-AZ deployment. Option D is CORRECT because during maintenance tasks the operations are applied to the standby first and the DB instance CNAME can then point to the standby in another AZ, keeping the database available. Some of the advantages of Multi-AZ RDS deployments are given below: If an Availability Zone failure or DB Instance failure occurs, your availability impact is limited to the time automatic failover takes to complete. The availability benefits of Multi-AZ deployments also extend to planned maintenance and backups. In the case of system upgrades like OS patching or DB Instance scaling, these operations are applied first on the standby, prior to the automatic failover. As a result, your availability impact is, again, only the time required for automatic failover to complete. If a storage volume on your primary fails in a Multi-AZ deployment, Amazon RDS automatically initiates a failover to the up-to-date standby. For more information on Multi-AZ RDS deployments please visit the link https://aws.amazon.com/rds/details/multi-az/

A large enterprise wants to adopt CloudFormation to automate administrative tasks and implement the security principles of least privilege and separation of duties. They have identified the following roles with the corresponding tasks in the company: Network administrators: create, modify and delete VPCs, subnets, NACLs, routing tables and security groups. Application operators: deploy complete application stacks (ELB, Auto Scaling groups, RDS), whereas all resources must be deployed in the VPCs managed by the network administrators. Both groups must maintain their own CloudFormation templates and should be able to create, update and delete only their own CloudFormation stacks. The company has followed your advice to create two IAM groups, one for applications and one for networks. Both IAM groups are attached to IAM policies that grant rights to perform the necessary tasks of each group as well as the creation, update, and deletion of CloudFormation stacks. Given the setup and requirements, which statements represent valid design considerations? Choose 2 options from the below: A. Network stack updates will fail upon attempts to delete a subnet with EC2 instances. B. Restricting the launch of EC2 instances into VPCs requires resource level permissions in the IAM policy of the application group. C. Nesting network stacks within application stacks simplifies management and debugging, but requires resource level permissions in the IAM policy of the network group. D. The application stack cannot be deleted before all network stacks are deleted. E. Unless resource level permissions are used on the cloudformation:DeleteStack action, network administrators could tear down application stacks.

A. Network stack updates will fail upon attempts to delete a subnet with EC2 instances. B. Restricting the launch of EC2 instances into VPCs requires resource level permissions in the IAM policy of the application group. Answer - A and B Option A is CORRECT because a subnet cannot be deleted while it still contains instances, so the stack update would fail. Option B is CORRECT because restricting where the application group can launch EC2 instances (i.e. only into the network administrators' VPCs) requires resource-level permissions in the application group's IAM policy. Option C is incorrect because, even though stacks can be nested, this would require one group's IAM policy to include the other group's permissions, which breaks the separation of duties. Option D is incorrect because the application stack can be deleted before the network stacks. Option E is incorrect because the network administrators do not have the rights to delete the application stacks. For more information, please visit the below URL: https://aws.amazon.com/blogs/devops/aws-cloudformation-security-best-practices/

An Amazon Redshift cluster with four nodes runs 24/7/365, and the company may need to add one extra node for one to two days once during the year. Which architecture would have the lowest possible cost for this cluster requirement? Choose the correct answer from the below options: A. Purchase 4 reserved nodes and rely on on-demand instances for the fifth node, if required. B. Purchase 5 reserved nodes to cover all possible usage during the year. C. Purchase 4 reserved nodes and bid on spot instances for the extra node if required. D. Purchase 2 reserved nodes and utilize 3 on-demand nodes only for peak usage times.

A. Purchase 4 reserved nodes and rely on on-demand instances for the fifth node, if required. Answer - A Option A is CORRECT because (a) the application requires 4 nodes throughout the year, so reserved nodes save cost for them, and (b) since the need for the fifth node is not assured, an on-demand node can be purchased if and when needed. Option B is incorrect because reserving the 5th node is unnecessary. Option C is incorrect because, even though spot instances are cheaper than on-demand instances, they should only be used for workloads tolerant of sudden interruption, which the question does not state; in addition, Redshift nodes are not offered as spot instances. Option D is incorrect because reserving only 2 nodes would not be sufficient for the baseline of 4 nodes running all year. Please find the below link for Reserved Instances: https://aws.amazon.com/ec2/pricing/reserved-instances/

Which of the following are Lifecycle events available in OpsWorks? Choose 3 options from the below: A. Setup B. Decommission C. Deploy D. Shutdown

A. Setup C. Deploy D. Shutdown Answer - A, C, and D AWS OpsWorks Stacks defines five lifecycle events: Setup, Configure, Deploy, Undeploy, and Shutdown. Of the options listed, Setup, Deploy, and Shutdown are lifecycle events; Decommission is not. For more information on Lifecycle events, please refer to the below URL: http://docs.aws.amazon.com/opsworks/latest/userguide/workingcookbook-events.html

A company has set up a Direct Connect connection between their on-premises location and their AWS VPC. They want to set up redundancy in case the Direct Connect connection fails. What can they do in this regard? Choose 2 options from the below: A. Set up another Direct Connect connection. B. Set up an IPSec VPN connection. C. Set up an S3 connection. D. Set up a connection via EC2 instances.

A. Set up another Direct Connect connection. B. Set up an IPSec VPN connection. Answer - A and B Options A and B are CORRECT because with A you have a redundant Direct Connect link as a backup if the main Direct Connect connection fails (an expensive solution, but it works), and with B an IPSec VPN is an alternative path between AWS and the on-premises infrastructure (slower connectivity, but it works). More information on Direct Connect: If you have established a second AWS Direct Connect connection, traffic will fail over to the second link automatically. We recommend enabling Bidirectional Forwarding Detection (BFD) when configuring your connections to ensure fast detection and failover. If you have configured a backup IPsec VPN connection instead, all VPC traffic will fail over to the VPN connection automatically. Traffic to/from public resources such as Amazon S3 will be routed over the Internet. If you do not have a backup AWS Direct Connect link or an IPsec VPN link, then Amazon VPC traffic will be dropped in the event of a failure. Traffic to/from public resources will be routed over the Internet. For more information on the Direct Connect FAQs, please visit the below URL: https://aws.amazon.com/directconnect/faqs/

Which of the following can be done by Auto scaling? Choose 2 answers from the options given below: A. Start up EC2 instances when CPU utilization is above threshold. B. Release EC2 instances when CPU utilization is below threshold. C. Increase the instance size when utilization is above threshold. D. Decrease the instance size when utilization is below threshold.

A. Start up EC2 instances when CPU utilization is above threshold. B. Release EC2 instances when CPU utilization is below threshold. Answer - A and B Options A and B are CORRECT because Auto Scaling can launch or terminate instances based on CPU utilization. Options C and D are incorrect because Auto Scaling cannot increase or decrease the instance size based on CPU utilization; it launches instances of the size defined in the launch configuration. As per the AWS documentation, below is what can be done with Auto Scaling; you can only scale horizontally, not vertically. Scale out Amazon EC2 instances seamlessly and automatically when demand increases. Shed unneeded Amazon EC2 instances automatically and save money when demand subsides. Scale dynamically based on your Amazon CloudWatch metrics, or predictably according to a schedule that you define. Replace unhealthy or unreachable instances to maintain higher availability of your applications. Receive notifications via Amazon Simple Notification Service (Amazon SNS) to be alerted when you use Amazon CloudWatch alarms to initiate Auto Scaling actions, or when Auto Scaling completes an action. Run On-Demand or Spot Instances, including those inside your virtual private cloud (VPC) or high performance computing (HPC) clusters. If you're signed up for the Amazon EC2 service, you're already registered to use Auto Scaling and can begin using the feature via the API or command line interface. For more information on Auto Scaling please visit the link https://aws.amazon.com/autoscaling/
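A minimal boto3 sketch of wiring a CloudWatch CPU alarm to a scale-out policy is shown below; the group name and thresholds are placeholders, and a matching scale-in policy would mirror it with ScalingAdjustment=-1.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Add one instance when the policy is triggered.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="scale-out-on-cpu",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=1,
    Cooldown=300,
)

# Trigger the policy when average CPU across the group exceeds 70%.
cloudwatch.put_metric_alarm(
    AlarmName="web-asg-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=70.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)
```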

What are some of the common types of content that are supported by a web distribution via CloudFront? Choose 3 options from the below: A. Static content B. Live events C. Multimedia content D. Peer to peer networking

A. Static content B. Live events C. Multimedia content Answer - A, B, and C You can use web distributions to serve the following content over HTTP or HTTPS: Static and dynamic download content, for example, .html, .css, .php, and image files, using HTTP or HTTPS. Multimedia content on demand using progressive download and Apple HTTP Live Streaming (HLS). A live event, such as a meeting, conference, or concert, in real time. For live streaming, you create the distribution automatically by using an AWS CloudFormation Stack. Hence, options A, B, and C are CORRECT. For more information on CloudFront distribution, please refer to the below URL: http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-overview.html

You are maintaining an application that is spread across multiple web servers and has incoming traffic balanced by ELB. The application allows users to upload pictures. Currently, each web server stores the image and a background task synchronizes the data between servers. However, the synchronization task can no longer keep up with the number of images uploaded. What change could you make so that all web servers have a place to store and read images at the same time? Choose an answer from the below options: A. Store the images in Amazon S3. B. Store the images on Amazon CloudFront. C. Store the images on Amazon EBS. D. Store the images on the ELB.

A. Store the images in Amazon S3. Answer - A Option A is CORRECT because S3 provides a durable, secure, cost-effective, and highly available storage service for the uploaded pictures, shared by all web servers. Option B is incorrect because the application needs a shared storage solution, not a global content distribution service; CloudFront is also a costlier solution compared to S3. Option C is incorrect because you cannot share EBS volumes among multiple EC2 instances. Option D is incorrect because an ELB cannot be used as a storage service. For more information on AWS S3, please refer to the below URL: http://docs.aws.amazon.com/AmazonS3/latest/dev/Welcome.html

Which of the following is an example of a good Amazon DynamoDB hash key schema for provisioned throughput efficiency? Choose an answer from the below options: A. Student ID where every student has a unique ID. B. College ID where there are two colleges in the university. C. Class ID where every student is in one of the four classes. D. Tuition Plan where the vast majority of students are in state and the rest are out of state.

A. Student ID where every student has a unique ID. Answer - A Option A is CORRECT because DynamoDB distributes items across partitions based on the hash (partition) key value. Every student has a unique Student ID, so the data is spread evenly across partitions, which makes the provisioned throughput usage efficient. Option B is incorrect because the data should spread evenly across all partitions for the best throughput; with only two colleges there would be only two distinct key values, which is far less efficient than using Student ID as the hash key. Option C is incorrect because partitioning on Class ID (only four values) will not be as efficient as doing so on Student ID. Option D is incorrect because there are only two possible values, in-state and out-of-state, with a heavy skew towards one of them; this will not be as efficient as using Student ID as the hash key. For more information on DynamoDB tables, please visit the URL: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/WorkingWithTables.html
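For illustration, a minimal boto3 sketch of a table keyed on the unique Student ID; the table and attribute names are hypothetical.

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Each student has a unique StudentId, so items spread evenly across
# partitions and the provisioned throughput is used efficiently.
dynamodb.create_table(
    TableName="Students",
    AttributeDefinitions=[{"AttributeName": "StudentId", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "StudentId", "KeyType": "HASH"}],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)
```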

A company has an application that is hosted on an EC2 instance. The code is written in .NET and connects to a MySQL RDS database. If you're executing .NET code against AWS on an EC2 instance that is assigned an IAM role, which of the following is a true statement? Choose the correct option from the below: A. The code will assume the same permissions as the EC2 role B. The code must have AWS access keys in order to execute C. Only .NET code can assume IAM roles D. None of the above

A. The code will assume the same permissions as the EC2 role Answer - A When an IAM role is attached to an EC2 instance, the AWS SDKs (including the AWS SDK for .NET) automatically obtain temporary credentials for that role from the instance metadata service. The code therefore runs with the same permissions as the role, and no access keys need to be embedded in the application, which is the IAM best practice. Option A is CORRECT for this reason; Option B is incorrect because no access keys are required when a role is attached, and Option C is incorrect because any SDK, not just .NET, can use an instance role. For the best practices on IAM policies, please visit the link http://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html
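Although the question concerns the .NET SDK, the behaviour is the same for any AWS SDK; the boto3 sketch below (with a placeholder bucket name) simply shows that no access keys appear in code when the instance role supplies the credentials.

```python
import boto3

# On an EC2 instance with an IAM role attached, the SDK resolves temporary
# credentials from the instance metadata service automatically.
s3 = boto3.client("s3")                      # no credentials passed in code
response = s3.list_objects_v2(Bucket="app-config-bucket")
for obj in response.get("Contents", []):
    print(obj["Key"])
```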

A document storage company is deploying their application to AWS and changing their business model to support both Free Tier and Premium Tier users. The Premium Tier users will be allowed to store up to 200GB of data and Free Tier customers will be allowed to store only 5GB. The customer expects that billions of files will be stored. All users need to be alerted when approaching 75 percent quota utilization and again at 90 percent quota use. To support the Free Tier and Premium Tier users, how should they architect their application? A. The company should utilize an Amazon Simple Workflow Service activity worker that updates the user's used data counter in Amazon DynamoDB. The activity worker will use Simple Email Service to send an email if the counter increases above the appropriate thresholds. B. The company should deploy an Amazon Relational Database Service (RDS) relational database with a stored objects table that has a row for each stored object along with the size of each object. The upload server will query the aggregate consumption of the user in question (by first determining the files stored by the user, and then querying the stored objects table for respective file sizes) and send an email via Amazon Simple Email Service if the thresholds are breached. C. The company should write both the content length and the username of the file's owner as S3 metadata for the object. They should then create a file watcher to iterate over each object, aggregate the size for each user, and send a notification via Amazon Simple Queue Service to an emailing service if the storage threshold is exceeded. D. The company should create two separate Amazon Simple Storage Service buckets, one for data storage for Free Tier users, and another for data storage for Premium Tier users. An Amazon Simple Workflow Service activity worker will query all objects for a given user based on the bucket the data is stored in and aggregate storage. The activity worker will notify the user via Amazon Simple Notification Service when necessary.

A. The company should utilize an Amazon Simple Workflow Service activity worker that updates the user's used data counter in Amazon DynamoDB. The activity worker will use Simple Email Service to send an email if the counter increases above the appropriate thresholds. Answer - A Option A is CORRECT because DynamoDB is a highly scalable service, and a per-user counter that is updated on every upload avoids scanning billions of objects; the activity worker can check the counter against the 75% and 90% thresholds and notify via SES. Option B is incorrect because an RDS table with a row per object would not be a suitable solution for billions of files, and aggregating per-user consumption on every upload would be expensive. Options C and D are both incorrect because iterating over billions of S3 objects to aggregate each user's usage does not scale and performs poorly.
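A rough sketch of the counter update described in option A, using a DynamoDB atomic ADD; the table name, attribute names, and notify helper are hypothetical, and the real notification would be sent through Amazon SES by the activity worker.

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

QUOTA_BYTES = 5 * 1024**3          # Free Tier quota (5 GB)

def record_upload(user_id: str, object_size: int) -> None:
    """Atomically add the new object's size to the user's usage counter."""
    result = dynamodb.update_item(
        TableName="UserUsage",
        Key={"UserId": {"S": user_id}},
        UpdateExpression="ADD BytesUsed :size",
        ExpressionAttributeValues={":size": {"N": str(object_size)}},
        ReturnValues="UPDATED_NEW",
    )
    used = int(result["Attributes"]["BytesUsed"]["N"])
    if used >= 0.90 * QUOTA_BYTES:
        notify(user_id, 90)
    elif used >= 0.75 * QUOTA_BYTES:
        notify(user_id, 75)

def notify(user_id: str, percent: int) -> None:
    # Placeholder: the activity worker would call Amazon SES here.
    print(f"user {user_id} has passed {percent}% of their quota")
```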

You are having trouble maintaining session states on some of your applications that are using an Elastic Load Balancer (ELB). There does not seem to be an even distribution of sessions across your ELB. Which of the following is the method recommended by AWS to rectify this problem? Choose the correct option from the below: A. Use ElastiCache, which is a web service that makes it easy to set up, manage, and scale a distributed in-memory cache environment in the cloud. B. Use a special cookie to track the instance for each request to each listener. When the load balancer receives a request, it will then check to see if this cookie is present in the request. C. Use the sticky session feature (also known as session affinity), which enables the load balancer to bind a user's session to a specific instance. This ensures that all requests from the user during the session are sent to the same instance. D. If your application does not have its own session cookie, then you can configure Elastic Load Balancing to create a session cookie by specifying your own stickiness duration.

A. Use ElastiCache, which is a web service that makes it easy to set up, manage, and scale a distributed in-memory cache environment in the cloud. Answer - A Option A is CORRECT because ElastiCache can be used to store the session state in a cache rather than on the individual web servers, so every server behind the ELB sees the same session data; it also improves performance by allowing you to quickly retrieve the session state. Options B and D are incorrect because cookies only help identify the instance a request is tied to; they do not store any session state. Option C is incorrect because sticky sessions bind a user's session to a particular instance, but they do not store session state and do not help with even distribution. More information on Amazon ElastiCache: Amazon ElastiCache is a web service that makes it easy to deploy and run Memcached or Redis protocol-compliant server nodes in the cloud. Amazon ElastiCache improves the performance of web applications by allowing you to retrieve information from a fast, managed, in-memory system, instead of relying entirely on slower disk-based databases. The service simplifies and offloads the management, monitoring, and operation of in-memory environments, enabling your engineering resources to focus on developing applications. Using Amazon ElastiCache, you can not only improve load and response times to user actions and queries but also reduce the cost associated with scaling web applications. For an example of using ElastiCache as a session store, please refer to the below link https://aws.amazon.com/blogs/developer/elasticache-as-an-asp-net-session-store/

If you want to deliver private content to users from an S3 bucket, which of the below options is the most feasible to fulfill this requirement? Choose an option from the below: A. Use pre-signed URL B. Use EC2 to deliver content from the S3 bucket C. Use SQS to deliver content from the S3 bucket D. None of the above

A. Use pre-signed URL Answer - A Option A is CORRECT because a pre-signed URL gives you access to the object identified in the URL, provided that the creator of the pre-signed URL has permissions to access that object. That is, if you receive a pre-signed URL to upload an object, you can upload the object only if the creator of the pre-signed URL has the necessary permissions to upload that object. Option B, C, and D are all incorrect. For more information on pre-signed URLs, please refer to the below URL http://docs.aws.amazon.com/AmazonS3/latest/dev/PresignedUrlUploadObject.html
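A minimal boto3 sketch of generating a time-limited download link for a private object; the bucket and key are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# The URL inherits the permissions of the credentials that signed it
# and stops working after ExpiresIn seconds.
url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={"Bucket": "private-content-bucket", "Key": "reports/2024/summary.pdf"},
    ExpiresIn=3600,   # valid for one hour
)
print(url)
```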

Your application is using an ELB in front of an Auto Scaling group of web/application servers deployed across two AZs and a Multi-AZ RDS Instance for data persistence. The database CPU is often above 80% usage and 90% of I/O operations on the database are reads. To improve the performance, you recently added a single-node Memcached ElastiCache Cluster to cache frequent DB query results. In the next weeks the overall workload is expected to grow by 30%. Do you need to change anything in the architecture to maintain high availability of the application with the anticipated additional load, and why? A. Yes. You should deploy two Memcached ElastiCache Clusters in different AZs, because the RDS Instance will not be able to handle the load if the cache node fails. B. No. If the cache node fails, the automated ElastiCache node recovery feature will prevent any availability impact. C. Yes. You should deploy the Memcached ElastiCache Cluster with two nodes in the same AZ as the RDS DB master instance to handle the load if one cache node fails. D. No. If the cache node fails you can always get the same data from the DB without having any availability impact.

A. Yes. You should deploy two Memcached ElastiCache Clusters in different AZs, because the RDS Instance will not be able to handle the load if the cache node fails. Answer - A Option A is CORRECT because having two clusters in different AZs provides high availability of the cache nodes, which removes the single point of failure. It helps cache the data, hence reducing the overload on the database, maintaining the availability and reducing the impact. Option B is incorrect because, even though AWS will automatically recover the failed node, there are no other nodes in the cluster once the failure happens. So, the data from the cluster would be lost once that single node fails. For higher availability, there should be multiple nodes. Also, once the cache node fails, all the cached read load will go to the database, which will not be able to handle the load with a 30% increase over current levels. This means there will be an availability impact. Option C is incorrect because provisioning the nodes in the same AZ does not provide tolerance for an AZ failure. For higher availability, the nodes should be spread across multiple AZs. Option D is incorrect because the very purpose of the cache node was to reduce the impact on the database by not overloading it. If the cache node fails, the database will not be able to handle the 30% increase in the load, so there will be an availability impact. More information on this topic from AWS Documentation: http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/BestPractices.html Mitigating Node Failures: To mitigate the impact of a node failure, spread your cached data over more nodes. Because Memcached does not support replication, a node failure will always result in some data loss from your cluster. When you create your Memcached cluster you can create it with 1 to 20 nodes, or more by special request. Partitioning your data across a greater number of nodes means you'll lose less data if a node fails. For example, if you partition your data across 10 nodes, any single node stores approximately 10% of your cached data. In this case, a node failure loses approximately 10% of your cache, which needs to be replaced when a replacement node is created and provisioned. Mitigating Availability Zone Failures: To mitigate the impact of an availability zone failure, locate your nodes in as many availability zones as possible. In the unlikely event of an AZ failure, you will lose only the data cached in that AZ, not the data cached in the other AZs.

Company B has created an e-commerce site using DynamoDB and is designing a table named Products that includes items purchased and the users who purchased them. When creating a primary key on this table which of the following would be the best attribute? Select the best possible answer: A. user_id where there are many users to few products B. product_id where there are few products to many users C. category_id where there are few categories to many products D. None of the above

A. user_id where there are many users to few products Answer - A When choosing a partition (hash) key, pick the attribute with the largest number of distinct values relative to the number of items (the "many to few" principle), so that items and request load are spread evenly across partitions and the provisioned throughput is used efficiently. Since there are many users and few products, user_id gives the best distribution, so option A is the best answer. For more information on DynamoDB, please visit the link https://aws.amazon.com/dynamodb/faqs/

A customer is running an application in the US-West region and wants to set up disaster recovery failover to Singapore region. The customer is interested in achieving a low RPO for an RDS multi-AZ DB instance. Which approach is best suited to this need? A. Synchronous replication B. Asynchronous replication C. Route53 health checks D. Copying of RDS incremental snapshots

B. Asynchronous replication Answer - B Cross-region replication for RDS is done asynchronously, using cross-region read replicas; the synchronous replication of a Multi-AZ deployment only applies within a single region. Synchronous replication across regions would add too much latency overhead. Please refer to a blog article on cross-region replication for MySQL https://aws.amazon.com/blogs/aws/cross-region-read-replicas-for-amazon-rds-for-mysql/

You're migrating an existing application to the AWS cloud. The application will be primarily using EC2 instances. This application needs to be built with the highest availability architecture available. The application currently relies on hardcoded hostnames for intercommunication between the three tiers. You've migrated the application and configured the tiers to use an internal Elastic Load Balancer for serving the traffic. The load balancer hostname is demo-app.us-east-1.elb.amazonaws.com. The current hard-coded hostname in your application used to communicate between your multi-tier application is demolayer.example.com. What is the best method for architecting this setup to have as much high availability as possible? Choose the correct answer from the below options: A. Create an environment variable passed to the EC2 instances using user-data with the ELB hostname, demo-app.us-east-1.elb.amazonaws.com. B. Create a private resource record set using Route 53 with a hostname of demolayer.example.com and an alias record to demo-app.us-east-1.elb.amazonaws.com. C. Create a public resource record set using Route 53 with a hostname of demolayer.example.com and an alias record to demo-app.us-east-1.elb.amazonaws.com. D. Add a CNAME record to the existing on-premise DNS server with a value of demo-app.us-east-1.elb.amazonaws.com. Create a public resource record set using Route 53 with a hostname of applayer.example.com and an alias record to demo-app.us-east-1.elb.amazonaws.com.

B. Create a private resource record set using Route 53 with a hostname of demolayer.example.com and an alias record to demo-app.us-east-1.elb.amazonaws.com. Answer - B Since demolayer.example.com is an internal DNS name, the best approach is to use Route 53 to create a resource record set in a private hosted zone and point it at the ELB with an alias record. While ordinary Amazon Route 53 resource record sets are standard DNS resource record sets, alias resource record sets provide an Amazon Route 53-specific extension to DNS functionality. Instead of an IP address or a domain name, an alias resource record set contains a pointer to a CloudFront distribution, an Elastic Beanstalk environment, an ELB Classic or Application Load Balancer, an Amazon S3 bucket that is configured as a static website, or another Amazon Route 53 resource record set in the same hosted zone. Option A is incorrect because an environment variable does not provide a managed mapping between the existing hard-coded hostname and the ELB hostname. Option B is CORRECT because it creates an internal alias record set that maps the hard-coded hostname to the ELB hostname. Options C and D are incorrect because the record set should be private, not public, since the mapping between the hard-coded hostname and the ELB hostname is only used internally. For more information on alias and non-alias records please refer to the below link http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-choosing-alias-non-alias.html
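A minimal boto3 sketch of the alias record in a private hosted zone follows; the hosted zone ID and the ELB's canonical hosted zone ID are placeholders (the latter comes from the ELB API or console).

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0000000000EXAMPLE",          # private zone for example.com
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "demolayer.example.com",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "Z1111111111ELB",   # the ELB's own zone ID
                    "DNSName": "demo-app.us-east-1.elb.amazonaws.com",
                    "EvaluateTargetHealth": True,
                },
            },
        }]
    },
)
```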

A user has launched a large EBS backed EC2 instance in the US-East-1a region. The user wants to achieve Disaster Recovery (DR) for that instance by creating another small instance in Europe. How can the user achieve DR? A. Copy the running instance using the "Instance Copy" command to the EU region. B. Create an AMI of the instance and copy the AMI to the EU region. Then launch the instance from the EU AMI. C. Copy the instance from the US East region to the EU region. D. Use the "Launch more like this" option to copy the instance from one region to another.

B. Create an AMI of the instance and copy the AMI to the EU region. Then launch the instance from the EU AMI. Answer - B Options A and C are incorrect because you cannot directly copy a running instance between regions; you need to create an AMI of the instance first. Option B is CORRECT because if you need an AMI in another region you have to copy the AMI across regions; by default, AMIs that you have created are not available in other regions. Option D is incorrect because "Launch More Like This..." only enables you to use a current instance as a base for launching other instances in the same Availability Zone. It does not clone your selected instance; it only replicates some configuration details. To create a copy of your instance in another region, first create an AMI from it, copy the AMI, and then launch instances from the copied AMI. For the full details on copying AMIs, please visit the link - http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/CopyingAMIs.html
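A minimal boto3 sketch of the copy step follows; the AMI ID is a placeholder, and the client is created in the destination (EU) region because the copy is initiated there.

```python
import boto3

# Copy the AMI from US East (N. Virginia) into an EU region, then launch
# the smaller DR instance from the new AMI ID.
ec2_eu = boto3.client("ec2", region_name="eu-west-1")

copy = ec2_eu.copy_image(
    Name="web-server-dr",
    SourceImageId="ami-0aaaa1111bbbb2222",
    SourceRegion="us-east-1",
)
print(copy["ImageId"])   # use this AMI ID when launching the DR instance
```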

What are the steps that get carried out by OpsWorks when you attach a load balancer to a layer in OpsWorks? Choose 3 options from the below: A. Terminates the EC2 Instances. B. Deregisters any currently registered instances. C. Automatically registers the layer's instances when they come online and deregisters instances when they leave the online state, including load-based and time-based instances. D. Automatically activates and deactivates the instances' Availability Zones.

B. Deregisters any currently registered instances. C. Automatically registers the layer's instances when they come online and deregisters instances when they leave the online state, including load-based and time-based instances. D. Automatically activates and deactivates the instances' Availability Zones. Answer - B, C, and D For the exam, remember that after you attach a load balancer to a layer, AWS OpsWorks Stacks does the following: deregisters any currently registered instances; automatically registers the layer's instances when they come online and deregisters instances when they leave the online state, including load-based and time-based instances; and automatically activates and deactivates the instances' Availability Zones. Hence, options B, C, and D are CORRECT. For more information on working with OpsWorks layer ELBs, please refer to the below link http://docs.aws.amazon.com/opsworks/latest/userguide/layers-elb.html

You have been asked to design the storage layer for an application. The application requires disk performance of at least 100,000 IOPS. In addition, the storage layer must be able to survive the loss of an individual disk, EC2 instance, or Availability Zone without any data loss. The volume you provide must have a capacity of at least 3 TB. Which of the following designs will meet these objectives? A. Instantiate an i2.8xlarge instance in us-east-1a. Create a RAID 0 volume using the four 800GB SSD ephemeral disks provided with the instance. Provision 3x1TB EBS volumes, attach them to the instance, and configure them as a second RAID 0 volume. Configure synchronous, block-level replication from the ephemeral-backed volume to the EBS-backed volume. B. Instantiate an i2.8xlarge instance in us-east-1a. Create a RAID 0 volume using the four 800GB SSD ephemeral disks provided with the instance. Configure synchronous block-level replication to an identically configured instance in us-east-1b. C. Instantiate a c3.8xlarge instance in us-east-1. Provision an AWS Storage Gateway and configure it for 3 TB of storage and 100,000 IOPS. Attach the volume to the instance. D. Instantiate a c3.8xlarge instance in us-east-1. Provision 4x1TB EBS volumes, attach them to the instance, and configure them as a single RAID 5 volume. Ensure that EBS snapshots are performed every 15 minutes. E. Instantiate a c3.8xlarge instance in us-east-1. Provision 3x1TB EBS volumes, attach them to the instance, and configure them as a single RAID 0 volume. Ensure that EBS snapshots are performed every 15 minutes.

B. Instantiate an i2.8xlarge instance in us-east-1a. Create a RAID 0 volume using the four 800GB SSD ephemeral disks provided with the instance. Configure synchronous block-level replication to an identically configured instance in us-east-1b. Answer - B Option A is incorrect because this configuration sits entirely in a single AZ; there would be data loss if the entire AZ goes down. Option B is CORRECT because (a) it uses a RAID 0 configuration that stripes across all the volumes and gives the aggregated IOPS performance, and (b) synchronous replication to a second instance in another AZ provides fault tolerance even if an entire AZ becomes unavailable. Option C is incorrect because the Storage Gateway backs up the data asynchronously, whereas the scenario demands synchronous replication to prevent any data loss. Option D is incorrect because RAID 5 is not recommended for Amazon EBS, since the parity write operations consume some of the IOPS available to the volumes. Option E is incorrect because, even with snapshots taken every 15 minutes, any data written since the last snapshot would be lost on failure, and the requirement is absolutely no data loss.

A user has created a public subnet with VPC and launched an EC2 instance within it. The user is trying to delete the subnet. What will happen in this scenario? A. It will delete the subnet and make the EC2 instance as a part of the default subnet. B. It will not allow the user to delete the subnet until the instances are terminated. C. It will delete the subnet as well as terminate the instances. D. The subnet can never be deleted independently, but the user has to delete the VPC first.

B. It will not allow the user to delete the subnet until the instances are terminated. Answer - B In AWS, when you try to delete a subnet that still contains instances, the deletion is rejected with a dependency error until the instances are terminated. Hence, option B is the CORRECT answer. For more information on VPC and subnets please visit the link http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html

You are designing security inside your VPC. You are considering the options for establishing separate security zones, and enforcing network traffic rules across the different zones to limit which instances can communicate. How would you accomplish these requirements? Choose 2 options from the below: A. Configure a security group for every zone. Configure a default allow all rule. Configure explicit deny rules for the zones that shouldn't be able to communicate with one another. B. Use NACLs to explicitly allow or deny communication between the different IP address ranges, as required for inter-zone communication. C. Configure multiple subnets in your VPC, one for each zone. Configure routing within your VPC in such a way that each subnet only has routes to other subnets with which it needs to communicate, and doesn't have routes to subnets with which it shouldn't be able to communicate. D. Configure a security group for every zone. Configure allow rules only between zones that need to be able to communicate with one another. Use the implicit deny all rule to block any other traffic.

B. Use NACLs to explicitly allow or deny communication between the different IP address ranges, as required for inter-zone communication. D. Configure a security group for every zone. Configure allow rules only between zones that need to be able to communicate with one another. Use the implicit deny all rule to block any other traffic. Answer - B and D Option A is incorrect because you cannot set up explicit deny rules in Security Groups; they only support allow rules. Option B is CORRECT because NACLs let you explicitly allow or deny traffic based on IP address ranges. Option C is incorrect because every route table in a VPC contains a local route covering the VPC CIDR that cannot be removed, so all subnets in a VPC can always route to one another; route tables cannot be used to block inter-subnet traffic. Option D is CORRECT because the Security Group in this case would act like a firewall that provides security and control at the port/protocol level, with an implicit "deny all" rule so that only what is needed is allowed. For more information on VPC and subnets, please visit the below URL: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html

A company is in the evaluation phase of deploying a Redshift cluster. Which of the following types of instances should the company think of deploying for their Redshift cluster during this phase? Choose an answer from the options given below: A. Reserved instances because they are cost effective B. On-Demand C. Spot Instances because they are the least cost option D. Combination of all 3 types of instances

B. On-Demand Answer - B Option A is incorrect because with Reserved Instances the company would be committed to paying for the instances irrespective of whether they utilize all of them or only some of them. Option B is CORRECT because in the evaluation phase of your project, or when you're developing a proof of concept, on-demand pricing gives you the flexibility to pay as you go, to pay only for what you use, and to stop paying at any time by shutting down or deleting clusters. After you have established the needs of your production environment and begin the implementation phase, you may consider reserving compute nodes by purchasing one or more offerings. Option C is incorrect because even though spot instances are the cheapest, they carry the risk of being interrupted at very short notice and their price depends on current spare capacity; spot instances are recommended only if the application is tolerant of interruptions. Option D is incorrect because, as mentioned above, reserved instances may not be the best choice since there is no mention of the duration of the evaluation period, and spot instances cannot be used since there is no mention of the company or its application being tolerant of the interruption risk associated with them. For more information on the type of instances to choose for the Redshift cluster please refer to the below URL: http://docs.aws.amazon.com/redshift/latest/mgmt/purchase-reserved-node-instance.html

A user is accessing RDS from an application. The user has enabled the Multi-AZ feature with the MS SQL RDS DB. During a planned outage how will AWS ensure that a switch from DB to a standby replica will not affect access to the application? A. RDS will have an internal IP which will redirect all requests to the new DB B. RDS uses DNS to switchover to standby replica for seamless transition C. The switch over changes hardware so RDS does not need to worry about access D. RDS will have both the DBs running independently and the user has to manually switch over

B. RDS uses DNS to switchover to standby replica for seamless transition Answer - B Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. In case of an infrastructure failure (for example, instance hardware failure, storage failure, or network disruption), Amazon RDS performs an automatic failover to the standby by changing the CNAME for the DB instance to point to the standby, so that you can resume database operations as soon as the failover is complete. Option A is incorrect because there is no internal IP maintained by RDS for this purpose. Option B is CORRECT because, as mentioned above, RDS performs the automatic failover by flipping the CNAME for the DB instance from the primary to the standby instance. Option C is incorrect because RDS does not swap hardware for the existing instance during a failover. Option D is incorrect because with Multi-AZ no manual intervention is needed for the failover. For more information on RDS Multi-AZ please visit the link - http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html
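A minimal sketch of why this works transparently for the application, assuming boto3 is available; the DB instance identifier used here is hypothetical. The application should always build its connection string from the RDS DNS endpoint returned by the API, because that CNAME is what RDS repoints to the standby during a Multi-AZ failover.

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Hypothetical DB instance identifier; look up its DNS endpoint.
resp = rds.describe_db_instances(DBInstanceIdentifier="my-multiaz-sqlserver")
endpoint = resp["DBInstances"][0]["Endpoint"]["Address"]
port = resp["DBInstances"][0]["Endpoint"]["Port"]

# Build the connection string from the endpoint DNS name, never from a
# resolved IP: during a Multi-AZ failover RDS flips this CNAME to the
# standby, so the same hostname keeps working without application changes.
print(f"Server={endpoint},{port}")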

You've been tasked with moving an e-commerce web application from a customer's data center into a VPC. The application must be fault tolerant as well as highly scalable. Moreover, the customer is adamant that service interruptions not affect the user experience. As you near launch, you discover that the application currently uses multicast to share session state between web servers. In order to handle session state within the VPC, which of the following options would you choose? A. Enable session stickiness via Elastic Load Balancing. B. Store session state in Amazon ElastiCache for Redis. C. Create a mesh VPN between instances and allow multicast on it. D. Store session state in Amazon Relational Database Service.

B. Store session state in Amazon ElastiCache for Redis. Answer - B Option A is incorrect because ELB does not help in storing the state; it only routes traffic to the same instance by session cookie. If the EC2 instance fails, the session is lost. Option B is CORRECT because Redis is a fast, open source, in-memory data store and caching service. It is highly available, reliable, and high-performing, making it suitable for the most demanding applications such as this one. Option C is incorrect because a mesh VPN is neither fault tolerant nor highly scalable - the client's real priorities - and its failure would impact users. The supernode that handles registration is a single point of failure, and if it fails, new VPN nodes would not be able to register. The nodes also wouldn't register across multiple AZs; even if it were possible, it would be very cumbersome. Option D is incorrect because RDS is not highly scalable for this kind of session workload. For more information on ElastiCache, please visit the below URL: https://aws.amazon.com/elasticache/
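A minimal sketch, assuming an ElastiCache for Redis endpoint (the hostname below is hypothetical) and the open-source redis-py client, of how any web server in any AZ could externalize session state instead of relying on multicast or sticky sessions.

import json
import redis

# Hypothetical ElastiCache for Redis endpoint; every web server reads and
# writes the same session store, so losing an instance loses no state.
r = redis.Redis(host="my-sessions.abc123.use1.cache.amazonaws.com", port=6379)

def save_session(session_id, data, ttl_seconds=1800):
    # Store the session as JSON with an expiry so stale sessions age out.
    r.setex(f"session:{session_id}", ttl_seconds, json.dumps(data))

def load_session(session_id):
    raw = r.get(f"session:{session_id}")
    return json.loads(raw) if raw else None

save_session("u-42", {"cart": ["sku-1", "sku-2"], "user": "alice"})
print(load_session("u-42"))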

Your customer is implementing a video-on-demand streaming platform on AWS. The requirement is to support multiple client devices such as iOS, Android, and Windows, using a standard client player and streaming technology, with a scalable and cost-effective architecture. Which architecture meets the requirements? A. Store the video contents to Amazon Simple Storage Service (S3) as an origin server. Configure the Amazon CloudFront distribution with a streaming option to stream the video contents. B. Store the video contents to Amazon S3 as an origin server. Configure the Amazon CloudFront distribution with a download option to stream the video contents C. Launch a streaming server on Amazon Elastic Compute Cloud (EC2) (for example, Adobe Media Server), and store the video contents as an origin server. Configure the Amazon CloudFront distribution with a download option to stream the video contents. D. Launch a streaming server on Amazon EC2 (for example, Adobe Media Server), and store the video contents as an origin server. Launch and configure the required amount of streaming servers on Amazon EC2 as an edge server to stream the video contents.

B. Store the video contents to Amazon S3 as an origin server. Configure the Amazon CloudFront distribution with a download option to stream the video contents Answer - B Option A is incorrect because it uses the CloudFront distribution with the streaming (RTMP) option, which does not work with a standard player on all platforms; the download option should be used instead. Option B is CORRECT because (a) it uses a CloudFront distribution with the download option to stream the on-demand videos (for example via HLS) to any client device, and (b) it uses S3 as the origin, which keeps the cost low. Option C is incorrect because (a) provisioning streaming EC2 instances is a costly solution, and (b) the videos are to be delivered on-demand, not live-streamed. Option D is incorrect because the videos are to be delivered on-demand, not live-streamed, so a streaming server is not required, and running a fleet of EC2 edge servers is not cost-effective. For more information on live and on-demand streaming using CloudFront, please visit the below URL: https://aws.amazon.com/blogs/aws/using-amazon-cloudfront-for-video-streaming/

Your company hosts an on-premises legacy engineering application with 900GB of data shared via a central file server. The engineering data consists of thousands of individual files ranging in size from megabytes to multiple gigabytes. Engineers typically modify 5-10 percent of the files a day. Your CTO would like to migrate this application to AWS, but only if the application can be migrated over the weekend to minimize user downtime. You calculate that it will take a minimum of 48 hours to transfer 900GB of data using your company's existing 45-Mbps Internet connection. After replicating the application's environment in AWS, which option will allow you to move the application's data to AWS without losing any data and within the given timeframe? A. Copy the data to Amazon S3 using multiple threads and multi-part upload for large files over the weekend, and work in parallel with your developers to reconfigure the replicated application environment to leverage Amazon S3 to serve the engineering files. B. Sync the application data to Amazon S3 starting a week before the migration, on Friday morning perform a final sync, and copy the entire data set to your AWS file server after the sync completes. C. Copy the application data to a 1-TB USB drive on Friday and immediately send overnight, with Saturday delivery, the USB drive to AWS Import/Export to be imported as an EBS volume, mount the resulting EBS volume to your AWS file server on Sunday D. Leverage the AWS Storage Gateway to create a Gateway-Stored volume. On Friday copy the application data to the Storage Gateway volume. After the data has been copied, perform a snapshot of the volume and restore the volume as an EBS volume to be attached to your AWS file server on Sunday.

B. Sync the application data to Amazon S3 starting a week before the migration, on Friday morning perform a final sync, and copy the entire data set to your AWS file server after the sync completes. Answer - B In this scenario, the following important points need to be considered: (i) only a fraction of the data (5-10%) is modified every day, (ii) there are only 48 hours for the migration, (iii) downtime should be minimized, and (iv) there should be no data loss. Option A is incorrect because, even though it is theoretically possible to move 900GB in 48 hours over a 45-Mbps link (roughly 970GB of capacity), this only works if the bandwidth is consistently utilized to the maximum; any problem with the multipart uploads leaves little time to recover, so it is not a practical solution. Option B is a proactive approach, which is CORRECT, because the data changes are limited and can be propagated over the week. The bandwidth is used efficiently, and you still have sufficient time and bandwidth in hand should there be any unexpected issues while migrating. Option C is incorrect because physically shipping the disk to Amazon involves many external factors such as shipping delays, loss of or damage to the disk, and the remaining time may not be sufficient to import the data in a day (Sunday). This is a very risky option and should not be exercised. Option D is incorrect because gateway-stored volumes upload data to S3 asynchronously, and taking the snapshot and restoring it as an EBS volume adds further time, so this option is slow and may not fit in the 48-hour window.
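A rough sketch of the proactive approach in option B, assuming the AWS CLI is installed and using a hypothetical source path and bucket name. "aws s3 sync" is incremental, so running it nightly during the week moves only new or changed files and keeps the final Friday sync small.

import subprocess

# Hypothetical source path and destination bucket. "aws s3 sync" compares
# size/timestamps and only transfers files that changed, which is why the
# 5-10% daily churn fits comfortably within the 45-Mbps link.
SOURCE = "/mnt/engineering-share"
DEST = "s3://example-engineering-migration"

subprocess.run(["aws", "s3", "sync", SOURCE, DEST], check=True)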

A recent increase in the number of users of an application hosted on an EC2 instance that you manage has caused the instance's OS to run out of CPU resources and crash. The crash caused several users' unsaved data to be lost, and your supervisor wants to know how this problem can be avoided in the future. Which of the following would you not recommend? Choose the correct answer from the below options: A. Redesign the application so that users' unsaved data is periodically written to disk. B. Take frequent snapshots of the EBS volume during business hours to ensure users' data is backed up. C. Snapshot the EBS volume and re-deploy the application server as a larger instance type. D. Use autoscaling to deploy additional application server instances when load is high.

B. Take frequent snapshots of the EBS volume during business hours to ensure users' data is backed up. Answer - B Option A is incorrect (i.e. it is a valid recommendation) because periodically writing users' unsaved data to disk helps preserve it if the instance crashes. Option B is CORRECT (i.e. it is the one you would not recommend) because frequent snapshots taken during business hours do not capture data that has not yet been written to the volume, and AWS strongly recommends either detaching the volume or freezing all writes before taking a snapshot; this does not solve the stated problem. Option C is incorrect because using a larger instance type mitigates the problem of the instance running out of CPU. Option D is incorrect because Auto Scaling can deploy additional application server instances when load is high, so a single instance is not overwhelmed. For more information on EBS snapshots, please refer to the below URLs: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSSnapshots.html https://forums.aws.amazon.com/thread.jspa?threadID=92160

You have launched an EC2 instance with four (4) 500 GB EBS Provisioned IOPS volumes attached. The EC2 instance is EBS-Optimized and supports 500 Mbps throughput between EC2 and EBS. The EBS volumes are configured as a single RAID 0 device, and each Provisioned IOPS volume is provisioned with 4,000 IOPS (4,000 16KB reads or writes) for a total of 16,000 random IOPS on the instance. The EC2 instance initially delivers the expected 16,000 IOPS random read and write performance. Sometime later, in order to increase the total random I/O performance of the instance, you add an additional two 500 GB EBS Provisioned IOPS volumes to the RAID. Each volume is provisioned to 4,000 IOPS like the original four, for a total of 24,000 IOPS on the EC2 instance. Monitoring shows that the EC2 instance CPU utilization increased from 50% to 70%, but the total random IOPS measured at the instance level did not increase at all. What is the problem and a valid solution? A. Larger storage volumes support higher Provisioned IOPS rates; hence, increase the provisioned volume storage of each of the 6 EBS volumes to 1TB. B. The EBS-Optimized throughput limits the total IOPS that can be utilized; hence, use an EBS-Optimized instance that provides larger throughput. C. Small block sizes cause performance degradation, limiting the I/O throughput; hence, configure the instance device driver and file system to use 64KB blocks to increase throughput. D. RAID 0 only scales linearly to about 4 devices; use RAID 0 with 4 EBS Provisioned IOPS volumes but increase each Provisioned IOPS EBS volume to 6,000 IOPS. E. The standard EBS instance root volume limits the total IOPS rate; hence, change the instance root volume to also be a 500GB 4,000 Provisioned IOPS volume.

B. The EBS-Optimized throughput limits the total IOPS that can be utilized; hence, use an EBS-Optimized instance that provides larger throughput. Answer - B Option A is incorrect because increasing the volume size is not sufficient; you will not get the higher IOPS unless the volumes are attached to an EBS-optimized instance type with larger dedicated throughput (e.g. 8xlarge or higher). Option B is CORRECT because EC2 instance types have a limit on their dedicated EBS throughput, and an 8xlarge or larger instance type would be required to drive 24,000 or more IOPS. Option C is incorrect because changing the block size will not increase the IOPS. Option D is incorrect because the reasoning given for the issue (RAID 0 only scaling to 4 volumes) is not true. Option E is incorrect because the instance already has a sufficient number of volumes with 4,000 PIOPS attached, so changing the root volume to a 4,000 PIOPS volume will not help. More information on the topic from the AWS documentation: Launching an instance that is EBS-optimized provides you with a dedicated connection between your EC2 instance and your EBS volumes. However, it is still possible to provision EBS volumes that exceed the available bandwidth for certain instance types, especially when multiple volumes are striped in a RAID configuration. Be sure to choose an EBS-optimized instance that provides more dedicated EBS throughput than your application needs; otherwise, the Amazon EBS to Amazon EC2 connection will become a performance bottleneck. For more information on IOPS, please visit the link below: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-ec2-config.html "In order to get the most performance out of your EBS volumes, you should attach them to an instance with enough bandwidth to support your volumes, such as an EBS-optimized instance or an instance with 10 Gigabit network connectivity. This is especially important when you stripe multiple volumes together in a RAID configuration."
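A back-of-the-envelope calculation, with illustrative numbers only, showing why the aggregate volume IOPS can outrun the instance's dedicated EBS bandwidth, which is the point behind option B.

# Rough arithmetic: 24,000 IOPS at 16 KB per I/O needs far more throughput
# than a modest EBS-optimized pipe can carry, so the instance, not the
# volumes, becomes the bottleneck.
iops = 24_000
io_size_kb = 16

required_mb_per_s = iops * io_size_kb / 1024   # ~375 MB/s
required_mbps = required_mb_per_s * 8          # ~3,000 Mbps

print(f"Required throughput: ~{required_mbps:,.0f} Mbps")
# Compare this against the instance's dedicated EBS bandwidth (hundreds of
# Mbps on smaller EBS-optimized types vs several Gbps on the largest ones)
# to see why a larger EBS-optimized instance is needed.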

You decide to create a bucket on AWS S3 called 'mybucket' and then perform the following actions in the order that they are listed here. - You upload a file to the bucket called 'file1' - You enable versioning on the bucket - You upload a file called 'file2' - You upload a file called 'file3' - You upload another file called 'file2' Which of the following is true for 'mybucket'? Choose the correct option from the below: A. There will be 1 version ID for file1, there will be 2 version IDs for file2 and 1 version ID for file3 B. The version ID for file1 will be null, there will be 2 version IDs for file2 and 1 version ID for file3 C. There will be 1 version ID for file1, the version ID for file2 will be null and there will be 1 version ID for file3 D. All file version ID's will be null because versioning must be enabled before uploading objects to 'mybucket'

B. The version ID for file1 will be null, there will be 2 version IDs for file2 and 1 version ID for file3 Answer - B Objects stored in your bucket before you set the versioning state have a version ID of null. When you enable versioning, existing objects in your bucket do not change. What changes is how Amazon S3 handles the objects in future requests. Option A is incorrect because the version ID for file1 would be null.Option B is CORRECT because the file1 was put in the bucket before the versioning was enabled; hence, it will have null version ID. The file2 will have two version IDs, and file3 will have a single version ID.Option C is incorrect because file2 cannot have a null version ID as the versioning was enabled before putting it in the bucket.Option D is incorrect because once the versioning is enabled, all the files put after that will not have null version ID. But file1 was put before versioning was enabled, so it will have null as its version ID. For more information on S3 versioning, please visit the below link http://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html
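A hedged boto3 sketch (bucket and key names are hypothetical) that reproduces the scenario: an object uploaded before versioning is enabled keeps a null version ID, while later uploads receive real version IDs.

import boto3

s3 = boto3.client("s3")
bucket = "mybucket-example"   # hypothetical bucket name

s3.put_object(Bucket=bucket, Key="file1", Body=b"v0")   # before versioning -> null version ID

s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

s3.put_object(Bucket=bucket, Key="file2", Body=b"v1")
s3.put_object(Bucket=bucket, Key="file3", Body=b"v1")
s3.put_object(Bucket=bucket, Key="file2", Body=b"v2")   # second version of file2

versions = s3.list_object_versions(Bucket=bucket)
for v in versions.get("Versions", []):
    print(v["Key"], v["VersionId"])   # file1 prints "null"; file2 shows two version IDs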

You have integrated 2 subsystems (front end and back end) with an HTTP interface into one large system. These subsystems don't store any state inside; all of the state information is stored in a DynamoDB table. You have launched each of the subsystems from separate AMIs. During testing, some of these servers malfunction and issue malformed requests that do not meet the HTTP specification of the client. Your developers fix the issue and need to deploy the fix to the subsystems as soon as possible without service disruption. What are the 3 most effective options from the below to deploy these fixes and ensure that healthy instances are redeployed? A. Use VPC. B. Use AWS OpsWorks auto healing for both the front end and back end instance pair. C. Use Elastic Load Balancing in front of the front-end system and Auto Scaling to keep the specified number of instances. D. Use Elastic Load Balancing in front of the back-end system and Auto Scaling to keep the specified number of instances. E. Use Amazon CloudFront to access the front end server with origin fetch. F. Use Amazon SQS between the front end and back end subsystems.

B. Use AWS OpsWorks auto healing for both the front end and back end instance pair. C. Use Elastic Load Balancing in front of the front-end system and Auto Scaling to keep the specified number of instances. D. Use Elastic Load Balancing in front of the back-end system and Auto Scaling to keep the specified number of instances. Answer - B, C, and D Option A is incorrect because the instances should already be in a VPC, and even if they are not, this option does not help fix the issue. Option B is CORRECT because auto healing will bring the instances back up with the healthy configuration defined at the layer. Please see the "More information" section below. Options C and D are CORRECT because with Auto Scaling you can suspend processes, apply the patches, and then resume the processes so that the instances are registered with the ELB again and the specified number of healthy instances is maintained. Option E is incorrect because deploying CloudFront is not needed in this situation. Option F is incorrect because if you deploy SQS, even the malformed requests will get queued and processed later; you should be avoiding that. More information on auto healing in OpsWorks: Auto healing is an excellent feature of OpsWorks and provides disaster recovery within a stack. All OpsWorks instances have an agent installed which not only installs and configures each instance using Chef, but also updates OpsWorks with resource utilization information. If auto healing is enabled at the layer, and one or more instances experiences a health-related issue where the polling stops, OpsWorks will heal the instance. When OpsWorks heals an instance, it first terminates the problem instance, and then starts a new one as per the layer configuration. Because the configuration is pulled from the layer, the new instance will be set up exactly as the old instance which has just been terminated. For more information on auto healing, please refer to the below link http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-autohealing.html For more information on the suspension process in Auto Scaling, please refer to the below link http://docs.aws.amazon.com/autoscaling/latest/userguide/as-suspend-resume-processes.html

An Enterprise customer is starting their migration to the cloud, their main reason for migrating is agility, and they want to make their internal Microsoft Active Directory available to any applications running on AWS; this is so internal users only have to remember one set of credentials and as a central point of user control for leavers and joiners. How could they make their Active Directory secure, and highly available, with minimal on-premises infrastructure changes, in the most cost and time-efficient way? Choose the most appropriate option from the below: A. Using Amazon Elastic Compute Cloud (EC2), they could create a DMZ using a security group; within the security group they could provision two smaller Amazon EC2 instances that are running Openswan for resilient IPSec tunnels, and two larger instances that are domain controllers; they would use multiple Availability Zones. B. Using VPC, they could create an extension to their data center and make use of resilient hardware IPSec tunnels; they could then have two domain controller instances that are joined to their existing domain and reside within different subnets, in different Availability Zones. C. Within the customer's existing infrastructure, they could provision new hardware to run Active Directory Federation Services; this would present Active Directory as a SAML2 endpoint on the internet; any new application on AWS could be written to authenticate using SAML2 D. The customer could create a stand-alone VPC with its own Active Directory Domain Controllers; two domain controller instances could be configured, one in each Availability Zone; new applications would authenticate with those domain controllers.

B. Using VPC, they could create an extension to their data center and make use of resilient hardware IPSec tunnels; they could then have two domain controller instances that are joined to their existing domain and reside within different subnets, in different Availability Zones. Answer - B Option A is incorrect because it is simply a complicated environment to set up and does not meet the purpose of the requirement. Option B is CORRECT because the IPSec tunnels encrypt all the traffic between the on-premises network and AWS, and the domain controllers in separate AZs provide high availability. Option C is incorrect because the question mentions that they want minimal changes to the on-premises environment. Option D is incorrect because a stand-alone directory does not extend the existing Active Directory, so internal users would not have a single set of credentials, and it does not address secure communication from on-premises to AWS. For more information on creating VPN tunnels using hardware VPN and virtual private gateways, please refer to the below link http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_VPN.html

You are deploying your first EC2 instance in AWS and are using the AWS console to do this. You have chosen your AMI and your instance type and have now come to the screen where you configure your instance details. One of the things that you need to decide is whether you want to auto-assign a public IP address or not. You assume that if you do not choose this option you will be able to assign an Elastic IP address later, which happens to be a correct assumption. Which of the below options best describes why an Elastic IP address would be preferable to a public IP address? Choose the correct option from the below: A. An Elastic IP address is free, whilst you must pay for a public IP address. B. With an Elastic IP address, you can mask the failure of an instance or software by rapidly remapping the address to another instance in your account. C. You can have an unlimited amount of Elastic IP addresses, however public IP addresses are limited in number. D. An Elastic IP address cannot be accessed from the internet like a public IP address and hence is safer from a security standpoint.

B. With an Elastic IP address, you can mask the failure of an instance or software by rapidly remapping the address to another instance in your account. Answer - B Option A is incorrect because public IP addresses are free. Option B is CORRECT because in case of an instance failure, you can reassign the EIP to a new instance, thus you do not need to change any reference to the IP address in your application. Option C is incorrect because the number of EIPs per account per region is limited (5). Option D is incorrect because EIPs are accessible from the internet. More information on EIPs: An Elastic IP address is a static IPv4 address designed for dynamic cloud computing. An Elastic IP address is associated with your AWS account. With an Elastic IP address, you can mask the failure of an instance or software by rapidly remapping the address to another instance in your account. An Elastic IP address is a public IPv4 address, which is reachable from the internet. If your instance does not have a public IPv4 address, you can associate an Elastic IP address with your instance to enable communication with the internet; for example, to connect to your instance from your local computer. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html
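A minimal boto3 sketch (the allocation and instance IDs are placeholders) of the remapping described above: moving an Elastic IP from a failed instance to a healthy one so clients keep using the same address.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder IDs: the Elastic IP's allocation ID and the healthy
# replacement instance it should now point to.
ALLOCATION_ID = "eipalloc-0123456789abcdef0"
HEALTHY_INSTANCE_ID = "i-0fedcba9876543210"

# Re-associating the address masks the failure of the old instance;
# AllowReassociation lets the EIP move even if it is currently attached.
ec2.associate_address(
    AllocationId=ALLOCATION_ID,
    InstanceId=HEALTHY_INSTANCE_ID,
    AllowReassociation=True,
)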

You are writing an AWS CloudFormation template and you want to assign values to properties that will not be available until runtime. You know that you can use intrinsic functions to do this but are unsure as to which part of the template they can be used in. Which of the following is correct in describing how you can currently use intrinsic functions in an AWS CloudFormation template? Choose an option from the below: A. You can use intrinsic functions in any part of a template. B. You can only use intrinsic functions in specific parts of a template. You can use intrinsic functions in resource properties, metadata attributes, and update policy attributes. C. You can use intrinsic functions only in the resource properties part of a template. D. You can use intrinsic functions in any part of a template, except AWSTemplateFormatVersion and Description.

B. You can only use intrinsic functions in specific parts of a template. You can use intrinsic functions in resource properties, metadata attributes, and update policy attributes. Answer - B As per AWS documentation:You can use intrinsic functions only in specific parts of a template. Currently, you can use intrinsic functions in resource properties, outputs, metadata attributes, and update policy attributes. You can also use intrinsic functions to conditionally create stack resources. Hence, B is the correct answer. For more information on intrinsic function please refer to the below link http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference.html
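An illustrative template fragment, expressed as a Python dictionary purely for readability (the logical resource names are hypothetical), showing two of the places intrinsic functions are allowed: Ref inside a resource property and Fn::GetAtt inside an output.

import json

# Minimal template fragment: Ref and Fn::GetAtt appear in resource
# properties and outputs, which CloudFormation permits.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "AppVpc": {
            "Type": "AWS::EC2::VPC",
            "Properties": {"CidrBlock": "10.0.0.0/16"},
        },
        "AppSubnet": {
            "Type": "AWS::EC2::Subnet",
            "Properties": {
                "VpcId": {"Ref": "AppVpc"},   # Ref resolved at stack runtime
                "CidrBlock": "10.0.1.0/24",
            },
        },
    },
    "Outputs": {
        "DefaultSecurityGroup": {
            # Fn::GetAtt pulls a runtime attribute of another resource.
            "Value": {"Fn::GetAtt": ["AppVpc", "DefaultSecurityGroup"]},
        },
    },
}

print(json.dumps(template, indent=2))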

One of your requirements is to setup an S3 bucket to store your files like documents and images. However, those objects should not be directly accessible via the S3 URL; they should only be accessible from pages on your website so that only your paying customers can see them. How could you implement this? Choose the correct option from the below: A. Use HTTPS endpoints to encrypt your data. B. You can use a bucket policy and check for the aws:Referer key in a condition, where that key matches your domain C. You can't. The S3 URL must be public in order to use it on your website. D. You can use server-side and client-side encryption, where only your application can decrypt the objects

B. You can use a bucket policy and check for the aws:Referer key in a condition, where that key matches your domain Answer - B Suppose you have a website with the domain name (www.example.com or example.com) with links to photos and videos stored in your S3 bucket, examplebucket. By default, all S3 resources are private, so only the AWS account that created the resources can access them. To allow read access to these objects from your website, you can add a bucket policy that allows s3:GetObject permission with a condition, using the aws:Referer key, that the GET request must originate from specific web pages. Option A is incorrect because an HTTPS endpoint only encrypts data in transit; it does not restrict who can access the content. Option B is CORRECT because it defines an appropriate bucket policy that limits access to the S3 content to requests coming from your website pages. Option C is incorrect because you can control access to the S3 content via a bucket policy. Option D is incorrect because the question is not about encrypting/decrypting the data; to restrict access to the S3 content, a proper bucket policy needs to be defined. For more information on S3 bucket policy examples, please visit the link http://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html
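A hedged boto3 sketch (the bucket name and domain are placeholders) of the kind of bucket policy described above, allowing s3:GetObject only when the request's Referer header matches your site.

import json
import boto3

s3 = boto3.client("s3")
bucket = "examplebucket"   # placeholder bucket name

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowGetFromMySitePages",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
        # Only serve objects when the request was referred by the website.
        "Condition": {"StringLike": {"aws:Referer": ["https://www.example.com/*"]}},
    }],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))

Note that the Referer header can be spoofed by a determined client, so this policy deters casual hot-linking rather than providing strong authentication.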

You have deployed a web application targeting a global audience across multiple AWS Regions under the domain name example.com. You decide to use Route53 Latency-Based Routing to serve web requests to the users from the region closest to them. To provide business continuity in the event of server downtime, you configure weighted record sets associated with two web servers in separate Availability Zones per region. During a DR test you notice that when you disable all web servers in one of the regions, Route53 does not automatically direct all users to the other region. What could be happening? (Choose 2 answers) A. Latency resource record sets cannot be used in combination with weighted resource record sets. B. You did not setup an HTTP health check to one or more of the weighted resource record sets associated with the disabled web servers. C. The value of the weight associated with the latency alias resource record set in the region with the disabled servers is higher than the weight for the other region. D. One of the two working web servers in the other region did not pass its HTTP health check. E. You did not set "Evaluate Target Health" to "Yes" on the latency alias resource record set associated with example.com in the region where you disabled the servers.

B. You did not setup an HTTP health check to one or more of the weighted resource record sets associated with the disabled web servers. E. You did not set "Evaluate Target Health" to "Yes" on the latency alias resource record set associated with example.com in the region where you disabled the servers. Answer - B & E Option A is incorrect because latency and weighted record sets can be combined in this kind of chained configuration. Option B is CORRECT because if no HTTP health check is associated with the weighted resource record sets of the disabled web servers, Route 53 considers them healthy and continues to return them in responses. Once health checks are in place, the failed checks cause requests to be routed to the other region. Option C is incorrect because weights are only compared between records within a region; even if the weight is lower for the region with disabled web servers, Route 53 still routes the closest users to that region because it evaluates the latency record set first. Option D is incorrect because if one of the two working web servers in the other region fails its health check, the remaining healthy server would still receive the traffic. Option E is CORRECT because if "Evaluate Target Health" is not set to "Yes" on the latency alias record set for the region containing the disabled web servers, Route 53 considers that record set healthy and continues to route traffic to it. For more information on how Amazon Route 53 chooses records when health checking is configured, please visit the links below: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/health-checks-how-route-53-chooses-records.html https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-complex-configs.html#dns-failover-complex-configs-eth-no

An organization has added 3 of its AWS accounts to consolidated billing. One of the AWS accounts has purchased a Reserved Instance (RI) of a small instance size in the US-East-1a zone. All other AWS accounts are running instances of a small size in the same zone. What will happen in this case for the RI pricing? A. Only the account that has purchased the RI will get the advantage of RI pricing. B. One instance of a small size and running in the US-East-1a zone of each AWS account will get the benefit of RI pricing. C. Any single instance from all the three accounts can get the benefit of AWS RI pricing if they are running in the same zone and are of the same size. D. If there are more than one instances of a small size running across multiple accounts in the same zone no one will get the benefit of RI.

C. Any single instance from all the three accounts can get the benefit of AWS RI pricing if they are running in the same zone and are of the same size. Answer - C Option A is incorrect because the price benefit of the reserved instances is applicable to all the accounts that are part of the consolidated billing group, not just the payer account (or the account that has reserved the instance) - for the total number of instances reserved. Option B is incorrect because, since only a single instance is reserved, any one instance across all the accounts will receive the reserved instance price benefit.Option C is CORRECT because the reserved price benefit will be applied to a single EC2 instance across all the accounts. Option D is incorrect because the total number of instances that will receive the cost benefit will be same as the total number of reserved instances (in this case it's one). More information on Consolidated Billing:As per the AWS documentation for billing purposes, AWS treats all the accounts on the consolidated bill as if they were one account. Some services, such as Amazon EC2 and Amazon S3, have volume pricing tiers across certain usage dimensions that give you lower prices when you use the service more. With consolidated billing, AWS combines the usage from all accounts to determine which volume pricing tiers to apply, giving you a lower overall price whenever possible. For more information on Consolidated billing, please visit the URL: http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/consolidated-billing.html

There is a requirement for a high availability and disaster recovery plan for an organization. Below are the key points for this plan: once stored successfully, the data should not be lost (this is the key requirement), and recovery time can be long if that saves on cost. Which of the following options would be the best one for this corporation, given the concerns that they have outlined to you above? Choose the correct answer from the below options: A. Make sure you have RDS set up as an asynchronous Multi-AZ deployment, which automatically provisions and maintains an asynchronous "standby" replica in a different Availability Zone. B. Set up a number of smaller instances in a different region, which all have Auto Scaling and Elastic Load Balancing enabled. If there is a network outage, then these instances will auto scale up. As long as spot instances are used and the instances are small this should remain a cost effective solution. C. Backup and restore with S3 should be considered due to the low cost of S3 storage. Back up frequently; the data can be sent to S3 using either Direct Connect or Storage Gateway, or over the Internet. D. Set up pre-configured servers using Amazon Machine Images. Use an Elastic IP and Route 53 to quickly switch over to your new infrastructure if there are any problems when you run your health checks.

C. Backup and restore with S3 should be considered due to the low cost of S3 storage. Back up frequently; the data can be sent to S3 using either Direct Connect or Storage Gateway, or over the Internet. Answer - C Option A is incorrect because, while Multi-AZ RDS helps preserve data, it is a comparatively high-cost option since you need to keep a standby environment running. Option B is incorrect because it addresses compute availability during a network outage rather than preventing data loss. Option C is CORRECT because S3 provides durable, highly available, secure, and low-cost storage, which matches the requirements of no data loss with a tolerant recovery time. Option D is incorrect because it talks about AMIs but not about the underlying data on EBS storage, which still needs to be backed up. More information about Amazon S3: Amazon S3 is storage for the Internet. It's a simple storage service that offers software developers a highly-scalable, reliable, and low-latency data storage infrastructure at very low costs. For more information on S3 please refer to the below link http://docs.aws.amazon.com/AmazonS3/latest/dev/Welcome.html

A company is running a batch analysis every hour on their main transactional DB running on an RDS MySQL instance to populate their central Data Warehouse running on Redshift. During the execution of the batch, their transactional applications are very slow. When the batch completes, they need to update the top management dashboard with the new data. The dashboard is produced by another system running on-premises that is currently started when a manually-sent email notifies that an update is required. The on-premises system cannot be modified because it is managed by another team. How would you optimize this scenario to solve performance issues and automate the process as much as possible? A. Replace RDS with Redshift for the batch analysis and SNS to notify the on-premises system to update the dashboard. B. Replace RDS with Redshift for the batch analysis and SQS to send a message to the on-premises system to update the dashboard. C. Create an RDS Read Replica for the batch analysis and SNS to notify the on-premises system to update the dashboard. D. Create an RDS Read Replica for the batch analysis and SQS to send a message to the on-premises system to update the dashboard.

C. Create an RDS Read Replica for the batch analysis and SNS to notify the on-premises system to update the dashboard. Answer - C There are two architectural considerations here: (1) you need to improve read performance by reducing the load on the RDS MySQL instance, and (2) you need to automate the process of notifying the on-premises system. When a scenario asks you to improve the read performance of a DB instance, look for options such as ElastiCache or Read Replicas; when it asks you to automate a notification process, think of SNS. Option A is incorrect because Redshift is used for OLAP scenarios whereas RDS is used for OLTP scenarios; replacing RDS with Redshift is not a solution. Option B is incorrect for the same reason, and SQS would additionally require the on-premises system to poll the queue, which cannot be done without modifying it. Option C is CORRECT because (a) it uses a Read Replica, which offloads the batch analysis and improves read performance on the primary, and (b) it uses SNS, which automates the process of notifying the on-premises system to update the dashboard. Option D is incorrect because SQS is not a notification service; the on-premises system would have to be modified to poll the queue. For more information on Read Replicas, please visit the below link https://aws.amazon.com/rds/details/read-replicas/
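A minimal sketch of the notification step (the topic ARN is a placeholder): after the batch on the read replica finishes, publish to an SNS topic so the subscribed on-premises dashboard system is triggered automatically instead of by a manual email.

import boto3

sns = boto3.client("sns", region_name="us-east-1")

# Placeholder topic ARN; the on-premises system subscribes to this topic,
# for example via an HTTPS endpoint subscription.
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:batch-complete"

def notify_batch_complete(batch_id):
    sns.publish(
        TopicArn=TOPIC_ARN,
        Subject="Hourly batch complete",
        Message=f"Batch {batch_id} finished; dashboard data is ready to refresh.",
    )

notify_batch_complete("2024-01-01T10:00")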

A gaming company adopted AWS Cloud Formation to automate load-testing of their games. They have created an AWS Cloud Formation template for each gaming environment, including one for the load-testing stack. The load-testing stack creates an Amazon Relational Database Service (RDS) Postgres database and two web servers running on Amazon Elastic Compute Cloud (EC2) that send HTTP requests, measure response times, and write the results into the database. A test run usually takes between 15 and 30 minutes. Once the tests are done, the AWS Cloud Formation stacks are torn down immediately. The test results written to the Amazon RDS database must remain accessible for visualization and analysis. Select possible solutions that allow access to the test results after the AWS Cloud Formation load-testing stack is deleted. Choose 2 options from the below: A. Define an Amazon RDS Read-Replica in the load-testing AWS CloudFormation stack and define a dependency relation between master and replica via the DependsOn attribute. B. Define an update policy to prevent deletion of the Amazon RDS database after the AWS CloudFormation stack is deleted. C. Define a deletion policy of type Retain for the Amazon RDS resource to assure that the RDS database is not deleted with the AWS CloudFormation stack. D. Define a deletion policy of type Snapshot for the Amazon RDS resource to assure that the RDS database can be restored after the AWS CloudFormation stack is deleted. E. Define automated backups with a backup retention period of 30 days for the Amazon RDS database and perform point-in-time recovery of the database after the AWS CloudFormation stack is deleted.

C. Define a deletion policy of type Retain for the Amazon RDS resource to assure that the RDS database is not deleted with the AWS CloudFormation stack. D. Define a deletion policy of type Snapshot for the Amazon RDS resource to assure that the RDS database can be restored after the AWS CloudFormation stack is deleted. Answer - C and D Option A is incorrect because (a) a read replica is not needed in this scenario, and (b) it would be deleted along with the stack anyway, so defining a dependency in the template does not help. Option B is incorrect because the UpdatePolicy attribute only applies to certain resources such as an AutoScalingGroup or an AWS Lambda alias; it is not applicable to RDS and does not control deletion. Option C is CORRECT because, with the Retain deletion policy, the RDS instance is preserved for visualization and analysis after the stack is deleted. Option D is CORRECT because, with the Snapshot deletion policy, a snapshot of the RDS instance is created before deletion and can be restored later for visualization and analysis. Option E is incorrect because automated backups are not needed in this case; all that is needed is a single snapshot of the RDS instance after the test finishes, which the Snapshot deletion policy provides. NOTE: This question asks for two possible answers; it does not say that both need to be used at the same time. Hence both C and D are valid options. For more information on deletion policy, please visit the below URL: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-deletionpolicy.html
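An illustrative template fragment, expressed as a Python dictionary for readability (the resource name and properties are hypothetical), showing where the DeletionPolicy attribute sits on an RDS resource.

import json

# DeletionPolicy sits at the resource level, next to Type/Properties.
# "Snapshot" takes a final DB snapshot when the stack is deleted;
# "Retain" would leave the DB instance itself in place.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "LoadTestDb": {
            "Type": "AWS::RDS::DBInstance",
            "DeletionPolicy": "Snapshot",   # or "Retain"
            "Properties": {
                "Engine": "postgres",
                "DBInstanceClass": "db.t3.medium",
                "AllocatedStorage": "20",
                "MasterUsername": "loadtest",
                "MasterUserPassword": "ChangeMe123!",   # illustrative only
            },
        },
    },
}

print(json.dumps(template, indent=2))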

Which of the following must be done while generating a pre-signed URL in S3 in order to ensure that the user who is given the pre-signed URL has the permission to upload the object? A. Ensure the user has write permission to S3. B. Ensure the user has read permission to S3. C. Ensure that the person who has created the pre-signed URL has the permission to upload the object to the appropriate S3 bucket. D. Create a Cloudfront distribution.

C. Ensure that the person who has created the pre-signed URL has the permission to upload the object to the appropriate S3 bucket. Answer - C Option A is incorrect because the permissions of the user receiving the URL are not what matter; if the person who created the pre-signed URL does not have write permission to S3, whoever is given the URL will not be able to upload either. Option B is incorrect because read permission is not relevant to uploading; again, what matters is the creator's permission for the operation. Option C is CORRECT because in order to successfully upload an object to S3, the pre-signed URL must be created by someone who has permission to perform the operation that the pre-signed URL is based upon. Option D is incorrect because a CloudFront distribution is not needed in this scenario. For more information on pre-signed URLs, please visit the below URL http://docs.aws.amazon.com/AmazonS3/latest/dev/PresignedUrlUploadObject.html
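A minimal boto3 sketch (bucket, key, and file name are placeholders) of generating a pre-signed upload URL; the URL carries the permissions of the credentials that signed it, which is why the creator must be allowed to put the object.

import boto3
import requests   # assumed available; any HTTP client would work

s3 = boto3.client("s3")

# The signer's credentials must allow s3:PutObject on this bucket/key,
# otherwise the upload via the URL will be denied.
url = s3.generate_presigned_url(
    ClientMethod="put_object",
    Params={"Bucket": "examplebucket", "Key": "uploads/report.pdf"},
    ExpiresIn=3600,   # URL valid for one hour
)

# Whoever receives the URL can now upload without AWS credentials of their own.
with open("report.pdf", "rb") as f:
    resp = requests.put(url, data=f)
print(resp.status_code)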

There is a requirement to create EMR jobs that sift through all of the web server logs and error logs to pull statistics on clickstream and errors based off of client IP address. Given the requirements, what would be the best method for collecting the log data and analyzing it automatically? Choose the correct answer from the below options: A. If the application is using HTTP, you need to configure proxy protocol to pass the client IP address in a new HTTP header. If the application is using TCP, modify the application code to pull the client IP into the X-Forwarded-For header so the web servers can parse it. B. Configure ELB access logs then create a Data Pipeline job which imports the logs from an S3 bucket into EMR for analyzing and output the EMR data into a new S3 bucket. C. If the application is using TCP, configure proxy protocol to pass the client IP address in a new TCP header. If the application is using HTTP, modify the application code to pull the client IP from the X-Forwarded-For header so the web servers can parse it. D. Configure ELB error logs then create a Data Pipeline job which imports the logs from an S3 bucket into EMR for analyzing and outputs the EMR data into a new S3 bucket.

C. If the application is using TCP, configure proxy protocol to pass the client IP address in a new TCP header. If the application is using HTTP, modify the application code to pull the client IP from the X-Forwarded-For header so the web servers can parse it. Answer - C Option A is incorrect because (a) if the protocol is TCP, you use proxy protocol, not a new HTTP header, to pass the client IP address, and (b) the X-Forwarded-For header applies only when the protocol is HTTP. Option B is incorrect because it only covers ELB access logs and does not specify how the error logs would be collected and analyzed. Option C is CORRECT because (a) the requirement is to scan both the web server logs and error logs by client IP, and (b) a web server behind the ELB would not otherwise receive the client IP address, so it needs proxy protocol for TCP traffic or the X-Forwarded-For header for HTTP traffic. Option D is incorrect because it relies on ELB error logs (which ELB does not provide) and does not specify how the access logs would be collected and analyzed. For more information on HTTP headers and the Classic ELB, please refer to the link below: https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/x-forwarded-headers.html
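A small sketch of the HTTP side of option C, with illustrative names only: behind a Classic ELB in HTTP mode, the web server reads the original client IP from the X-Forwarded-For header (the leftmost entry) before writing its access or error log.

def client_ip_from_headers(headers, peer_addr):
    # X-Forwarded-For is "client, proxy1, proxy2, ..."; the leftmost entry
    # is the client the ELB saw. Fall back to the TCP peer address if absent.
    xff = headers.get("X-Forwarded-For", "")
    if xff:
        return xff.split(",")[0].strip()
    return peer_addr

# Example: a request that passed through the load balancer.
print(client_ip_from_headers({"X-Forwarded-For": "203.0.113.7, 10.0.2.15"},
                             "10.0.2.15"))   # -> 203.0.113.7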

You are the new IT architect in a company that operates a mobile sleep tracking application. When activated at night, the mobile app is sending collected data points of 1 kilobyte every 5 minutes to your backend. The backend takes care of authenticating the user and writing the data points into an Amazon DynamoDB table. Every morning, you scan the table to extract and aggregate last night's data on a per user basis, and store the results in Amazon S3. Users are notified via Amazon SNS mobile push notifications that new data is available, which is parsed and visualized by the mobile app. Currently you have around 100k users who are mostly based out of North America. You have been tasked to optimize the architecture of the backend system to lower cost. What would you recommend? Choose 2 answers: A. Have the mobile app access Amazon DynamoDB directly instead of JSON files stored on Amazon S3. B. Write data directly into an Amazon Redshift cluster replacing both Amazon DynamoDB and Amazon S3. C. Introduce an Amazon SQS queue to buffer writes to the Amazon DynamoDB table and reduce provisioned write throughput. D. Introduce Amazon Elasticache to cache reads from the Amazon DynamoDB table and reduce provisioned read throughput. E. Create a new Amazon DynamoDB table each day and drop the one for the previous day after its data is on Amazon S3.

C. Introduce an Amazon SQS queue to buffer writes to the Amazon DynamoDB table and reduce provisioned write throughput. E. Create a new Amazon DynamoDB table each day and drop the one for the previous day after its data is on Amazon S3. Answers - C & E Option A is incorrect because having around 100k mobile clients read directly from the DynamoDB table instead of from the JSON files on S3 would consume far more read capacity and drastically increase the cost. Option B is incorrect because running an Amazon Redshift cluster continuously for this workload would be far more expensive than DynamoDB plus S3, and Redshift is not designed for a high rate of small writes. Option C is CORRECT because (a) with SQS, the large number of overnight writes is buffered/queued, which lets you lower the provisioned write throughput (hence cutting cost), and (b) SQS can absorb a sudden high load, if any. Option D is incorrect because the users do not read directly from the DynamoDB table; the aggregated results are served from S3, so caching DynamoDB reads with ElastiCache is unnecessary. Option E is CORRECT because once the aggregated data is stored on S3, there is no point in keeping the DynamoDB tables for previous days. Keeping only the table for the latest data cuts unnecessary costs, keeping the overall cost of the solution down.
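A hedged sketch of the buffering in option C (the queue URL and table name are placeholders): the API tier enqueues each 1 KB data point, and a small worker drains the queue and writes to DynamoDB at a steady rate sized to the lower provisioned write throughput.

import json
import boto3

sqs = boto3.client("sqs")
dynamodb = boto3.resource("dynamodb")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/sleep-datapoints"  # placeholder
table = dynamodb.Table("SleepData")                                              # placeholder

def enqueue_datapoint(user_id, timestamp, payload):
    # Producer side: cheap, absorbs the overnight write burst.
    sqs.send_message(QueueUrl=QUEUE_URL,
                     MessageBody=json.dumps({"user_id": user_id,
                                             "ts": timestamp,
                                             "payload": payload}))

def drain_once():
    # Consumer side: write at a pace the reduced provisioned WCUs can absorb.
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10,
                               WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        item = json.loads(msg["Body"])
        table.put_item(Item=item)
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])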

A user has created a VPC with CIDR 20.0.0.0/16. The user has created one subnet with CIDR 20.0.0.0/16 by mistake. The user is trying to create another subnet of CIDR 20.0.1.0/24. How can the user create the second subnet? A. There is no need to update the subnet as VPC automatically adjusts the CIDR of the first subnet based on the second subnet's CIDR. B. The user can modify the first subnet CIDR from the console. C. It is not possible to create a second subnet as one subnet with the same CIDR as the VPC has been created. D. The user can modify the first subnet CIDR with AWS CLI.

C. It is not possible to create a second subnet as one subnet with the same CIDR as the VPC has been created. Answer - C Once you create a subnet, you cannot modify its CIDR; you have to delete it and recreate it. Hence options B and D are incorrect. The VPC will also refuse to create the second subnet because its CIDR (20.0.1.0/24) overlaps with the existing 20.0.0.0/16 subnet, so the user needs to delete the first subnet and recreate the subnets with non-overlapping CIDRs. For more information on VPC and subnets, please visit the link http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html

Which of the following is true about the Lambda@Edge feature in AWS? Choose an answer from the options given below: A. It is used specifically for the Edge based programming language. B. It is used for running Lambda functions at edge locations defined by S3. C. It is used for running Lambda functions at edge locations used by CloudFront. D. It can support any type of programming language.

C. It is used for running Lambda functions at edge locations used by CloudFront. Answer - C Option A is incorrect as it is not used for Edge based programming.Option B is incorrect because edge locations are part of CloudFront setup, not S3.Option C is CORRECT because Lambda@Edge allows you to run Lambda functions at the AWS edge locations in response to CloudFront events. Without Lambda@Edge, customized processing requires requests to be forwarded back to compute resources at the centralized servers. This slows down the user experience. Option D is incorrect because Lambda@Edge supports only Node.js, which is a server-side JavaScript framework. For more information on Lambda@Edge please visit the link: http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/what-is-lambda-at-edge.html

Someone on your team configured a Virtual Private Cloud with two public subnets in two separate AZs and two private subnets in two separate AZs. Each public subnet AZ has a matching private subnet AZ. The VPC and its subnets are properly configured. You also notice that there are multiple webserver instances in the private subnet, and you've been charged with setting up a public-facing Elastic Load Balancer which will accept requests from clients and distribute those requests to the webserver instances. How can you set this up without making any significant architectural changes? Choose the correct option from the below: A. Select both of the private subnets which contain the webserver instances when configuring the ELB. B. Put the webserver instances in the public subnets and then configure the ELB with those subnets. C. Select both of the public subnets when configuring the ELB. D. You can't. Webserver instances must be in public subnets in order for this to work.

C. Select both of the public subnets when configuring the ELB. Answer - C Option A is incorrect because you need to setup the internet facing load balancer, to which the public subnets need to be associated. Option B is incorrect because web servers need to remain in the private subnets. Shifting them to the public subnet would be a significant architectural change. Option C is CORRECT because you need to associate the public subnets with the internet facing load balancer. You would also need to ensure that the security group that is assigned to the load balancer has the listener ports open and the security groups of the private instances allow traffic on the listener ports and the health check ports. Option D is incorrect because you can configure the internet facing load balancer with the public subnets. For more information on the AWS ELB, please refer to the below links: https://aws.amazon.com/premiumsupport/knowledge-center/public-load-balancer-private-ec2/ https://aws.amazon.com/elasticloadbalancing/classicloadbalancer/

A user has created a mobile application which makes calls to DynamoDB to fetch certain data. The application is using the DynamoDB SDK and root account access/secret access key to connect to DynamoDB from mobile. Which of the below-mentioned statements is true with respect to the best practice for security in this scenario? A. The user should create a separate IAM user for each mobile application and provide DynamoDB access with it. B. The user should create an IAM role with DynamoDB and EC2 access. Attach the role with EC2 and route all calls from the mobile through EC2. C. The application should use an IAM role with web identity federation which validates calls to DynamoDB with identity providers, such as Google, Amazon, and Facebook. D. Create an IAM Role with DynamoDB access and attach it with the mobile application.

C. The application should use an IAM role with web identity federation which validates calls to DynamoDB with identity providers, such as Google, Amazon, and Facebook. Answer - C Option A is incorrect because creating a separate IAM user for each application user is neither feasible nor a recommended security practice. Option B is incorrect because the mobile users may not all be AWS users; you need to give access to the mobile application via a federated identity provider. Option C is CORRECT because it creates a role for federated users, which enables the users to sign in to the app using their Amazon, Facebook, or Google identity and authorizes them to seamlessly access DynamoDB. Option D is incorrect because creating an IAM role is not sufficient. You need to authenticate the users of the application via a web identity provider, then get temporary credentials via the Security Token Service (STS), and then access DynamoDB. More information on Web Identity Federation: With Web Identity Federation, you don't need to create custom sign-in code or manage your own user identities. Instead, users of your app can sign in using a well-known identity provider (IdP) such as Login with Amazon, Facebook, Google, or any other OpenID Connect (OIDC)-compatible IdP, receive an authentication token, and then exchange that token for temporary security credentials in AWS that map to an IAM role with permissions to use the resources in your AWS account. For more information on Web Identity Federation, please visit the link http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_oidc.html
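The flow described above can be sketched with boto3 as follows; the role ARN, identity-provider token, and table name are placeholder assumptions, not values defined in the question.

import boto3

# 1. Exchange the token returned by the identity provider (Amazon,
#    Facebook, Google, ...) for temporary AWS credentials. This STS
#    call is made without any long-term AWS credentials.
sts = boto3.client("sts")
resp = sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam::123456789012:role/MobileAppDynamoDBRole",  # placeholder
    RoleSessionName="mobile-app-session",
    WebIdentityToken="<token-from-identity-provider>",               # placeholder
    DurationSeconds=3600,
)
creds = resp["Credentials"]

# 2. Use the temporary credentials to talk to DynamoDB.
dynamodb = boto3.resource(
    "dynamodb",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
    region_name="us-east-1",
)
item = dynamodb.Table("UserData").get_item(Key={"UserId": "user-123"})  # placeholder table/key
print(item.get("Item"))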

A marketing research company has developed a tracking system that collects user behavior during web marketing campaigns on behalf of the customers all over the world. The tracking system consists of an auto-scaled group of EC2 instances behind an ELB. And the collected data is stored in DynamoDB. After the campaign is terminated the tracking system is torn down and the data is moved to Amazon Redshift, where it is aggregated and used to generate detailed reports. The company wants to be able to instantiate new tracking systems in any region without any manual intervention and therefore adopted CloudFormation. What needs to be done to make sure that the AWS Cloudformation template works for every AWS region? Choose 2 options from the below: A. Avoid using Deletion Policies for the EBS snapshots. B. The names of the DynamoDB tables must be different in every target region. C. Use the built-in function of Cloudformation to set the AZ attribute of the ELB resource. D. IAM users with the right to start Cloudformation stacks must be defined for every target region. E. Use the built-in Mappings and FindInMap functions of AWS Cloudformation to refer to the AMI ID set in the ImageID attribute of the Autoscaling::LaunchConfiguration resource.

C. Use the built-in function of Cloudformation to set the AZ attribute of the ELB resource. E. Use the built-in Mappings and FindInMap functions of AWS Cloudformation to refer to the AMI ID set in the ImageID attribute of the Autoscaling::LaunchConfiguration resource. Answer - C and E Option A is incorrect because you need to retain the snapshots of the EBS volumes in order to launch similar instances in the new region. Option B is incorrect because a DynamoDB table with the same name can be created in different regions; names only have to be unique within a single region. Option C is CORRECT because you need to resolve the names of the Availability Zones based on the region in which the template is used. Option D is incorrect because you do not need to define IAM users per region, as IAM is global. Option E is CORRECT because the region-specific AMI ID is needed to launch the instances in whichever region the template is used. More information on CloudFormation intrinsic functions: You can use the Fn::GetAZs function of CloudFormation to get the AZs of the region and assign them to the ELB. Examples of the Fn::GetAZs function:
{ "Fn::GetAZs" : "" }
{ "Fn::GetAZs" : { "Ref" : "AWS::Region" } }
{ "Fn::GetAZs" : "us-east-1" }
An example of Fn::FindInMap is shown below. This is useful when you want to look up region-specific values, such as AMI IDs. Since the launch configuration contains the AMI ID, and the AMI ID differs per region, the template must look it up from a map keyed by region:
{
  ...
  "Mappings" : {
    "RegionMap" : {
      "us-east-1"      : { "32" : "ami-6411e20d", "64" : "ami-7a11e213" },
      "us-west-1"      : { "32" : "ami-c9c7978c", "64" : "ami-cfc7978a" },
      "eu-west-1"      : { "32" : "ami-37c2f643", "64" : "ami-31c2f645" },
      "ap-southeast-1" : { "32" : "ami-66f28c34", "64" : "ami-60f28c32" },
      "ap-northeast-1" : { "32" : "ami-9c03a89d", "64" : "ami-a003a8a1" }
    }
  },
  "Resources" : {
    "myEC2Instance" : {
      "Type" : "AWS::EC2::Instance",
      "Properties" : {
        "ImageId" : { "Fn::FindInMap" : [ "RegionMap", { "Ref" : "AWS::Region" }, "32" ] },
        "InstanceType" : "m1.small"
      }
    }
  }
}
For more information on the Fn::FindInMap function, please refer to the below links: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference.html http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-findinmap.html

A company is making extensive use of S3. They have a strict security policy and require that all artifacts are stored securely in S3. Which of the following request headers, when specified in an API call, will cause an object to be server-side encrypted (SSE)? Choose the correct option from the below: A. AES256 B. amz-server-side-encryption C. x-amz-server-side-encryption D. server-side-encryption

C. x-amz-server-side-encryption Answer - C Server-side encryption is about protecting data at rest. Server-side encryption with Amazon S3-managed encryption keys (SSE-S3) employs strong multi-factor encryption. Amazon S3 encrypts each object with a unique key. As an additional safeguard, it encrypts the key itself with a master key that it regularly rotates. Amazon S3 server-side encryption uses one of the strongest block ciphers available, 256-bit Advanced Encryption Standard (AES-256), to encrypt your data. The object creation REST APIs (see Specifying Server-Side Encryption Using the REST API) provide a request header, x-amz-server-side-encryption, that you add to the upload request to tell S3 to encrypt the object using SSE-S3, SSE-KMS, or SSE-C. The following example shows a PUT request using SSE-S3:
PUT /example-object HTTP/1.1
Host: myBucket.s3.amazonaws.com
Date: Wed, 8 Jun 2016 17:50:00 GMT
Authorization: authorization string
Content-Type: text/plain
Content-Length: 11434
x-amz-meta-author: Janet
Expect: 100-continue
x-amz-server-side-encryption: AES256
[11434 bytes of object data]
In order to enforce object encryption, create an S3 bucket policy that denies any S3 PUT request that does not include the x-amz-server-side-encryption header. There are two possible values for the x-amz-server-side-encryption header: AES256, which tells S3 to use S3-managed keys, and aws:kms, which tells S3 to use AWS KMS-managed keys. For more information on S3 encryption, please visit the links http://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html https://aws.amazon.com/blogs/security/how-to-prevent-uploads-of-unencrypted-objects-to-amazon-s3/
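In the AWS SDKs the x-amz-server-side-encryption header is set through the ServerSideEncryption parameter; a minimal boto3 sketch follows, where the bucket and key names are placeholder assumptions.

import boto3

s3 = boto3.client("s3")

# Upload with SSE-S3: the SDK sends x-amz-server-side-encryption: AES256.
s3.put_object(
    Bucket="example-artifact-bucket",   # placeholder bucket
    Key="reports/artifact.txt",
    Body=b"sensitive artifact data",
    ServerSideEncryption="AES256",      # or "aws:kms" for SSE-KMS
)

# Verify: the stored object reports its encryption setting.
head = s3.head_object(Bucket="example-artifact-bucket", Key="reports/artifact.txt")
print(head["ServerSideEncryption"])     # expected: AES256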

Your company runs a customer-facing event registration site which is built with a 3-tier architecture with web and application tier servers, and a MySQL database. The application requires 6 web tier servers and 6 application tier servers for normal operation, but can run on a minimum of 65% server capacity and a single MySQL database. When deploying this application in a region with three availability zones (AZs), which architecture provides high availability? A. A web tier deployed across 2 AZs with 3 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling Group behind an ELB (elastic load balancer), an application tier deployed across 2 AZs with 3 EC2 instances in each AZ inside an Auto Scaling Group behind an ELB, and one RDS (Relational Database Service) instance deployed with read replicas in the other AZ. B. A web tier deployed across 3 AZs with 2 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling Group behind an ELB (elastic load balancer), an application tier deployed across 3 AZs with 2 EC2 instances in each AZ inside an Auto Scaling Group behind an ELB, and one RDS (Relational Database Service) instance deployed with read replicas in the two other AZs. C. A web tier deployed across 2 AZs with 3 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling Group behind an ELB (elastic load balancer), an application tier deployed across 2 AZs with 3 EC2 instances in each AZ inside an Auto Scaling Group behind an ELB, and a Multi-AZ RDS (Relational Database Service) deployment. D. A web tier deployed across 3 AZs with 2 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling Group behind an ELB (elastic load balancer), an application tier deployed across 3 AZs with 2 EC2 instances in each AZ inside an Auto Scaling Group behind an ELB, and a Multi-AZ RDS (Relational Database Service) deployment.

D. A web tier deployed across 3 AZs with 2 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling Group behind an ELB (elastic load balancer), an application tier deployed across 3 AZs with 2 EC2 instances in each AZ inside an Auto Scaling Group behind an ELB, and a Multi-AZ RDS (Relational Database Service) deployment. Answer - D In this scenario, the application can run on a minimum of 65% of the overall server capacity, i.e., at least 4 web and 4 application servers (65% of 6 is 3.9, rounded up to 4). Since there are 3 AZs, there are many ways the instances can be placed across them. The most important point is that even if an entire AZ becomes unavailable, there should still be at least 4 servers running per tier. Placing 3 servers in each of only 2 AZs is therefore not a good architecture, because losing one AZ would leave only 3 servers. Based on this, options A and C are incorrect. The best solution is to have 2 servers in each of the 3 AZs, so that if an entire AZ is lost the application still has 4 servers available. Regarding the RDS instance, high availability is provided by a Multi-AZ deployment, not by read replicas (although replicas improve performance for read-heavy workloads), so option B is incorrect. Hence, option D is CORRECT because (a) it places 2 EC2 instances in each of the 3 AZs, and (b) it uses a Multi-AZ deployment of RDS. A sketch of the web tier's Auto Scaling group is shown below.
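A rough boto3 sketch of the web tier's Auto Scaling group from option D, spread over subnets in all three AZs; the launch configuration name, subnet IDs, and ELB name are placeholder assumptions.

import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# 6 web servers spread across subnets in three AZs (2 per AZ on average).
# Losing one AZ still leaves about 4 instances, which satisfies the 65% minimum.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-tier-asg",
    LaunchConfigurationName="web-tier-lc",                 # placeholder
    MinSize=6,
    MaxSize=9,
    DesiredCapacity=6,
    VPCZoneIdentifier="subnet-az1,subnet-az2,subnet-az3",  # one subnet per AZ (placeholders)
    LoadBalancerNames=["web-tier-elb"],                    # attach to the ELB (placeholder)
    HealthCheckType="ELB",
    HealthCheckGracePeriod=300,
)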

Which of the following are some of the best examples where Amazon Kinesis can be used? A. Accelerated log and data feed intake B. Real-time metrics and reporting C. Real-time data analytics D. All of the above

D. All of the above Answer - D The following are typical scenarios for using Kinesis Data Streams:
Accelerated log and data feed intake and processing: You can have producers push data directly into a stream. For example, push system and application logs and they are available for processing in seconds. This prevents the log data from being lost if the front end or application server fails. Kinesis Data Streams provides accelerated data feed intake because you don't batch the data on the servers before you submit it for intake.
Real-time metrics and reporting: You can use data collected into Kinesis Data Streams for simple data analysis and reporting in real time. For example, your data-processing application can work on metrics and reporting for system and application logs as the data is streaming in, rather than wait to receive batches of data.
Real-time data analytics: This combines the power of parallel processing with the value of real-time data. For example, process website clickstreams in real time, and then analyze site usability engagement using multiple different Kinesis Data Streams applications running in parallel.
Complex stream processing: You can create Directed Acyclic Graphs (DAGs) of Amazon Kinesis Data Streams applications and data streams. This typically involves putting data from multiple Amazon Kinesis Data Streams applications into another stream for downstream processing by a different Amazon Kinesis Data Streams application.
For more information on Kinesis, please refer to the below URL: https://docs.aws.amazon.com/streams/latest/dev/introduction.html
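For instance, a web front end could push each clickstream event into a stream with a call like the following boto3 sketch; the stream name and event payload are assumptions, not part of the question.

import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

event = {"userId": "user-123", "page": "/products/42", "action": "click"}

# One record per click; the partition key spreads events across shards.
kinesis.put_record(
    StreamName="clickstream",                 # placeholder stream name
    Data=json.dumps(event).encode("utf-8"),
    PartitionKey=event["userId"],
)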

You are managing the AWS account of a big organization. The organization has more than 1,000 employees and wants to provide access to various AWS services to most of them. Which of the below-mentioned options is the best possible solution in this case? A. The user should create a separate IAM user for each employee and provide access to them as per the policy. B. The user should create an IAM role and attach STS with the role. The user should attach that role to the EC2 instance and setup AWS authentication on that server. C. The user should create IAM groups as per the organization's departments and add each user to the group for better access control. D. Attach an IAM role with the organization's authentication service to authorize each user for various AWS services.

D. Attach an IAM role with the organization's authentication service to authorize each user for various AWS services. Answer - D The best practice for IAM is to create roles which have specific access to AWS services and then grant users permission to those services via the roles. Option A is incorrect because creating a separate IAM user for each employee is not a feasible solution here; creating an IAM role would be a more appropriate solution. Option B is incorrect because this is an invalid workflow for using IAM roles to authenticate users. Option C is incorrect because you should be creating IAM roles rather than IAM users to be added to IAM groups. Option D is CORRECT because it authenticates the users with the organization's authentication service and uses an appropriate IAM role for accessing the AWS services. For the best practices on IAM policies, please visit the link http://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html

You've been brought in as solutions architect to assist an enterprise customer with their migration of an e-commerce platform to Amazon Virtual Private Cloud (VPC). The previous architect has already deployed a 3-tier VPC. The configuration is as follows:
VPC: vpc-2f8bc447
IGW: igw-2d8bc445
NACL: acl-208bc448
Subnets:
Web servers: subnet-258bc44d
Application servers: subnet-248bc44c
Database servers: subnet-9189c6f9
Route Tables: rtb-218bc449, rtb-238bc44b
Associations:
subnet-258bc44d: rtb-218bc449
subnet-248bc44c: rtb-238bc44b
subnet-9189c6f9: rtb-238bc44b
You are now ready to begin deploying EC2 instances into the VPC. Web servers must have direct access to the internet; application and database servers cannot have direct access to the internet. Which configuration below will allow you the ability to remotely administer your application and database servers, as well as allow these servers to retrieve updates from the Internet? A. Create a bastion and NAT Instance in subnet-258bc44d and add a route from rtb-238bc44b to subnet-258bc44d. B. Add a route from rtb-238bc44b to igw-2d8bc445 and add a bastion and NAT instance within Subnet-248bc44c. C. Create a Bastion and NAT Instance in subnet-258bc44d. Add a route from rtb-238bc44b to igw-2d8bc445, and a new NACL that allows access between subnet-258bc44d and subnet-248bc44c. D. Create a Bastion and NAT instance in subnet-258bc44d and add a route from rtb-238bc44b to the NAT instance.

D. Create a Bastion and NAT instance in subnet-258bc44d and add a route from rtb-238bc44b to the NAT instance. Answer - D Option A is incorrect because the route should point to the NAT instance, not to a subnet. Option B is incorrect because adding the IGW to route table rtb-238bc44b would expose the application and database servers to the internet; the bastion and NAT should be in the public subnet. Option C is incorrect because the route should point to the NAT instance and not to the Internet Gateway, else the private subnets would be internet accessible. Option D is CORRECT because the bastion and NAT instance should be in the public subnet. Since the web servers have direct access to the internet, subnet-258bc44d is the public subnet and route table rtb-218bc449 points to the IGW. Route table rtb-238bc44b for the private subnets should point to the NAT instance for outgoing internet access, as sketched below.
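The key route change from option D can be sketched with boto3 as follows; the route table ID comes from the scenario, while the NAT instance ID is a placeholder assumption.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# rtb-238bc44b serves the private (application and database) subnets.
# Send their internet-bound traffic to the NAT instance that sits in the
# public subnet subnet-258bc44d.
ec2.create_route(
    RouteTableId="rtb-238bc44b",
    DestinationCidrBlock="0.0.0.0/0",
    InstanceId="i-0nat1234567890abc",   # placeholder NAT instance ID
)

# A NAT instance must also have source/destination checking disabled.
ec2.modify_instance_attribute(
    InstanceId="i-0nat1234567890abc",
    SourceDestCheck={"Value": False},
)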

Your company runs a complex customer system consisting of 10 different software components, all backed by RDS. You adopted OpsWorks to simplify management and deployment of that application and created a stack and layers for each component. A security policy requires that all instances should run on the latest AMI and that instances must be replaced within one month after the latest AMI has been released. AMI replacements should be done without incurring application downtime or capacity problems. You decide to write a script to be run as soon as the new AMI is released. Choose 2 options which meet your requirements: A. Assign a custom recipe to each layer which replaces the underlying AMI. Use OpsWorks life-cycle events to incrementally execute this custom recipe and update the instances with the new AMI. B. Specify the latest AMI as the custom AMI at the stack level, terminate the instances of the stack, and let OpsWorks launch new instances with the new AMI. C. Identify all EC2 instances of your OpsWorks stack, stop each instance, replace the AMI ID property with the latest AMI ID, and restart the instance. To avoid down time, make sure no more than one instance is stopped at the same time. D. Create a new stack and layers with identical configuration, add instances with the latest AMI specified as a custom AMI to the new layers, switch DNS to the new stack, and tear down the old stack. E. Add new instances with the latest Amazon AMI as a custom AMI to all OpsWorks layers of your stack and terminate the old ones.

D. Create a new stack and layers with identical configuration, add instances with the latest AMI specified as a custom AMI to the new layers, switch DNS to the new stack, and tear down the old stack. E. Add new instances with the latest Amazon AMI as a custom AMI to all OpsWorks layers of your stack and terminate the old ones. Answer - D and E Option A is incorrect because to change the AMI you would have to launch new instances, and you can't do that with Chef recipes alone. Option B is incorrect because the AMI replacements should be done without incurring application downtime or capacity problems; if you terminate all instances of the stack at once, all applications will be stopped. Option C is incorrect because the application could face the problem of insufficient capacity while instances are stopped. Option D is CORRECT because it represents the common practice of blue-green deployment, which reduces downtime and risk by running two identical production environments called Blue and Green. Please see the "More information" section for additional details. Option E is CORRECT because you can add new instances at the layer level that use the latest custom AMI specified for the stack, and then terminate the old ones, as sketched below. More information on Blue-Green Deployment: Blue-green deployment is a technique that reduces downtime and risk by running two identical production environments called Blue and Green. At any time, only one of the environments is live, with the live environment serving all production traffic. For this example, Blue is currently live and Green is idle. As you prepare a new version of your software, deployment and the final stage of testing take place in the environment that is not live: in this example, Green. Once you have deployed and fully tested the software in Green, you switch the router so all incoming requests now go to Green instead of Blue. Green is now live, and Blue is idle. This technique can eliminate downtime due to application deployment. In addition, blue-green deployment reduces risk: if something unexpected happens with your new version on Green, you can immediately roll back to the last version by switching back to Blue. Please refer to the below URL for more details https://d0.awsstatic.com/whitepapers/AWS_Blue_Green_Deployments.pdf
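Option E could be scripted roughly as below with the OpsWorks API; the stack ID, AMI ID, and instance type are placeholders, and the assumption here is that supplying a custom AMI requires Os to be set to "Custom".

import boto3

opsworks = boto3.client("opsworks", region_name="us-east-1")

STACK_ID = "00000000-0000-0000-0000-000000000000"   # placeholder stack ID
LATEST_AMI = "ami-0123456789abcdef0"                 # placeholder latest AMI

# 1. Add a new instance with the latest AMI to every layer of the stack.
layers = opsworks.describe_layers(StackId=STACK_ID)["Layers"]
for layer in layers:
    new = opsworks.create_instance(
        StackId=STACK_ID,
        LayerIds=[layer["LayerId"]],
        InstanceType="m5.large",   # placeholder instance type
        AmiId=LATEST_AMI,
        Os="Custom",               # assumed requirement when a custom AMI is supplied
    )
    opsworks.start_instance(InstanceId=new["InstanceId"])

# 2. Once the new instances are online and healthy, stop and delete the
#    old ones (omitted here) so capacity is never reduced during the swap.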

Which of the below-mentioned methods is the best to stop a series of attacks coming from a set of determined IP ranges? A. Create a custom route table associated with the web tier and block the attacking IP addresses from the IGW (internet Gateway) B. Change the EIP (Elastic IP Address) of the NAT instance in the web tier subnet and update the Main Route Table with the new EIP C. Create 15 Security Group rules to block the attacking IP addresses over port 80 D. Create an inbound NACL (Network Access control list) associated with the web tier subnet with deny rules to block the attacking IP addresses

D. Create an inbound NACL (Network Access control list) associated with the web tier subnet with deny rules to block the attacking IP addresses Answer - D In this scenario, the attack is coming from a set of known IP ranges, and you need to block that traffic. In such questions, always think about two options: security groups and Network Access Control Lists (NACLs). Security groups operate at the individual instance level, whereas NACLs operate at the subnet level. You should always fortify the NACL first, as it is encountered first during communication with the instances in the VPC. Option A is incorrect because IP addresses cannot be blocked using a route table or the IGW. Option B is incorrect because changing the EIP of the NAT instance cannot block incoming traffic from a particular IP address. Option C is incorrect because (a) you cannot create deny rules with security groups, and (b) by default all requests are denied and you only open access for particular IP addresses or ranges; you cannot deny access for particular IP addresses using security groups. Option D is CORRECT because you can add deny rules to a NACL and block access from the attacking IP ranges. A sketch of such a deny rule, added through the API, is shown below.
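A minimal boto3 sketch of such a rule; the NACL ID and the attacking CIDR range are placeholder assumptions.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Deny inbound HTTP traffic from one of the attacking ranges.
# NACL rules are evaluated in order, so give deny rules a lower
# rule number than the allow rules that follow them.
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",   # NACL of the web tier subnet (placeholder)
    RuleNumber=90,
    Protocol="6",                            # TCP
    RuleAction="deny",
    Egress=False,                            # inbound rule
    CidrBlock="203.0.113.0/24",              # placeholder attacking range
    PortRange={"From": 80, "To": 80},
)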

A user has configured an SSL listener at ELB as well as on the back-end instances. Which of the below-mentioned statements helps the user understand ELB traffic handling with respect to the SSL listener? A. It is not possible to have the SSL listener both at ELB and back-end instances. B. ELB will modify headers to add requestor details. C. ELB will intercept the request to add the cookie details if sticky session is enabled. D. ELB will not modify the headers.

D. ELB will not modify the headers. Answer - D Option A is invalid because if the front-end connection uses TCP or SSL, then your back-end connections can use either TCP or SSL. If the front-end connection uses HTTP or HTTPS, then your back-end connections can use either HTTP or HTTPS.Option B is invalid because when you use TCP/SSL for both front-end and back-end connections, your load balancer forwards the request to the back-end instances without modifying the headers. Option C is invalid because with this configuration you do not receive cookies for session stickiness. But, when you use HTTP/HTTPS, you can enable sticky sessions on your load balancer. Option D is CORRECT because with SSL configuration, the load balancer will forward the request to the back-end instances without modifying the headers. For more information on ELB, please visit the link: http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-listener-config.html

You have been asked to leverage Amazon VPC, EC2, and SQS to implement an application that submits and receives millions of messages per second to a message queue. You want to ensure that your application has sufficient bandwidth between your EC2 instances and SQS. Which option will provide the most scalable solution for communicating between the application and SQS? A. Ensure the application instances are properly configured with an Elastic Load Balancer. B. Ensure the application instances are launched in private subnets with the EBS-optimized option enabled. C. Ensure the application instances are launched in private subnets with the associate-public-IP-address=true option enabled. Remove any NAT instance from the public subnet, if any. D. Ensure the application instances are launched in public subnets with an Auto Scaling group and Auto Scaling triggers are configured to watch the SQS queue size.

D. Ensure the application instances are launched in public subnets with an Auto Scaling group and Auto Scaling triggers are configured to watch the SQS queue size. Answer - D For the exam, remember that Amazon SQS is an Internet-based service. To connect to the Amazon SQS Endpoint (sqs.us-east-1.amazonaws.com), the Amazon EC2 instance requires access to the Internet. Hence, either it should be in a public subnet or be in a private subnet with a NAT instance/gateway in the public subnet. Option A is incorrect because ELB does not ensure scalability.Option B is incorrect because (a) EBS-optimized option will not contribute to scalability, and (b) there should be a NAT instance/gateway in the public subnet of the VPC for accessing SQS.Option C is incorrect because if you remove the NAT instance, the EC2 instance cannot access SQS service.Option D is CORRECT because (a) it uses Auto Scaling for ensuring scalability of the application, and (b) it has instances in the public subnet so they can access the SQS service over the internet. For more information on SQS, please visit the below URL https://aws.amazon.com/sqs/faqs/
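Option D's queue-driven scaling can be sketched as a scaling policy plus a CloudWatch alarm on the queue depth; the group name, queue name, and threshold below are assumptions.

import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Scale-out policy: add two instances when triggered.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName="sqs-workers-asg",   # placeholder group
    PolicyName="scale-out-on-queue-depth",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=2,
    Cooldown=300,
)

# Alarm on the SQS backlog that fires the scaling policy.
cloudwatch.put_metric_alarm(
    AlarmName="sqs-backlog-high",
    Namespace="AWS/SQS",
    MetricName="ApproximateNumberOfMessagesVisible",
    Dimensions=[{"Name": "QueueName", "Value": "work-queue"}],  # placeholder queue
    Statistic="Average",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1000,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)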

A user has created a VPC with public and private subnets using the VPC wizard. The user has not launched any instance manually and is trying to delete the VPC. What will happen in this scenario? A. It will not allow to delete the VPC as it has subnets with route tables. B. It will not allow to delete the VPC since it has a running route instance. C. It will terminate the VPC along with all the instances launched by the wizard. D. It will not allow to delete the VPC since it has a running NAT instance.

D. It will not allow to delete the VPC since it has a running NAT instance. Answer - D Because the wizard's public/private subnet configuration launches a NAT instance, the VPC cannot be deleted while that instance is still running; the deletion fails until the NAT instance is terminated. Hence, option D is CORRECT. For more information on VPC and subnets please visit the link http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html

A mobile application has been developed which stores data in DynamoDB. The application needs to scale to handle millions of views. The customer also needs access to the data in the DynamoDB table as part of the application. Which of the below methods would help to fulfill this requirement? A. Configure an on-premise AD server utilizing SAML 2.0 to manage the application users inside of the on-premise AD server and write code that authenticates against the LDAP servers. Grant a role assigned to the STS token to allow the end-user to access the required data in the DynamoDB table. B. Let the users sign into the app using a third party identity provider such as Amazon, Google, or Facebook. Use the AssumeRoleWith API call to assume the role containing the proper permissions to communicate with the DynamoDB table. Write the application in JavaScript and host the JavaScript interface in an S3 bucket. C. Let the users sign into the app using a third party identity provider such as Amazon, Google, or Facebook. Use the AssumeRoleWithWebIdentity API call to assume the role containing the proper permissions to communicate with the DynamoDB table. Write the application in a server-side language using the AWS SDK and host the application in an S3 bucket for scalability. D. Let the users sign in to the app using a third party identity provider such as Amazon, Google, or Facebook. Use the AssumeRoleWithWebIdentity API call to assume the role containing the proper permissions to communicate with the DynamoDB table. Write the application in JavaScript and host the JavaScript interface in an S3 bucket.

D. Let the users sign in to the app using a third party identity provider such as Amazon, Google, or Facebook. Use the AssumeRoleWithWebIdentity API call to assume the role containing the proper permissions to communicate with the DynamoDB table. Write the application in JavaScript and host the JavaScript interface in an S3 bucket. Answer - D The AssumeRoleWithWebIdentity API returns a set of temporary security credentials for users who have been authenticated in a mobile or web application with a web identity provider, such as Amazon Cognito, Login with Amazon, Facebook, Google, or any OpenID Connect-compatible identity provider. Out of options C and D, option C is invalid because S3 can only host static websites, not applications written in a server-side language. For more information on AssumeRoleWithWebIdentity, please visit the below URL: http://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRoleWithWebIdentity.html

Currently, a company uses Redshift to store its analyzed data. They have started with the base configuration. What would they get when they initially start using Redshift? A. Two nodes with 320GB each B. One node of 320GB C. Two nodes with 160GB each D. One node of 160GB

D. One node of 160GB Answer - D As per the AWS documentation, when you start with the base (single-node) configuration, Amazon Redshift launches a single dc1.large dense compute node, which provides 160 GB of storage. A sketch of launching such a cluster is shown below. For more information on Redshift please refer to the below URL: https://aws.amazon.com/redshift/faqs/
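A minimal single-node cluster request looks like the following with boto3; the identifier and credentials are placeholders, and dc1.large is assumed to be the node type behind the 160 GB figure quoted above.

import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

# Base configuration: a single-node cluster. A dc1.large node provides
# 160 GB of local SSD storage, matching answer D.
redshift.create_cluster(
    ClusterIdentifier="analytics-cluster",   # placeholder
    ClusterType="single-node",
    NodeType="dc1.large",
    DBName="analytics",
    MasterUsername="adminuser",              # placeholder
    MasterUserPassword="Example-Passw0rd",   # placeholder; store real secrets securely
)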

An organization has configured Auto Scaling with ELB. There is a memory issue in the application which is causing CPU utilization to go above 90%. The higher CPU usage triggers an event for Auto Scaling as per the scaling policy. If the user wants to find the root cause inside the application without triggering a scaling activity, how can he achieve this? A. Stop the scaling process until research is completed. B. It is not possible to find the root cause from that instance without triggering scaling. C. Delete AutoScaling group until research is completed. D. Suspend the scaling process until research is completed.

D. Suspend the scaling process until research is completed. Answer - D In this scenario, the user wants to investigate the problem without triggering a scaling activity. For this, the user can leverage the suspend and resume options available in Auto Scaling. Option A is incorrect because the scaling process does not need to be stopped; it can be suspended and later resumed. Option B is incorrect because scaling can be temporarily suspended while the investigation is carried out. Option C is incorrect because deleting the Auto Scaling group is unnecessarily drastic for this scenario. Option D is CORRECT because you can suspend and then resume one or more of the scaling processes for your Auto Scaling group if you want to investigate a configuration problem or other issue with your web application and then make changes to your application, without triggering the scaling processes. For more information on suspending Auto Scaling processes, please visit the link http://docs.aws.amazon.com/autoscaling/latest/userguide/as-suspend-resume-processes.html
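Suspending and later resuming the scaling processes can be done as below; the group name and the exact set of processes to suspend are assumptions for illustration.

import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Pause the processes that would otherwise launch/terminate instances
# while the memory issue is being investigated.
autoscaling.suspend_processes(
    AutoScalingGroupName="app-asg",   # placeholder group name
    ScalingProcesses=["Launch", "Terminate", "AlarmNotification"],
)

# ... investigate the instance without triggering any scaling activity ...

# Resume normal scaling once the root cause is found.
autoscaling.resume_processes(
    AutoScalingGroupName="app-asg",
    ScalingProcesses=["Launch", "Terminate", "AlarmNotification"],
)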

A user is using a small MySQL RDS DB. The user is experiencing high latency due to the Multi-AZ feature. Which of the below-mentioned options may not help the user in this situation? A. Schedule the automated backup in non-working hours B. Use a large or higher size instance C. Use Provisioned IOPS storage D. Take a snapshot from standby Replica

D. Take a snapshot from standby Replica Answer - D Option A is incorrect because scheduling the automated backups in non-working hours reduces the load on the DB instance and helps reduce latency. Option B is incorrect because using a larger instance would help process the queries and carry the load more efficiently, thus reducing the overall latency. Option C is incorrect because with Provisioned IOPS storage the users would get higher, more consistent throughput from the DB instance. Option D is CORRECT because taking a snapshot from the standby replica does not change the load on the primary instance, so the users will keep experiencing the high latency as they currently are. More information on Multi-AZ deployment: In a Multi-AZ deployment, Amazon RDS automatically provisions and maintains a synchronous standby replica in a different Availability Zone. The primary DB instance is synchronously replicated across Availability Zones to a standby replica to provide data redundancy, eliminate I/O freezes, and minimize latency spikes during system backups. As per the AWS best practices for Multi-AZ: for production workloads, use Provisioned IOPS and DB instance classes (m1.large and larger) that are optimized for Provisioned IOPS for fast, consistent performance; hence options B and C would help. Also, if backups are scheduled during working hours, I/O can be briefly suspended and latency increases, so it is better to schedule them outside office hours; hence option A would help as well. For more information on Multi-AZ RDS, please visit the link: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html
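The three mitigations that do help (options A, B, and C) map onto a single ModifyDBInstance call; the instance identifier, instance class, window, and storage figures below are placeholder assumptions.

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Move backups outside working hours, use a larger instance class,
# and switch to Provisioned IOPS storage.
rds.modify_db_instance(
    DBInstanceIdentifier="orders-db",        # placeholder
    PreferredBackupWindow="02:00-03:00",     # non-working hours (UTC)
    DBInstanceClass="db.m5.large",           # larger instance class (placeholder)
    StorageType="io1",
    Iops=3000,                               # Provisioned IOPS (placeholder)
    AllocatedStorage=200,                    # sized to match the IOPS (placeholder)
    ApplyImmediately=False,                  # apply in the next maintenance window
)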

You've been working on a CloudFront whole-site CDN. After configuring the whole-site CDN with a custom CNAME and a supported HTTPS custom domain (i.e., https://domain.com), you open domain.com and receive the following error: "CloudFront wasn't able to connect to the origin." What might be the most likely cause of this error and how would you fix it? Choose the correct answer from the below options: A. The HTTPS certificate is expired or missing a third party signer. To resolve this purchase and add a new SSL certificate. B. HTTPS isn't configured on the CloudFront distribution but is configured on the CloudFront origin. C. The origin on the CloudFront distribution is the wrong origin. D. The Origin Protocol Policy is set to Match Viewer and HTTPS isn't configured on the origin.

D. The Origin Protocol Policy is set to Match Viewer and HTTPS isn't configured on the origin. Answer - D Options A, B, and C are all incorrect because in those scenarios CloudFront returns HTTP status code 502 (Bad Gateway). Option D is CORRECT because this error occurs when the Origin Protocol Policy is set to Match Viewer but HTTPS isn't configured on the origin, so CloudFront cannot connect to the origin over the protocol the viewer used. For more information on CloudFront CDN please see the below links http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-web-values-specify.html https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/http-502-bad-gateway.html#ssl-certificate-expired

You have an EBS root device on /dev/sda1 on one of your EC2 instances. You are having trouble with this particular instance and you want to either Stop/Start, Reboot or Terminate the instance but you do not want to lose any data that you have stored on /dev/sda1. Which of the below statements best describes the effect each change of instance state would have on the data you have stored on /dev/sda1? Choose the correct option from the below: A. Whether you stop/start, reboot or terminate the instance it does not matter because data on an EBS volume is not ephemeral and the data will not be lost regardless of what method is used B. Whether you stop/start, reboot or terminate the instance it does not matter because data on an EBS volume is ephemeral and it will be lost no matter what method is used. C. If you stop/start the instance the data will not be lost. However, if you either terminate or reboot the instance the data will be lost. D. The data in an instance store is not permanent - it persists only during the lifetime of the instance. The data will be lost if you terminate the instance. However, the data will remain on /dev/sda1 if you reboot or stop/start the instance because data on an EBS volume is not ephemeral.

D. The data in an instance store is not permanent - it persists only during the lifetime of the instance. The data will be lost if you terminate the instance. However, the data will remain on /dev/sda1 if you reboot or stop/start the instance because data on an EBS volume is not ephemeral. Answer - D Since this is an EBS-backed instance, it can be stopped and later restarted without affecting data stored in the attached volumes. By default, however, the root device volume is deleted when the instance terminates. Option A is incorrect because upon termination the root volume would be deleted and the data lost (the DeleteOnTermination setting is not mentioned, so the default applies). Option B is incorrect because the data on an EBS volume is not lost upon stop/start or reboot. Option C is incorrect because the data on an EBS volume is not lost upon reboot. Option D is CORRECT because the data on the EBS volume is not lost upon stop/start or reboot, as it is not ephemeral. Instance store, on the other hand, is ephemeral storage and its data is lost when the instance is stopped or terminated (this is extra information given in the option, not strictly needed for the question, but correct nonetheless). A sketch of checking and changing the root volume's DeleteOnTermination flag is shown below. More information on this topic: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ComponentsAMIs.html https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/RootDeviceStorage.html
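Related to the termination caveat above, the root volume's DeleteOnTermination flag can be inspected and changed like this; the instance ID is a placeholder assumption.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
INSTANCE_ID = "i-0123456789abcdef0"   # placeholder

# Inspect the current DeleteOnTermination setting of /dev/sda1.
reservations = ec2.describe_instances(InstanceIds=[INSTANCE_ID])["Reservations"]
for mapping in reservations[0]["Instances"][0]["BlockDeviceMappings"]:
    print(mapping["DeviceName"], mapping["Ebs"]["DeleteOnTermination"])

# Keep the root EBS volume even if the instance is terminated.
ec2.modify_instance_attribute(
    InstanceId=INSTANCE_ID,
    BlockDeviceMappings=[{
        "DeviceName": "/dev/sda1",
        "Ebs": {"DeleteOnTermination": False},
    }],
)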

A user has launched an EC2 instance store-backed instance in the us-east-1a zone. The user created AMI #1 and copied it to the eu-west-1 region. After that, the user made a few updates to the application running in the us-east-1a zone. The user makes an AMI #2 after the changes. If the user launches a new instance in Europe from the AMI #1 copy, which of the below-mentioned statements is true? A. The new instance will have the changes made after the AMI copy as AWS just copies the reference of the original AMI during the copying. Thus, the copied AMI will have all the updated data. B. The new instance will have the changes made after the AMI copy since AWS keeps updating the AMI. C. It is not possible to copy the instance store backed AMI from one region to another. D. The new instance in the eu-west-1 region will not have the changes made after the AMI copy.

D. The new instance in the eu-west-1 region will not have the changes made after the AMI copy. Answer - D Option A is incorrect because (a) the changes made to the instance are not automatically reflected in the AMI, and (b) the already copied AMI has no reference to the original AMI in the us-east-1 region. Option B is incorrect because AWS does not automatically update AMIs; it needs to be done manually. Option C is incorrect because you can copy instance store-backed AMIs between regions. Option D is CORRECT because the instance in the EU region will not have any changes made after the AMI was copied. You would need to copy AMI #2 to eu-west-1 and launch a new instance from it to get all the changes, as sketched below. For the full details on copying AMIs, please visit the link - http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/CopyingAMIs.html
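Copying AMI #2 to eu-west-1 so that a new launch there includes the later changes could look like the following boto3 sketch; the source AMI ID and name are placeholders.

import boto3

# The CopyImage call is issued against the DESTINATION region.
ec2_eu = boto3.client("ec2", region_name="eu-west-1")

copy = ec2_eu.copy_image(
    Name="app-ami-2-eu-copy",
    SourceImageId="ami-0123456789abcdef0",  # placeholder ID of AMI #2 in us-east-1
    SourceRegion="us-east-1",
)
print("New AMI in eu-west-1:", copy["ImageId"])

# Instances launched from this new AMI in eu-west-1 will include the
# updates made before AMI #2 was created.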

An organization (Account ID 123412341234) has attached the below-mentioned IAM policy to a user. What does this policy statement entitle the user to perform?
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AllowUsersAllActionsForCredentials",
    "Effect": "Allow",
    "Action": [
      "iam:*LoginProfile",
      "iam:*AccessKey*",
      "iam:*SigningCertificate*"
    ],
    "Resource": ["arn:aws:iam::123412341234:user/${aws:username}"]
  }]
}
A. The policy allows the IAM user to modify all IAM user's credentials using the console, SDK, CLI or APIs. B. The policy will give an invalid resource error. C. The policy allows the IAM user to modify all credentials using only the console. D. The policy allows the user to modify the IAM user's password, sign in certificates and access keys only.

D. The policy allows the user to modify the IAM user's password, sign in certificates and access keys only. Answer - D First, the policy is scoped to the logged-in user only by the resource "arn:aws:iam::account-id-without-hyphens:user/${aws:username}"; the ${aws:username} policy variable resolves to the name of the currently authenticated IAM user. Next, the actions "iam:*LoginProfile", "iam:*AccessKey*", and "iam:*SigningCertificate*" grant permission to manage that user's own login profile (password), access keys, and signing certificates. For information on IAM security policies, please visit the link: http://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html

A user is using CloudFormation to launch an EC2 instance and then planning to configure an application after the instance is launched. The user wants the stack creation of ELB and AutoScaling to wait until the EC2 instance is launched and configured properly. How can the user configure this? A. It is not possible that the stack creation will wait until one service is created and launched. B. The user can use the HoldCondition resource to wait for the creation of the other dependent resources. C. The user can use the DependentCondition resource to hold the creation of the other dependent resources. D. The user can use the WaitCondition resource to hold the creation of the other dependent resources.

D. The user can use the WaitCondition resource to hold the creation of the other dependent resources. Answer - D You can use a wait condition for situations like the following: to coordinate stack resource creation with configuration actions that are external to the stack creation, and to track the status of a configuration process. For more information on the CloudFormation wait condition please visit the link http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-waitcondition.html

You are an architect for a new content-sharing mobile application. Anywhere in the world, your users can see local news on topics they chose. They can post pictures and videos from inside the application. Since the application is being used on a mobile phone, connection stability is required for uploading content and delivery should be quick. Content is accessed a lot in the first minutes after it has been posted but is quickly replaced by new content before disappearing. The local nature of the news means that 90% of the uploaded content is then read locally. What solution will optimize the user experience when users upload and view content (by minimizing page load times and minimizing upload times)? A. Upload and store in S3, and use CloudFront. B. Upload and store in S3 in the region closest to the user and then use multiple distributions of CloudFront. C. Upload to EC2 in regions closer to the user, send content to S3, use CloudFront. D. Use CloudFront for uploading the content to S3 bucket and for content delivery.

D. Use CloudFront for uploading the content to S3 bucket and for content delivery. Answer - D Option A is incorrect because, even though it is a workable solution, a better approach is to use CloudFront for uploading as well as distributing the content (not just distributing), which is what option D does. Options B and C are both incorrect because you do not need to upload the content to a source that is closer to the user; CloudFront takes care of that by accepting uploads at the nearest edge location. Option D is CORRECT because it uses CloudFront for both uploading and distributing the content, which is the most efficient use of the service. For more information on CloudFront please refer to the below URL http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/GettingStarted.html

Your company is migrating infrastructure to AWS. A large number of developers and administrators will need to control this infrastructure using the AWS Management Console. The Identity Management team is objecting to creating an entirely new directory of IAM users for all employees, and the employees are reluctant to commit yet another password to memory. Which of the following will satisfy both these stakeholders? A. Users sign in using an OpenID Connect (OIDC) compatible IdP, receive an authentication token, then use that token to log in to the AWS Management Console. B. Users log in directly to the AWS Management Console using the credentials from your on-premises Kerberos compliant Identity provider. C. Users log in to the AWS Management Console using the AWS Command Line Interface. D. Users request a SAML assertion from your on-premises SAML 2.0-compliant identity provider (IdP) and use that assertion to obtain federated access to the AWS Management Console via the AWS single sign-on (SSO) endpoint.

D. Users request a SAML assertion from your on-premises SAML 2.0-compliant identity provider (IdP) and use that assertion to obtain federated access to the AWS Management Console via the AWS single sign-on (SSO) endpoint. Answer - D Option A is incorrect because, although it is workable, the users do not need an OpenID Connect IdP (such as Facebook, Google, or Salesforce) in this scenario; they can use the on-premises SAML 2.0-compliant IdP to get federated access to AWS. Access via an OpenID Connect IdP is most suitable for mobile apps. Option B is incorrect because you cannot log in to AWS directly with IdP-provided credentials; you need temporary credentials issued via the Security Token Service (STS). Option C is incorrect because the CLI relies on access keys and secret keys, and distributing long-term keys to every employee is the least secure approach; it also does not log users in to the AWS Management Console. Option D is CORRECT because it uses the on-premises SAML 2.0-compliant IdP to get federated access to AWS, thus avoiding the creation of IAM users for the employees and any additional passwords. For more information on SAML authentication in AWS, please visit the below URL: http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_enable-console-saml.html A sketch of the equivalent programmatic federation call is shown below.
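Console access in option D goes through the AWS sign-in endpoint, but the same federation pattern for programmatic access uses AssumeRoleWithSAML; in this boto3 sketch the ARNs are placeholders and the SAML assertion is whatever the on-premises IdP returns (base64-encoded).

import boto3

sts = boto3.client("sts")

# Exchange the IdP's SAML assertion for temporary AWS credentials.
resp = sts.assume_role_with_saml(
    RoleArn="arn:aws:iam::123456789012:role/DeveloperFederatedRole",   # placeholder
    PrincipalArn="arn:aws:iam::123456789012:saml-provider/CorpIdP",    # placeholder
    SAMLAssertion="<base64-encoded-assertion-from-the-idp>",           # placeholder
    DurationSeconds=3600,
)
creds = resp["Credentials"]
print("Temporary key:", creds["AccessKeyId"], "expires:", creds["Expiration"])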

