AWS Architect Professional (w multiple choices) from bassthumper 69Q

- The application authenticates against LDAP and retrieves the name of an IAM role associated with the user. The application then calls the IAM Security Token Service to assume that IAM role. The application can use the temporary credentials to access the appropriate S3 bucket. (Authenticates with LDAP and calls AssumeRole) - Develop an identity broker that authenticates against LDAP and then calls the IAM Security Token Service to get IAM federated user credentials. The application calls the identity broker to get IAM federated user credentials with access to the appropriate S3 bucket. (Custom identity broker implementation, authenticating with LDAP and using a federated token)

A corporate web application is deployed within an Amazon Virtual Private Cloud (VPC) and is connected to the corporate data center via an IPsec VPN. The application must authenticate against the on-premises LDAP server. After authentication, each logged-in user can only access an Amazon Simple Storage Service (S3) keyspace specific to that user. Which two approaches can satisfy these objectives? (Choose 2 answers) [PROFESSIONAL]
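
Both correct options hinge on exchanging an LDAP login for temporary AWS credentials via STS. A minimal sketch of the AssumeRole flow in Python with boto3; the role-lookup helper, role ARN, and account ID are placeholders standing in for the identity broker's own logic:

```python
import boto3

def lookup_role_arn(username, password):
    # Placeholder for the identity broker's LDAP bind + role lookup;
    # in practice this queries the corporate directory.
    return 'arn:aws:iam::123456789012:role/user-s3-keyspace'  # assumed ARN

def s3_client_for_user(username, password):
    role_arn = lookup_role_arn(username, password)
    creds = boto3.client('sts').assume_role(
        RoleArn=role_arn,
        RoleSessionName=username,
        DurationSeconds=3600,
    )['Credentials']
    # Temporary credentials limited to the role's S3 keyspace policy.
    return boto3.client(
        's3',
        aws_access_key_id=creds['AccessKeyId'],
        aws_secret_access_key=creds['SecretAccessKey'],
        aws_session_token=creds['SessionToken'],
    )
```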

B. Add additional IAM policies to the application IAM roles that deny user privileges based on information security policy. (Additional policies with deny rules based on location, device, etc.; an explicit deny always wins over an allow)

A customer is in the process of deploying multiple applications to AWS that are owned and operated by different development teams. Each development team maintains the authorization of its users independently from other teams. The customer's information security team would like to be able to delegate user authorization to the individual development teams but independently apply restrictions to the users' permissions based on factors such as the user's device and location. For example, the information security team would like to grant read-only permissions to a user who is defined by the development team as read/write whenever the user is authenticating from outside the corporate network. What steps can the information security team take to implement this capability? A. Operate an authentication service that generates AWS Security Token Service (STS) tokens with IAM policies from application-defined IAM roles. B. Add additional IAM policies to the application IAM roles that deny user privileges based on information security policy. C. Configure IAM policies that restrict modification of the application IAM roles only to the information security team. D. Enable federation with the internal LDAP directory and grant the application teams permissions to modify users.
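
The mechanism behind these answers is that an STS session carries the intersection of the role's policy and any session policy passed at AssumeRole time, and an explicit Deny always wins. A hedged sketch (the role ARN and corporate CIDR are made-up values) of a token service downgrading a read/write role outside the corporate network:

```python
import json
import boto3

# Hypothetical restriction applied by the information security team's
# token service: outside the corporate CIDR, S3 write actions are denied.
restriction = {
    "Version": "2012-10-17",
    "Statement": [
        # Session policies grant nothing by themselves; the Allow here
        # lets the role's own policy pass through the intersection.
        {"Effect": "Allow", "Action": "*", "Resource": "*"},
        {"Effect": "Deny",
         "Action": ["s3:Put*", "s3:Delete*"],
         "Resource": "*",
         "Condition": {"NotIpAddress": {"aws:SourceIp": "203.0.113.0/24"}}},
    ],
}

sts = boto3.client('sts')
creds = sts.assume_role(
    RoleArn='arn:aws:iam::123456789012:role/app-team-readwrite',  # app-defined role
    RoleSessionName='external-session',
    Policy=json.dumps(restriction),  # effective permissions = role policy minus denies
)['Credentials']
```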

B. Asynchronous replication

A customer is running an application in the US-West (Northern California) region and wants to set up disaster recovery failover to the Asia Pacific (Singapore) region. The customer is interested in achieving a low Recovery Point Objective (RPO) for an Amazon Relational Database Service (RDS) multi-AZ MySQL database instance. Which approach is best suited to this need? A) Synchronous replication B) Asynchronous replication C) Route53 health checks D) Copying of RDS incremental snapshots
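
RDS Multi-AZ replication is synchronous but only spans AZs within one region; cross-region DR relies on asynchronous replication, e.g. a cross-region read replica, whose replication lag determines the achievable RPO. A sketch with boto3 (instance identifiers and account ID are placeholders):

```python
import boto3

# Create the replica in the DR region; RDS replicates asynchronously.
rds_dr = boto3.client('rds', region_name='ap-southeast-1')
rds_dr.create_db_instance_read_replica(
    DBInstanceIdentifier='mydb-dr-replica',  # placeholder name
    # Cross-region sources are referenced by ARN.
    SourceDBInstanceIdentifier='arn:aws:rds:us-west-1:123456789012:db:mydb',
    SourceRegion='us-west-1',  # boto3 presigns the cross-region request
)
```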

D. A VPC with a VPN Gateway back to their on-premises servers, Amazon EC2 for the source-code repository with attached Amazon EBS volumes, Amazon EC2 and Amazon RDS MySQL for the project management system, EIPs for the source code repository and project management system, SQS for a build queue, An Auto Scaling group of EC2 instances for performing builds and S3 for the build output. (VPN gateway is required for secure connectivity. SQS for build queue and EC2 for builds)

A development team that is currently doing a nightly six-hour build, which is lengthening over time on-premises with a large and mostly underutilized server, would like to transition to a continuous integration model of development on AWS with multiple builds triggered within the same day. However, they are concerned about cost, security, and how to integrate with existing on-premises applications such as their LDAP and email servers, which cannot move off-premises. The development environment needs a source code repository, a project management system with a MySQL database, resources for performing builds, and a storage location for QA to pick up builds from. What AWS services combination would you recommend to meet the development team's requirements? A) A Bastion host Amazon EC2 instance running a VPN server for access from on-premises, Amazon EC2 for the source code repository with attached Amazon EBS volumes, Amazon EC2 and Amazon RDS MySQL for the project management system, EIP for the source code repository and project management system, Amazon SQS for a build queue, an Auto Scaling group of Amazon EC2 instances for performing builds and Amazon Simple Email Service for sending the build output. B) An AWS Storage Gateway for connecting on-premises software applications with cloud-based storage securely, Amazon EC2 for the source code repository with attached Amazon EBS volumes, Amazon EC2 and Amazon RDS MySQL for the project management system, EIPs for the source code repository and project management system, Amazon Simple Notification Service for a notification-initiated build, an Auto Scaling group of Amazon EC2 instances for performing builds and Amazon S3 for the build output. C) An AWS Storage Gateway for connecting on-premises software applications with cloud-based storage securely, Amazon EC2 for the source code repository with attached Amazon EBS volumes, Amazon EC2 and Amazon RDS MySQL for the project management system, EIPs for the source code repository and project management system, Amazon SQS for a build queue, an Amazon Elastic MapReduce (EMR) cluster of Amazon EC2 instances for performing builds and Amazon CloudFront for the build output. D) A VPC with a VPN Gateway back to their on-premises servers, Amazon EC2 for the source code repository with attached Amazon EBS volumes, Amazon EC2 and Amazon RDS MySQL for the project management system, EIPs for the source code repository and project management system, SQS for a build queue, an Auto Scaling group of EC2 instances for performing builds and S3 for the build output.
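
The SQS-plus-Auto-Scaling pattern in answer D decouples build triggers from build capacity: commits enqueue work, and the worker fleet drains the queue. A minimal sketch of the queue interaction (the queue name and message shape are illustrative):

```python
import boto3

sqs = boto3.resource('sqs')
queue = sqs.get_queue_by_name(QueueName='build-queue')  # placeholder name

# CI trigger: enqueue a build request (e.g., on every commit).
queue.send_message(MessageBody='{"repo": "main", "commit": "abc123"}')

# Build worker (inside the Auto Scaling group): pull, build, upload to S3.
for msg in queue.receive_messages(WaitTimeSeconds=20, MaxNumberOfMessages=1):
    # Placeholder for the actual build step, which writes output to S3.
    print('building', msg.body)
    msg.delete()  # remove only after a successful build
```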

A. The company should utilize an Amazon Simple Workflow Service activity worker that updates the user's data counter in Amazon DynamoDB. The activity worker will use Simple Email Service to send an email if the counter increases above the appropriate thresholds.

A document storage company is deploying their application to AWS and changing their business model to support both Free Tier and Premium Tier users. The Premium Tier users will be allowed to store up to 200GB of data and Free Tier customers will be allowed to store only 5GB. The customer expects that billions of files will be stored. All users need to be alerted when approaching 75 percent quota utilization and again at 90 percent quota use. To support the Free Tier and Premium Tier users, how should they architect their application? A. The company should utilize an Amazon Simple Workflow Service activity worker that updates the user's data counter in Amazon DynamoDB. The activity worker will use Simple Email Service to send an email if the counter increases above the appropriate thresholds. B. The company should deploy an Amazon Relational Database Service relational database with a stored objects table that has a row for each stored object along with the size of each object. The upload server will query the aggregate consumption of the user in question (by first determining the files stored by the user, and then querying the stored objects table for the respective file sizes) and send an email via Amazon Simple Email Service if the thresholds are breached. C. The company should write both the content length and the username of the file's owner as S3 metadata for the object. They should then create a file watcher to iterate over each object, aggregate the size for each user, and send a notification via Amazon Simple Queue Service to an emailing service if the storage threshold is exceeded. D. The company should create two separate Amazon Simple Storage Service buckets, one for data storage for Free Tier users and another for data storage for Premium Tier users. An Amazon Simple Workflow Service activity worker will query all objects for a given user based on the bucket the data is stored in, aggregate storage, and notify the user via Amazon Simple Notification Service when necessary.
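
Answer A works because DynamoDB can maintain a per-user usage counter atomically, which scales to billions of files where per-object aggregation would not. A sketch of the activity worker's core step (table, attribute, and sender address are illustrative):

```python
import boto3

dynamodb = boto3.client('dynamodb')
ses = boto3.client('ses')

def record_upload(user_email, file_size, quota_bytes):
    # Atomically add the new file's size to the user's running total.
    resp = dynamodb.update_item(
        TableName='user-storage',  # illustrative table name
        Key={'user_id': {'S': user_email}},
        UpdateExpression='ADD bytes_used :delta',
        ExpressionAttributeValues={':delta': {'N': str(file_size)}},
        ReturnValues='UPDATED_NEW',
    )
    used = int(resp['Attributes']['bytes_used']['N'])

    # Alert once when this upload crosses the 75% or 90% threshold.
    for threshold in (0.75, 0.90):
        if used - file_size < quota_bytes * threshold <= used:
            ses.send_email(
                Source='alerts@example.com',  # placeholder sender
                Destination={'ToAddresses': [user_email]},
                Message={'Subject': {'Data': 'Storage quota warning'},
                         'Body': {'Text': {'Data':
                             'You have used %d%% of your quota.'
                             % (100 * used // quota_bytes)}}},
            )
```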

A. Define a deletion policy of type Retain for the Amazon RDS resource to assure that the RDS database is not deleted with the AWS CloudFormation stack. B. Define a deletion policy of type Snapshot for the Amazon RDS resource to assure that the RDS database can be restored after the AWS CloudFormation stack is deleted.

A gaming company adopted AWS CloudFormation to automate load-testing of their games. They have created an AWS CloudFormation template for each gaming environment and one for the load-testing stack. The load-testing stack creates an Amazon Relational Database Service (RDS) Postgres database and two web servers running on Amazon Elastic Compute Cloud (EC2) that send HTTP requests, measure response times, and write the results into the database. A test run usually takes between 15 and 30 minutes. Once the tests are done, the AWS CloudFormation stacks are torn down immediately. The test results written to the Amazon RDS database must remain accessible for visualization and analysis. Select possible solutions that allow access to the test results after the AWS CloudFormation load-testing stack is deleted. Choose 2 answers A. Define a deletion policy of type Retain for the Amazon RDS resource to assure that the RDS database is not deleted with the AWS CloudFormation stack. B. Define a deletion policy of type Snapshot for the Amazon RDS resource to assure that the RDS database can be restored after the AWS CloudFormation stack is deleted. C. Define automated backups with a backup retention period of 30 days for the Amazon RDS database and perform point-in-time recovery of the database after the AWS CloudFormation stack is deleted. D. Define an Amazon RDS Read Replica in the load-testing AWS CloudFormation stack and define a dependency relation between master and replica via the DependsOn attribute. E. Define an update policy to prevent deletion of the Amazon RDS database after the AWS CloudFormation stack is deleted.
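
DeletionPolicy is an attribute set on the resource itself in the template. A minimal fragment, expressed here as a Python dict for brevity (the properties and credentials are illustrative placeholders):

```python
import json

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "LoadTestDatabase": {
            "Type": "AWS::RDS::DBInstance",
            # "Retain" keeps the live instance after stack deletion;
            # "Snapshot" deletes it but leaves a restorable DB snapshot.
            "DeletionPolicy": "Snapshot",
            "Properties": {
                "Engine": "postgres",
                "DBInstanceClass": "db.m3.medium",  # illustrative sizing
                "AllocatedStorage": "100",
                "MasterUsername": "loadtest",
                "MasterUserPassword": "change-me",  # placeholder
            },
        }
    },
}
print(json.dumps(template, indent=2))  # emit the CloudFormation JSON
```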

A. Network stack updates will fail upon attempts to delete a subnet with EC2 instances (subnets cannot be deleted with instances in them) D. Restricting the launch of EC2 instances into VPCs requires resource-level permissions in the IAM policy of the application group (IAM permissions need to be given explicitly to launch instances)

A large enterprise wants to adopt CloudFormation to automate administrative tasks and implement the security principles of least privilege and separation of duties. They have identified the following roles with the corresponding tasks in the company: - network administrators: create, modify and delete VPCs, subnets, NACLs, routing tables, and security groups - application operators: deploy complete application stacks (ELB, Auto Scaling groups, RDS), whereas all resources must be deployed in the VPCs managed by the network administrators. Both groups must maintain their own CloudFormation templates and should be able to create, update and delete only their own CloudFormation stacks. The company has followed your advice to create two IAM groups, one for applications and one for networks. Both IAM groups are attached to IAM policies that grant rights to perform the necessary tasks of each group as well as the creation, update and deletion of CloudFormation stacks. Given the setup and requirements, which statements represent valid design considerations? Choose 2 answers A. Network stack updates will fail upon attempts to delete a subnet with EC2 instances B. Unless resource-level permissions are used on the cloudformation:DeleteStack action, network administrators could tear down application stacks C. The application stack cannot be deleted before all network stacks are deleted D. Restricting the launch of EC2 instances into VPCs requires resource-level permissions in the IAM policy of the application group E. Nesting network stacks within application stacks simplifies management and debugging, but requires resource-level permissions in the IAM policy of the network group
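
Answer B rests on the fact that cloudformation:DeleteStack can be restricted by stack ARN. A hedged sketch of the network administrators' policy (the account ID and the "network-*" stack-naming convention are assumptions):

```python
import json

network_admin_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "cloudformation:CreateStack",
            "cloudformation:UpdateStack",
            "cloudformation:DeleteStack",
        ],
        # Resource-level permission: only stacks following the assumed
        # "network-*" naming convention can be touched by this group.
        "Resource": "arn:aws:cloudformation:*:123456789012:stack/network-*",
    }],
}
print(json.dumps(network_admin_policy, indent=2))
```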

C. Use the built-in function of AWS CloudFormation to set the AvailabilityZone attribute of the ELB resource. E. Use the built-in Mappings and FindInMap functions of AWS CloudFormation to refer to the AMI ID set in the ImageId attribute of the Auto Scaling::LaunchConfiguration resource.

A marketing research company has developed a tracking system that collects user behavior during web marketing campaigns on behalf of their customers all over the world. The tracking system consists of an auto-scaled group of Amazon Elastic Compute Cloud (EC2) instances behind an elastic load balancer (ELB), and the collected data is stored in Amazon DynamoDB. After the campaign is terminated, the tracking system is torn down and the data is moved to Amazon Redshift, where it is aggregated, analyzed and used to generate detailed reports. The company wants to be able to instantiate new tracking systems in any region without any manual intervention and therefore adopted AWS CloudFormation. What needs to be done to make sure that the AWS CloudFormation template works in every AWS region? Choose 2 answers A. IAM users with the right to start AWS CloudFormation stacks must be defined for every target region. B. The names of the Amazon DynamoDB tables must be different in every target region. C. Use the built-in function of AWS CloudFormation to set the AvailabilityZone attribute of the ELB resource. D. Avoid using DeletionPolicies for EBS snapshots. E. Use the built-in Mappings and FindInMap functions of AWS CloudFormation to refer to the AMI ID set in the ImageId attribute of the Auto Scaling::LaunchConfiguration resource.
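
The two correct answers translate into two template constructs: Fn::GetAZs for the ELB and a region-to-AMI mapping for the launch configuration. A fragment as a Python dict (the AMI IDs and resource names are placeholders):

```python
template_fragment = {
    "Mappings": {
        "RegionToAMI": {  # placeholder AMI IDs per region
            "us-east-1": {"AMI": "ami-11111111"},
            "eu-west-1": {"AMI": "ami-22222222"},
        }
    },
    "Resources": {
        "TrackerELB": {
            "Type": "AWS::ElasticLoadBalancing::LoadBalancer",
            "Properties": {
                # Resolves to the AZs of whatever region the stack runs in.
                "AvailabilityZones": {"Fn::GetAZs": ""},
                "Listeners": [{"LoadBalancerPort": "80",
                               "InstancePort": "80", "Protocol": "HTTP"}],
            },
        },
        "TrackerLaunchConfig": {
            "Type": "AWS::AutoScaling::LaunchConfiguration",
            "Properties": {
                # Looks up the region-specific AMI at stack-creation time.
                "ImageId": {"Fn::FindInMap":
                            ["RegionToAMI", {"Ref": "AWS::Region"}, "AMI"]},
                "InstanceType": "m3.medium",
            },
        },
    },
}
```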

C. Configure a list of trusted signers, let the authentication backend count the number of download requests per customer in RDS, and return a dynamically signed URL unless the download limit is reached. D. Enable URL parameter forwarding, let the authentication backend count the number of downloads per customer in RDS, and return the content S3 URL unless the download limit is reached.

A media production company wants to deliver high-definition raw video material for preproduction and dubbing to customers all around the world. They would like to use Amazon CloudFront for their scenario, and they require the ability to limit downloads per customer and video file to a configurable number. A CloudFront download distribution with TTL = 0 was already set up to make sure all client HTTP requests hit an authentication backend on Amazon Elastic Compute Cloud (EC2)/Amazon Relational Database Service (RDS) first, which is responsible for restricting the number of downloads. Content is stored in Amazon Simple Storage Service (S3) and configured to be accessible only via CloudFront. What else needs to be done to achieve an architecture that meets the requirements? Choose 2 answers A. Enable URL parameter forwarding, let the authentication backend count the number of downloads per customer in Amazon RDS, and invalidate the CloudFront distribution as soon as the download limit is reached. B. Enable CloudFront logging into an Amazon S3 bucket, let the authentication backend determine the number of downloads per customer by parsing those logs, and return the content S3 URL unless the download limit is reached. C. Configure a list of trusted signers, let the authentication backend count the number of download requests per customer in Amazon RDS, and return a dynamically signed URL unless the download limit is reached. D. Enable URL parameter forwarding, let the authentication backend count the number of downloads per customer in Amazon RDS, and return the content S3 URL unless the download limit is reached. E. Enable CloudFront logging into an Amazon S3 bucket, leverage Amazon Elastic MapReduce (EMR) to analyze CloudFront logs to determine the number of downloads per customer, and return the content S3 URL unless the download limit is reached.
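
With trusted signers configured on the distribution, the authentication backend mints a short-lived signed URL per approved request. A sketch using botocore's CloudFrontSigner (the key-pair ID, key file, and distribution domain are placeholders):

```python
from datetime import datetime, timedelta
from botocore.signers import CloudFrontSigner
import rsa  # pip install rsa

def rsa_signer(message):
    # Sign with the private key of the CloudFront trusted-signer key pair.
    with open('cloudfront-private-key.pem', 'rb') as f:
        key = rsa.PrivateKey.load_pkcs1(f.read())
    return rsa.sign(message, key, 'SHA-1')

signer = CloudFrontSigner('APKAEXAMPLEKEYID', rsa_signer)  # placeholder key-pair ID

# Issued only after the per-customer download counter in RDS is checked.
url = signer.generate_presigned_url(
    'https://d111111abcdef8.cloudfront.net/raw/video-0042.mxf',  # placeholder
    date_less_than=datetime.utcnow() + timedelta(minutes=10),
)
```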

A. Deploy an Amazon CloudFront distribution in front of the Amazon S3 tiles bucket. C. Decrease the size (width/height) of the individual tiles at the maximum zoom level. E. Use Amazon S3 Reduced Redundancy Storage for each zoom level.

A public archives organization is about to move a pilot application they are running on AWS into production. You have been hired to analyze their application architecture and give cost-saving recommendations. The application displays scanned historical documents. Each document is split into individual image tiles at multiple zoom levels to improve responsiveness and ease of use for the end users. At maximum zoom level the average document will be 8000x6000 pixels in size, split into multiple 40px x 40px image tiles. The tiles are batch processed by Amazon Elastic Compute Cloud (EC2) instances and put into an Amazon Simple Storage Service (S3) bucket. A browser-based JavaScript viewer fetches tiles from the Amazon S3 bucket and displays them to users as they zoom and pan around each document. The average storage size of all zoom levels for a document is approximately 30MB of JPEG tiles. Originals of each document are archived in Amazon Glacier. The company expects to process and host over 500,000 scanned documents in the first year. What are your recommendations? Choose 3 answers (A) Deploy an Amazon CloudFront distribution in front of the Amazon S3 tiles bucket. (B) Increase the size (width/height) of the individual tiles at the maximum zoom level. (C) Decrease the size (width/height) of the individual tiles at the maximum zoom level. (D) Store the maximum zoom level in the low-cost Amazon S3 Glacier option and only retrieve the most frequently accessed tiles as they are requested by users. (E) Use Amazon S3 Reduced Redundancy Storage for each zoom level.

A. Store ingest and output files in Amazon S3. Deploy on-demand for the master and core nodes and spot for the task nodes.

A research scientist is planning for the one-time launch of an Elastic MapReduce cluster and is encouraged by her manager to minimize costs. The cluster is designed to ingest 200TB of genomics data with a total of 100 Amazon Elastic Compute Cloud (EC2) instances and is expected to run for around four hours. The resulting data set must be stored temporarily until archived into an Amazon Relational Database Service (RDS) Oracle instance. Which option will help save the most money while meeting requirements? A. Store ingest and output files in Amazon S3. Deploy on-demand for the master and core nodes and spot for the task nodes. B. Optimize by deploying a combination of on-demand, RI, and spot-pricing models for the master, core, and task nodes. Store ingest and output files in Amazon S3 with a lifecycle policy that archives them to Amazon Glacier. C. Store the ingest files in Amazon S3 RRS and store the output files in S3. Deploy Reserved Instances for the master and core nodes and on-demand for the task nodes. D. Deploy on-demand master, core and task nodes and store ingest and output files in Amazon Simple Storage Service (S3) Reduced Redundancy Storage (RRS).
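
The cost logic: spot suits interruptible task nodes, on-demand protects the master and core (HDFS-bearing) nodes of a one-off four-hour job, and Reserved Instances never pay off for one-time work. A hedged sketch of the cluster launch (instance types, counts, release label, and bid price are illustrative):

```python
import boto3

emr = boto3.client('emr')
emr.run_job_flow(
    Name='genomics-one-time-run',
    ReleaseLabel='emr-5.36.0',          # illustrative release
    LogUri='s3://genomics-logs/emr/',   # placeholder bucket
    JobFlowRole='EMR_EC2_DefaultRole',
    ServiceRole='EMR_DefaultRole',
    Instances={
        'InstanceGroups': [
            {'InstanceRole': 'MASTER', 'Market': 'ON_DEMAND',
             'InstanceType': 'm5.xlarge', 'InstanceCount': 1},
            {'InstanceRole': 'CORE', 'Market': 'ON_DEMAND',
             'InstanceType': 'm5.xlarge', 'InstanceCount': 19},
            # Task nodes hold no HDFS data, so spot interruptions are safe.
            {'InstanceRole': 'TASK', 'Market': 'SPOT', 'BidPrice': '0.10',
             'InstanceType': 'm5.xlarge', 'InstanceCount': 80},
        ],
        'KeepJobFlowAliveWhenNoSteps': False,  # tear down when the job ends
    },
)
```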

C. Re-configure the load-testing software to re-resolve DNS for each web request. (Refer link) E. Use a third-party load-testing service which offers globally distributed test clients. (Refer link)

A startup deploys its photo-sharing site in a VPC. An elastic load balancer distributes web traffic across two subnets. The load balancer session stickiness is configured to use the AWS-generated session cookie, with a session TTL of 5 minutes. The web server Auto Scaling group is configured as min-size=4, max-size=4. The startup is preparing for a public launch, by running load-testing software installed on a single Amazon Elastic Compute Cloud (EC2) instance running in us-west-2a. After 60 minutes of load-testing, the web server logs show the following:

+---------------------------------------+----------------------+-------------------------+
| WEBSERVER LOGS                        | # of HTTP requests   | # of HTTP requests      |
|                                       | from load-tester     | from private beta users |
+---------------------------------------+----------------------+-------------------------+
| webserver #1 (subnet in us-west-2a):  | 19,210               | 434                     |
| webserver #2 (subnet in us-west-2a):  | 21,790               | 490                     |
| webserver #3 (subnet in us-west-2b):  | 0                    | 410                     |
| webserver #4 (subnet in us-west-2b):  | 0                    | 428                     |
+---------------------------------------+----------------------+-------------------------+

Which recommendations can help ensure that load-testing HTTP requests are evenly distributed across the four webservers? Choose 2 answers A. Launch and run the load-tester Amazon EC2 instance from us-east-1 instead. B. Configure Elastic Load Balancing session stickiness to use the app-specific session cookie. C. Re-configure the load-testing software to re-resolve DNS for each web request. D. Configure Elastic Load Balancing and Auto Scaling to distribute across us-west-2a and us-west-2b. E. Use a third-party load-testing service which offers globally distributed test clients.

D. One table for each week, with a primary key that is the sensor ID and a hash key that is the timestamp (Composite key with Sensor ID and timestamp would help for faster queries)

A utility company is building an application that stores data coming from more than 10,000 sensors. Each sensor has a unique ID and will send a datapoint (approximately 1KB) every 10 minutes throughout the day. Each datapoint contains the information coming from the sensor as well as a timestamp. This company would like to query information coming from a particular sensor for the past week very rapidly and would like to delete all data that is older than four weeks. Using Amazon DynamoDB for its scalability and rapidity, how would you implement this in the most cost-effective way? A) One table, with a primary key that is the sensor ID and a hash key that is the timestamp B) One table, with a primary key that is the concatenation of the sensor ID and timestamp C) One table for each week, with a primary key that is the concatenation of the sensor ID and timestamp D) One table for each week, with a primary key that is the sensor ID and a hash key that is the timestamp
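
Option D combines a composite key (fast per-sensor, per-time-range queries) with a table per week, so deleting four-week-old data is one cheap DeleteTable call instead of millions of item deletes. A sketch of creating one weekly table (the naming scheme and throughput figures are illustrative):

```python
import boto3

dynamodb = boto3.client('dynamodb')

# One table per week; dropping week N-4 later is a single DeleteTable call.
dynamodb.create_table(
    TableName='sensor-data-week-42',  # illustrative naming scheme
    AttributeDefinitions=[
        {'AttributeName': 'sensor_id', 'AttributeType': 'S'},
        {'AttributeName': 'ts', 'AttributeType': 'N'},
    ],
    KeySchema=[
        {'AttributeName': 'sensor_id', 'KeyType': 'HASH'},  # partition by sensor
        {'AttributeName': 'ts', 'KeyType': 'RANGE'},        # time-ordered per sensor
    ],
    ProvisionedThroughput={'ReadCapacityUnits': 100,
                           'WriteCapacityUnits': 1000},
)
```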

Create one AWS OpsWorks stack, create two AWS OpsWorks layers, create one custom recipe (single environment stack; two layers, one for the Java and one for the Node.js application, using built-in recipes; a custom recipe only for the DynamoDB connectivity configuration. Refer link)

A web-startup runs its very successful social news application on Amazon EC2 with an Elastic Load Balancer, an Auto Scaling group of Java/Tomcat application servers, and DynamoDB as data store. The main web application best runs on m2.xlarge instances since it is highly memory-bound. Each new deployment requires semi-automated creation and testing of a new AMI for the application servers, which takes quite a while and is therefore only done once per week. Recently, a new chat feature has been implemented in node.js and is waiting to be integrated into the architecture. First tests show that the new component is CPU-bound. Because the company has some experience with using Chef, they decided to streamline the deployment process and use AWS OpsWorks as an application lifecycle tool to simplify management of the application and reduce the deployment cycles. What configuration in AWS OpsWorks is necessary to integrate the new chat module in the most cost-efficient and flexible way?
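
In OpsWorks terms that is one stack, a Java app server layer, a Node.js app server layer, and a custom recipe attached for the DynamoDB connection setup. A hedged boto3 sketch (the role ARNs, names, and recipe path are all placeholders):

```python
import boto3

ops = boto3.client('opsworks', region_name='us-east-1')

stack_id = ops.create_stack(
    Name='social-news',  # placeholder
    Region='us-east-1',
    ServiceRoleArn='arn:aws:iam::123456789012:role/aws-opsworks-service-role',
    DefaultInstanceProfileArn='arn:aws:iam::123456789012:instance-profile/opsworks-ec2',
    UseCustomCookbooks=True,
)['StackId']

# Memory-bound web app and CPU-bound chat get separate layers,
# so each can run on an instance type that fits its profile.
ops.create_layer(StackId=stack_id, Type='java-app',
                 Name='Web App', Shortname='web')
ops.create_layer(StackId=stack_id, Type='nodejs-app',
                 Name='Chat', Shortname='chat',
                 CustomRecipes={'Setup': ['dynamodb::connect']})  # assumed recipe
```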

B. Configure the web application to authenticate end-users against the centralized access management system. Have the web application provision trusted users with STS tokens entitling the download of approved data directly from Amazon S3 (controlled access; admins cannot access the data as it requires authentication)

An AWS customer is deploying a web application that is composed of a front-end running on Amazon EC2 and confidential data that is stored on Amazon S3. The customer's security policy requires that all access operations to this sensitive data must be authenticated and authorized by a centralized access management system that is operated by a separate security team. In addition, the web application team that owns and administers the EC2 web front-end instances is prohibited from having any ability to access the data in a way that circumvents this centralized access management system. Which of the following configurations will support these requirements? A) Encrypt the data on Amazon S3 using a CloudHSM that is operated by the separate security team. Configure the web application to integrate with the CloudHSM for decrypting approved data access operations for trusted end-users. B) Configure the web application to authenticate end-users against the centralized access management system. Have the web application provision trusted users with STS tokens entitling the download of approved data directly from Amazon S3. C) Have the separate security team create an IAM role that is entitled to access the data on Amazon S3. Have the web application team provision their instances with this role while denying their IAM users access to the data on Amazon S3. D) Configure the web application to authenticate end-users against the centralized access management system using SAML. Have the end-users authenticate to IAM using their SAML token and download the approved data directly from Amazon S3.

B. Using VPC, they could create an extension to their data center and make use of resilient hardware IPsec tunnels; they could then have two domain controller instances that are joined to their existing domain and reside within different subnets in different availability zones.

An Enterprise customer is starting their migration to the cloud; their main reason for migrating is agility, and they want to make their internal Microsoft Active Directory available to any application running on AWS; this is so internal users only have to remember one set of credentials, and to provide a central point of user control for leavers and joiners. How could they make their Active Directory secure and highly available, with minimal on-premises infrastructure changes, in the most cost- and time-efficient way? A) Using Amazon EC2, they could create a DMZ using a security group; within the security group they could provision two smaller Amazon EC2 instances that are running Openswan for resilient IPsec tunnels, and two larger instances that are domain controllers; they would use multiple availability zones. B) Using VPC, they could create an extension to their data center and make use of resilient hardware IPsec tunnels; they could then have two domain controller instances that are joined to their existing domain and reside within different subnets in different availability zones. C) Within the customer's existing infrastructure, they could provision new hardware to run Active Directory Federation Services; this would present Active Directory as a SAML2 endpoint on the internet, and any new application on AWS could be written to authenticate using SAML2. D) The customer could create a standalone VPC with its own Active Directory domain controllers; two domain controller instances could be configured, one in each availability zone; new applications would authenticate with those domain controllers.

Create an Identity and Access Management Role that has the required permissions to read and write from the required DynamoDB table and reference the Role in the instance profile property of the application instance.

An administrator is using Amazon CloudFormation to deploy a three-tier web application that consists of a web tier and an application tier that will utilize Amazon DynamoDB for storage. When creating the CloudFormation template, which of the following would allow the application instance access to the DynamoDB tables without exposing API credentials? [PROFESSIONAL]

Configure the table to have a range index on the name attribute, and a hash index on the office identifier

An application stores payroll information nightly in DynamoDB for a large number of employees across hundreds of offices. Item attributes consist of individual name, office identifier, and cumulative daily hours. Managers run reports for ranges of names working in their office. One query is: "Return all items in this office for names starting with A through E". Which table configuration will result in the lowest impact on provisioned throughput for this query? [PROFESSIONAL]
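
Hashing on the office identifier and ranging on the name lets the report become a single Query instead of a table Scan. A sketch of the manager's query (the table and attribute names are assumptions):

```python
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource('dynamodb').Table('payroll-daily')  # assumed table name

# Single partition (the office), range condition on the name sort key.
# between('A', 'F') is inclusive, so it covers names starting A through E.
resp = table.query(
    KeyConditionExpression=Key('office_id').eq('office-042')
                           & Key('employee_name').between('A', 'F')
)
for item in resp['Items']:
    print(item['employee_name'], item['cumulative_hours'])
```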

Create an IAM role for cross-account access that allows the SaaS provider's account to assume the role, and assign it a policy that allows only the actions required by the SaaS application.

An enterprise wants to use a third-party SaaS application. The SaaS application needs to have access to issue several API commands to discover Amazon EC2 resources running within the enterprise's account. The enterprise has internal security policies that require any outside access to their environment must conform to the principles of least privilege and there must be controls in place to ensure that the credentials used by the SaaS vendor cannot be used by any other third party. Which of the following would meet all of these conditions? [PROFESSIONAL]
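
The usual way to stop a SaaS vendor's credentials from being replayed by another party is an ExternalId condition on the cross-account role's trust policy. A sketch (the vendor account ID, ExternalId, and role name are placeholders):

```python
import json
import boto3

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::999999999999:root"},  # SaaS account
        "Action": "sts:AssumeRole",
        # ExternalId guards against the "confused deputy" problem.
        "Condition": {"StringEquals": {"sts:ExternalId": "vendor-supplied-id"}},
    }],
}

iam = boto3.client('iam')
iam.create_role(RoleName='SaaSDiscovery',
                AssumeRolePolicyDocument=json.dumps(trust_policy))
# Least privilege: only the EC2 describe calls the SaaS application needs.
iam.put_role_policy(
    RoleName='SaaSDiscovery', PolicyName='ec2-discovery-readonly',
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Allow",
                       "Action": "ec2:Describe*", "Resource": "*"}],
    }),
)
```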

B. Launch two Windows Server 2008 R2 instances in us-west-1b and two in us-west-1a. Copy the web files from on premises web server to each Amazon EC2 web server, using Amazon S3 as the repository. Launch a multi-AZ MySQL Amazon RDS instance in us-west-2a. Import the data into Amazon RDS from the latest MySQL backup. Create an elastic load balancer to front your web servers. Use Route 53 and create an alias record pointing to the elastic load balancer. (Although RDS instance is in a different region which will impact performance, this is the only option that works.)

Company A has hired you to assist with the migration of an interactive website that allows registered users to rate local restaurants. Updates to the ratings are displayed on the home page, and ratings are updated in real time. Although the website is not very popular today, the company anticipates that it will grow rapidly over the next few weeks. They want the site to be highly available. The current architecture consists of a single Windows Server 2008 R2 web server and a MySQL database running on Linux. Both reside inside an on-premises hypervisor. What would be the most efficient way to transfer the application to AWS, ensuring performance and high availability? A. Export web files to an Amazon S3 bucket in us-west-1. Run the website directly out of Amazon S3. Launch a multi-AZ MySQL Amazon RDS instance in us-west-1a. Import the data into Amazon RDS from the latest MySQL backup. Use Route 53 and create an alias record pointing to the elastic load balancer. B. Launch two Windows Server 2008 R2 instances in us-west-1b and two in us-west-1a. Copy the web files from the on-premises web server to each Amazon EC2 web server, using Amazon S3 as the repository. Launch a multi-AZ MySQL Amazon RDS instance in us-west-2a. Import the data into Amazon RDS from the latest MySQL backup. Create an elastic load balancer to front your web servers. Use Route 53 and create an alias record pointing to the elastic load balancer. C. Use AWS VM Import/Export to create an Amazon Elastic Compute Cloud (EC2) Amazon Machine Image (AMI) of the web server. Configure Auto Scaling to launch two web servers in us-west-1a and two in us-west-1b. Launch a multi-AZ MySQL Amazon Relational Database Service (RDS) instance in us-west-1b. Import the data into Amazon RDS from the latest MySQL backup. Use Amazon Route 53 to create a hosted zone and point an A record to the elastic load balancer. D. Use AWS VM Import/Export to create an Amazon EC2 AMI of the web server. Configure Auto Scaling to launch two web servers in us-west-1a and two in us-west-1b. Launch a multi-AZ MySQL Amazon RDS instance in us-west-1a. Import the data into Amazon RDS from the latest MySQL backup. Create an elastic load balancer to front your web servers. Use Amazon Route 53 and create an A record pointing to the elastic load balancer.

A. A web tier deployed across 2 AZs with 6 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling Group behind an ELB (elastic load balancer), and an application tier deployed across 2 AZs with 6 EC2 instances in each AZ inside an Auto Scaling Group behind an ELB. and a Multi-AZ RDS (Relational Database Service) deployment. (As it needs Fault Tolerance with minimal 6 servers always available)

For a 3-tier, customer facing, inclement weather site utilizing a MySQL database running in a Region which has two AZs (Availability Zones), which architecture provides fault tolerance within the Region for the application that minimally requires 6 web tier servers and 6 application tier servers running in the web and application tiers, and one MySQL database? A) A web tier deployed across 2 AZs with 6 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling Group behind an ELB (elastic load balancer), and an application tier deployed across 2 AZs with 6 EC2 instances in each AZ inside an Auto Scaling Group behind an ELB, and a Multi-AZ RDS (Relational Database Service) deployment. B) A web tier deployed across 2 AZs with 3 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling Group behind an ELB (elastic load balancer), and an application tier deployed across 2 AZs with 3 EC2 instances in each AZ inside an Auto Scaling Group behind an ELB, and a Multi-AZ RDS (Relational Database Service) deployment. C) A web tier deployed across 2 AZs with 3 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling Group behind an ELB (elastic load balancer), and an application tier deployed across 2 AZs with 6 EC2 instances in each AZ inside an Auto Scaling Group behind an ELB, and one RDS (Relational Database Service) instance deployed with read replicas in the other AZ. D) A web tier deployed in 1 AZ with 6 EC2 (Elastic Compute Cloud) instances inside an Auto Scaling Group behind an ELB (elastic load balancer), and an application tier deployed in the same AZ with 6 EC2 instances inside an Auto Scaling Group behind an ELB, and a Multi-AZ RDS (Relational Database Service) deployment, with 6 stopped web tier EC2 instances and 6 stopped application tier EC2 instances all in the other AZ ready to be started if any of the running instances in the first AZ fails.

C. Use 3rd-party CA certificate in the origin and CloudFront default certificate in CloudFront D. Use 3rd-party CA certificate in both origin and CloudFront

To enable end-to-end HTTPS connections from the user's browser to the origin via CloudFront, which of the following options would be valid? Choose 2 answers A) Use a self-signed certificate in the origin and the CloudFront default certificate in CloudFront. B) Use the CloudFront default certificate in both origin and CloudFront. C) Use a 3rd-party CA certificate in the origin and the CloudFront default certificate in CloudFront. D) Use a 3rd-party CA certificate in both origin and CloudFront. E) Use a self-signed certificate in both the origin and CloudFront.

D. First, compress and then concatenate all files for a completed drug trial test into a single Amazon Glacier archive. Store the associated byte ranges for the compressed files along with other search metadata in an Amazon RDS database with regular snapshotting. When restoring data, query the database for files that match the search criteria, and create restored files from the retrieved byte ranges.

To meet regulatory requirements, a pharmaceuticals company needs to archive data after a drug trial test is concluded. Each drug trial test may generate up to several thousands of files, with compressed file sizes ranging from 1 byte to 100MB. Once archived, data rarely needs to be restored, and on the rare occasion when restoration is needed, the company has 24 hours to restore specific files that match certain metadata. Searches must be possible by numeric file ID, drug name, participant names, date ranges, and other metadata. Which is the most cost-effective architectural approach that can meet the requirements? A. Store individual files in Amazon Glacier, using the file ID as the archive name. When restoring data, query the Amazon Glacier vault for files matching the search criteria. B. Store individual files in Amazon S3, and store search metadata in an Amazon Relational Database Service (RDS) multi-AZ database. Create a lifecycle rule to move the data to Amazon Glacier after a certain number of days. When restoring data, query the Amazon RDS database for files matching the search criteria, and move the files matching the search criteria back to the S3 Standard class. C. Store individual files in Amazon Glacier, and store the search metadata in an Amazon RDS multi-AZ database. When restoring data, query the Amazon RDS database for files matching the search criteria, and retrieve the archive name that matches the file ID returned from the database query. D. First, compress and then concatenate all files for a completed drug trial test into a single Amazon Glacier archive. Store the associated byte ranges for the compressed files along with other search metadata in an Amazon RDS database with regular snapshotting. When restoring data, query the database for files that match the search criteria, and create restored files from the retrieved byte ranges. E. Store individual compressed files and search metadata in Amazon Simple Storage Service (S3). Create a lifecycle rule to move the data to Amazon Glacier after a certain number of days. When restoring data, query the Amazon S3 bucket for files matching the search criteria, and retrieve the file to S3 Reduced Redundancy in order to move it back to the S3 Standard class.
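
Glacier retrievals can target a byte range inside an archive, which is what makes answer D workable: the RDS metadata maps each file to its offsets within the concatenated archive. A sketch of the restore step (the vault name, archive ID, and offsets are placeholders; ranges must be megabyte-aligned):

```python
import boto3

glacier = boto3.client('glacier')

# Offsets come from the RDS lookup for the matching file.
start, end = 1048576, 5242879  # placeholder, megabyte-aligned byte range

job = glacier.initiate_job(
    vaultName='drug-trial-archives',        # placeholder vault
    jobParameters={
        'Type': 'archive-retrieval',
        'ArchiveId': 'EXAMPLE-ARCHIVE-ID',  # placeholder
        # Retrieve only the range covering the requested file.
        'RetrievalByteRange': '%d-%d' % (start, end),
    },
)
# Poll for completion (hours-scale), then fetch the bytes with:
# glacier.get_job_output(vaultName='drug-trial-archives', jobId=job['jobId'])
```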

A. 2,4,5 and 6

When deploying a highly available 2-tier web application on AWS, which combination of AWS services meets the requirements? 1. AWS Direct Connect 2. Amazon Route 53 3. AWS Storage Gateway 4. Elastic Load Balancing 5. Amazon EC2 6. Auto Scaling 7. Amazon VPC 8. AWS CloudTrail A) 2,4,5 and 6 B) 3,4,5 and 8 C) 1 thru 8 D) 1,3,5 and 7 E) 1,2,5 and 6

- RDS - Amazon Redshift

With which AWS services can CloudHSM be used? (Select 2)

D. Use an Amazon CloudFront distribution for uploading the content to a central Amazon Simple Storage Service (S3) bucket and for content delivery.

You are an architect for a news-sharing mobile application. Anywhere in the world, your users can see local news on topics they choose. They can post pictures and videos from inside the application. Since the application is being used on a mobile phone, connection stability is required for uploading content, and delivery should be quick. Content is accessed a lot in the first minutes after it has been posted, but is quickly replaced by new content before disappearing. The local nature of the news means that 90 percent of the uploaded content is then read locally (less than a hundred kilometers from where it was posted). What solution will optimize the user experience when users upload and view content (by minimizing page load times and minimizing upload times)? A. Upload and store the content in a central Amazon Simple Storage Service (S3) bucket, and use an Amazon CloudFront distribution for content delivery. B. Upload and store the content in an Amazon Simple Storage Service (S3) bucket in the region closest to the user, and use multiple Amazon CloudFront distributions for content delivery. C. Upload the content to an Amazon Elastic Compute Cloud (EC2) instance in the region closest to the user, send the content to a central Amazon Simple Storage Service (S3) bucket, and use an Amazon CloudFront distribution for content delivery. D. Use an Amazon CloudFront distribution for uploading the content to a central Amazon Simple Storage Service (S3) bucket and for content delivery.

B. Place all your web servers behind ELB. Configure a Route53 CNAME to point to the ELB DNS name. E. Assign EIPs to all web servers. Configure a Route53 record set with all EIPs, with health checks and DNS failover.

You are designing Internet connectivity for your VPC. The Web servers must be available on the Internet. The application must have a highly available architecture. Which alternatives should you consider? Choose 2 answers A. Configure a NAT instance in your VPC. Create a default route via the NAT instance and associate it with all subnets. Configure a DNS A record that points to the NAT Instance public IP address. B. Place all your Web servers behind ELB. Configure a Route53 CNAME to point to the ELB DNS name. C. Configure a CloudFront distribution and configure the origin to point to the private IP addresses of your Web servers. Configure a Route53 CNAME record to your CloudFront distribution. D. Configure ELB with an EIP. Place all your Web servers behind ELB. Configure a Route53 A record that points to the EIP. E. Assign EIPs to all Web servers. Configure a Route53 record set with all EIPs, with health checks and DNS failover.

- Data encryption across the Internet - Protection of data in transit over the Internet - Peer identity authentication between VPN gateway and customer gateway - Data integrity protection across the Internet

You are designing a connectivity solution between on-premises infrastructure and Amazon VPC. Your servers on-premises will be communicating with your VPC instances. You will be establishing IPsec tunnels over the Internet. You will be using VPN gateways and terminating the IPsec tunnels on AWS-supported customer gateways. Which of the following objectives would you achieve by implementing an IPsec tunnel as outlined above? (Choose 4 answers)

B. Store all files in Amazon S3. Create Amazon DynamoDB tables for the corresponding key-value pairs on the associated metadata, when objects are uploaded.

You are designing a file-sharing service. This service will have millions of files in it. Revenue for the service will come from fees based on how much storage a user is using. You also want to store metadata on each file, such as title, description and whether the object is public or private. How do you achieve all of these goals in a way that is economical and can scale to millions of users? A. Store all files in Amazon Simple Storage Service (S3). Create a bucket for each user. Store metadata in the filename of each object, and access it with LIST commands against the S3 API. B. Store all files in Amazon S3. Create Amazon DynamoDB tables for the corresponding key-value pairs on the associated metadata, when objects are uploaded. C. Create a striped set of 4000 IOPS Amazon Elastic Block Store (EBS) volumes to store the data. Use a database running in Amazon Relational Database Service (RDS) to store the metadata. D. Create a striped set of 4000 IOPS Amazon Elastic Block Store (EBS) volumes to store the data. Create Amazon DynamoDB tables for the corresponding key-value pairs on the associated metadata, when objects are uploaded.

Record the user's information in Amazon RDS and create a role in IAM with appropriate permissions. When the user uses their mobile app, create temporary credentials using the AWS Security Token Service 'AssumeRole' function. Store these credentials in the mobile app's memory and use them to access Amazon S3. Generate new credentials the next time the user runs the mobile app.

You are designing a photo-sharing mobile app. The application will store all pictures in a single Amazon S3 bucket. Users will upload pictures from their mobile device directly to Amazon S3 and will be able to view and download their own pictures directly from Amazon S3. You want to configure security to handle potentially millions of users in the most secure manner possible. What should your server-side application do when a new user registers on the photo-sharing mobile application? [PROFESSIONAL]
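
One way to scope the shared role per user is a session policy that narrows it to the user's own key prefix at AssumeRole time (the bucket name, role ARN, and prefix layout below are assumptions):

```python
import json
import boto3

def credentials_for_user(user_id):
    # Session policy narrows the shared role to this user's prefix only.
    scope = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::photo-app-media/%s/*" % user_id,
        }],
    }
    creds = boto3.client('sts').assume_role(
        RoleArn='arn:aws:iam::123456789012:role/photo-app-user',  # placeholder
        RoleSessionName=user_id,
        Policy=json.dumps(scope),
        DurationSeconds=3600,
    )['Credentials']
    # Returned to the device; held in memory, regenerated on next app start.
    return creds
```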

2 servers in each of AZs a through e, inclusive. (You need to design for N+1 redundancy on Availability Zones. ZONE_COUNT = (REQUIRED_INSTANCES / INSTANCE_COUNT_PER_ZONE) + 1. To minimize cost, spread the instances across as many zones as you can. By using a through e, you are allocating 5 zones. Using 2 instances per zone, you have 10 total instances. If a single zone fails, you have 4 zones left, with 2 instances each, for a total of 8 instances. By spreading out as much as possible, you have increased cost by only 25% and significantly de-risked an availability zone failure. Refer link)

You are designing a system which needs, at minimum, 8 m4.large instances operating to service traffic. When designing a system for high availability in the us-east-1 region, which has 6 Availability Zones, your company needs to be able to handle the death of a full availability zone. How should you distribute the servers, to save as much cost as possible, assuming all of the EC2 nodes are properly linked to an ELB? Your VPC account can utilize us-east-1's AZs a through f, inclusive.
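
The arithmetic from the answer above, spelled out as a quick sanity check (not production code):

```python
import math

required = 8  # minimum m4.large instances that must stay in service
zones = 5     # spread across AZs a through e

# For N+1 redundancy, the surviving zones alone must cover the requirement.
per_zone = math.ceil(required / (zones - 1))  # 8 / 4 -> 2 per zone
total = per_zone * zones                      # 10 instances running

assert (zones - 1) * per_zone >= required     # lose any one AZ, still >= 8
print(per_zone, total)                        # 2 10
```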

Configure an SSL VPN solution in a public subnet of your VPC, then install and configure SSL VPN client software on all user computers. Create a private subnet in your VPC and place your application servers in it. (Cost effective and can be in private subnet as well)

You are designing network connectivity for your fat client application. The application is designed for business travelers who must be able to connect to it from their hotel rooms, cafes, public Wi-Fi hotspots, and elsewhere on the Internet. You do not want to publish the application on the Internet. Which network design meets the above requirements while minimizing deployment and operational costs? [PROFESSIONAL]

B. Configure your instances to use pre-set IP addresses, with an IP address range for every security zone. Configure NACLs to explicitly allow or deny communication between the different IP address ranges, as required for interzone communication. C. Configure a security group for every zone. Configure allow rules only between zones that need to be able to communicate with one another. Use the implicit deny-all rule to block any other traffic.

You are designing security inside your VPC. You are considering the options for establishing separate security zones, and enforcing network traffic rules across the different zones to limit which instances can communicate. How would you accomplish these requirements? Choose 2 answers A. Configure a security group for every zone. Configure a default allow-all rule. Configure explicit deny rules for the zones that shouldn't be able to communicate with one another. B. Configure your instances to use pre-set IP addresses, with an IP address range for every security zone. Configure NACLs to explicitly allow or deny communication between the different IP address ranges, as required for interzone communication. C. Configure a security group for every zone. Configure allow rules only between zones that need to be able to communicate with one another. Use the implicit deny-all rule to block any other traffic. D. Configure multiple subnets in your VPC, one for each zone. Configure routing within your VPC in such a way that each subnet only has routes to other subnets with which it needs to communicate, and doesn't have routes to subnets with which it shouldn't be able to communicate.

Create IAM users in the Master account. Create cross-account roles in the Dev and Test accounts that have full Admin permissions and grant the Master account access.

You are looking to migrate your Development (Dev) and Test environments to AWS. You have decided to use separate AWS accounts to host each environment. You plan to link each account's bill to a Master AWS account using Consolidated Billing. To make sure you keep within budget, you would like to implement a way for administrators in the Master account to have access to stop, delete and/or terminate resources in both the Dev and Test accounts. Identify which option will allow you to achieve this goal. [PROFESSIONAL]

A. Migrate the local database into a multi-AZ RDS database. Place the master node into a multi-AZ Auto Scaling group with a minimum of one and maximum of one, with health checks.

You are moving an existing traditional system to AWS, and during the migration discover that there is a master server which is a single point of failure. Having examined the implementation of the master server you realise there is not enough time during migration to re-engineer it to be highly available, though you do discover that it stores its state in a local MySQL database. In order to minimize downtime you select RDS to replace the local database and configure the master to use it. What steps would best allow you to create a self-healing architecture? A. Migrate the local database into a multi-AZ RDS database. Place the master node into a multi-AZ Auto Scaling group with a minimum of one and maximum of one, with health checks. B. Replicate the local database into an RDS read replica. Place the master node into a Cross-Zone ELB with a minimum of one and maximum of one, with health checks. C. Migrate the local database into a multi-AZ RDS database. Place the master node into a Cross-Zone ELB with a minimum of one and maximum of one, with health checks. D. Replicate the local database into an RDS read replica. Place the master node into a multi-AZ Auto Scaling group with a minimum of one and maximum of one, with health checks.

4. You have locked port 22 down to your specific IP address therefore users cannot access your site using HTTP/HTTPS

You are putting together a WordPress site for a local charity and you are using a combination of Route53, Elastic Load Balancers, EC2 & RDS. You launch your EC2 instance, download WordPress and set up the configuration file's connection string so that it can communicate with RDS. When you browse to your URL, however, nothing happens. Which of the following could NOT be the cause of this? 1) You have forgotten to open port 80/443 on the security group in which the EC2 instance is placed. 2) Your elastic load balancer has a health check which is checking a webpage that does not exist; therefore your EC2 instance is not in service. 3) You have not configured an ALIAS for your A record to point to your elastic load balancer. 4) You have locked port 22 down to your specific IP address, therefore users cannot access your site using HTTP/HTTPS.

Create a new stack within OpsWorks, add the appropriate layers to the stack, and deploy the application.

You are tasked with the migration of a highly trafficked node.js application to AWS. In order to comply with organizational standards Chef recipes must be used to configure the application servers that host this application and to support application lifecycle events. Which deployment option meets these requirements while minimizing administrative burden?

AWS OpsWorks

You are working with a customer who is using Chef configuration management in their data center. Which service is designed to let the customer leverage existing Chef recipes in AWS?

C. Create an IAM role for EC2 that allows list access to objects in the S3 bucket. Launch the instance with the role, and retrieve the role's credentials from the EC2 Instance metadata

You have an application running on an EC2 instance which will allow users to download files from a private S3 bucket using a pre-signed URL. Before generating the URL, the application should verify the existence of the file in S3. How should the application use AWS credentials to access the S3 bucket securely? A) Use the AWS account access keys; the application retrieves the credentials from the source code of the application. B) Create an IAM user for the application with permissions that allow list access to the S3 bucket; launch the instance as the IAM user and retrieve the IAM user's credentials from the EC2 instance user data. C) Create an IAM role for EC2 that allows list access to objects in the S3 bucket. Launch the instance with the role, and retrieve the role's credentials from the EC2 instance metadata. D) Create an IAM user for the application with permissions that allow list access to the S3 bucket. The application retrieves the IAM user credentials from a temporary directory with permissions that allow read access only to the application user.
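
With the role attached, the SDK picks up rotating credentials from instance metadata automatically; no keys live in code or on disk. A sketch of the verify-then-sign flow (the bucket and key are placeholders):

```python
import boto3
from botocore.exceptions import ClientError

# No explicit keys: boto3 reads the role's temporary credentials
# from the EC2 instance metadata service.
s3 = boto3.client('s3')

def presigned_download_url(bucket, key, expires=300):
    try:
        s3.head_object(Bucket=bucket, Key=key)  # verify the file exists first
    except ClientError:
        return None                             # missing or not permitted
    return s3.generate_presigned_url(
        'get_object', Params={'Bucket': bucket, 'Key': key},
        ExpiresIn=expires,
    )

url = presigned_download_url('private-downloads-bucket', 'reports/q3.pdf')
```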

C. Provision a VPN connection between a VPC and existing on-premises equipment, submit a DirectConnect partner request to provision cross connects between your data center and the DirectConnect location, then cut over from the VPN connection to one or more DirectConnect connections as needed.

You have been asked to design network connectivity between your existing data centers and AWS. Your application's EC2 instances must be able to connect to existing backend resources located in your data center. Network traffic between AWS and your data centers will start small, but ramp up to tens of GB per second over the course of several months. The success of your application is dependent upon getting to market quickly. Which of the following design options will allow you to meet your objectives? A. Quickly create an internal ELB for your backend applications, submit a DirectConnect request to provision a 1 Gbps cross connect between your data center and VPC, then increase the number or size of your DirectConnect connections as needed. B. Allocate EIPs and an Internet Gateway for your VPC instances to use for quick, temporary access to your backend applications, then provision a VPN connection between a VPC and existing on-premises equipment. C. Provision a VPN connection between a VPC and existing on-premises equipment, submit a DirectConnect partner request to provision cross connects between your data center and the DirectConnect location, then cut over from the VPN connection to one or more DirectConnect connections as needed. D. Quickly submit a DirectConnect request to provision a 1 Gbps cross connect between your data center and VPC, then increase the number or size of your DirectConnect connections as needed.

Develop models of your entire cloud system in CloudFormation. Use this model in Staging and Production to achieve greater parity. (Only CloudFormation's JSON Templates allow declarative version control of repeatedly deployable models of entire AWS clouds. Refer link)

You have been asked to de-risk deployments at your company. Specifically, the CEO is concerned about outages that occur because of accidental inconsistencies between Staging and Production, which sometimes cause unexpected behaviors in Production even when Staging tests pass. You already use Docker to get high consistency between Staging and Production for the application environment on your EC2 instances. How do you further de-risk the rest of the execution environment, since in AWS, there are many service components you may use beyond EC2 virtual machines? [PROFESSIONAL]

B. Add another CGW in a different data center and create another dual-tunnel VPN connection. (Refer link)

You have been asked to virtually extend two existing data centers into AWS to support a highly available application that depends on existing, on-premises resources located in multiple data centers and static content that is served from an Amazon Simple Storage Service (S3) bucket. Your design currently includes a dual-tunnel VPN connection between your CGW and VGW. Which component of your architecture represents a potential single point of failure that you should consider changing to make the solution more highly available? A. Add another VGW in a different Availability Zone and create another dual-tunnel VPN connection. B. Add another CGW in a different data center and create another dual-tunnel VPN connection. C. Add a second VGW in a different Availability Zone, and a CGW in a different data center, and create another dual-tunnel VPN connection. D. No changes are necessary: the network architecture is currently highly available.

Ingest data into a DynamoDB table and move old data to a Redshift cluster (Handle 10K IOPS ingestion and store data into Redshift for analysis)

You have recently joined a startup company building sensors to measure street noise and air quality in urban areas. The company has been running a pilot deployment of around 100 sensors for 3 months. Each sensor uploads 1KB of sensor data every minute to a backend hosted on AWS. During the pilot, you measured a peak of 10 IOPS on the database, and you stored an average of 3GB of sensor data per month in the database. The current deployment consists of a load-balanced, auto scaled ingestion layer using EC2 instances and a PostgreSQL RDS database with 500GB standard storage. The pilot is considered a success and your CEO has managed to get the attention of some potential investors. The business plan requires a deployment of at least 100K sensors, which needs to be supported by the backend. You also need to store sensor data for at least two years to be able to compare year over year improvements. To secure funding, you have to make sure that the platform meets these requirements and leaves room for further scaling. Which setup will meet the requirements? [PROFESSIONAL]
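
For scale context: 100K sensors each writing 1KB per minute is roughly 1,700 writes per second, three orders of magnitude beyond the pilot's 10 IOPS, which is why the relational setup cannot keep up. A minimal boto3 sketch of the DynamoDB ingestion path, with a hypothetical table name and key schema:

import time
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("SensorData")  # table name and key schema are assumptions

def record_reading(sensor_id, payload):
    # Each item is ~1KB, so 100K sensors consume roughly one write capacity
    # unit per sensor per minute.
    table.put_item(
        Item={
            "sensor_id": sensor_id,          # partition key (assumed)
            "ts": int(time.time() * 1000),   # sort key (assumed)
            "payload": payload,
        }
    )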

Create a CloudFormation Custom Resource Type by implementing create, update, and delete functionality, either by subscribing a Custom Resource Provider to an SNS topic, or by implementing the logic in AWS Lambda. (Refer link)

You need to deploy an AWS stack in a repeatable manner across multiple environments. You have selected CloudFormation as the right tool to accomplish this, but have found that there is a resource type you need to create and model that is unsupported by CloudFormation. How should you overcome this challenge? [PROFESSIONAL]
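
A minimal sketch of the Lambda-backed variant, using the cfnresponse helper module that AWS makes available to inline-code Lambda functions; the physical resource ID logic is a placeholder for calls to the unsupported resource's real API.

import cfnresponse  # helper AWS injects for inline-code Lambda functions

def handler(event, context):
    try:
        request = event["RequestType"]  # "Create", "Update", or "Delete"
        if request == "Create":
            # In a real handler this ID would come from the unsupported API.
            physical_id = "my-custom-resource-001"
        else:
            physical_id = event["PhysicalResourceId"]
        # ... call the unsupported resource's API for the requested action ...
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {}, physical_id)
    except Exception:
        cfnresponse.send(event, context, cfnresponse.FAILED, {})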

A. Set up DynamoDB cross-region replication in a master-standby configuration, with a single standby in another region. Create an Auto Scaling Group behind an ELB in each of the two regions DynamoDB is running in. Add a Route53 Latency DNS Record with DNS Failover, using the ELBs in the two regions as the resource records. (Use DynamoDB cross-regional replication version with two ELBs and ASGs with Route53 Failover and Latency DNS. Refer link)

You need your API backed by DynamoDB to stay online during a total regional AWS failure. You can tolerate a couple minutes of lag or slowness during a large failure event, but the system should recover with normal operation after those few minutes. What is a good approach? a. Set up DynamoDB cross-region replication in a master-standby configuration, with a single standby in another region. Create an Auto Scaling Group behind an ELB in each of the two regions DynamoDB is running in. Add a Route53 Latency DNS Record with DNS Failover, using the ELBs in the two regions as the resource records. b. Set up a DynamoDB global table. Create an Auto Scaling Group behind an ELB in each of the two regions DynamoDB is running in. Add a Route53 Latency DNS Record with DNS Failover, using the ELBs in the two regions as the resource records. c. Set up a DynamoDB global table. Create a cross-region ELB pointing to a cross-region Auto Scaling Group, and direct a Route53 Latency DNS Record with DNS Failover to the cross-region ELB. d. Set up DynamoDB cross-region replication in a master-standby configuration, with a single standby in another region. Create a cross-region ELB pointing to a cross-region Auto Scaling Group, and direct a Route53 Latency DNS Record with DNS Failover to the cross-region ELB.
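
As a sketch of the DNS piece of the chosen answer, the following boto3 snippet upserts one latency-based alias record per region with target-health evaluation, which is how Route53 stops answering with a failed region's ELB; the hosted zone ID, ELB names, and ELB zone IDs are hypothetical placeholders.

import boto3

r53 = boto3.client("route53")

changes = []
for region, elb_dns, elb_zone in [
    ("us-east-1", "api-east.elb.amazonaws.com", "Z35SXDOTRQ7X7K"),
    ("eu-west-1", "api-west.elb.amazonaws.com", "Z32O12XQLNTSW2"),
]:
    changes.append({
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "api.example.com",
            "Type": "A",
            "SetIdentifier": region,   # one latency record per region
            "Region": region,
            "AliasTarget": {
                "HostedZoneId": elb_zone,
                "DNSName": elb_dns,
                # Fail over away from a region whose ELB is unhealthy.
                "EvaluateTargetHealth": True,
            },
        },
    })

r53.change_resource_record_sets(
    HostedZoneId="Z123EXAMPLE", ChangeBatch={"Changes": changes}
)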

Purchase a Heavy Utilization Reserved Instance to run the accounting software. Turn it off after hours. Run the batch jobs with the same instance class, so the Reserved Instance credits are also applied to the batch jobs. (Because the instance will always be online during the day, in a predictable manner, and there are sequences of batch jobs to perform at any time, we should run the batch jobs when the accounting software is off. We can achieve Heavy Utilization by alternating these times, so we should purchase the reservation as such, as this represents the lowest cost. There is no such thing as a "Full" utilization level purchase on EC2.)

You run accounting software in the AWS cloud. This software needs to be online continuously during the day every day of the week, and has a very static requirement for compute resources. You also have other, unrelated batch jobs that need to run once per day at any time of your choosing. How should you minimize cost? [PROFESSIONAL]

B. Use AWS OpsWorks auto healing for both the front-end and back-end instance pair. C. Use Elastic Load Balancing in front of the front-end subsystem and Auto Scaling to keep the specified number of instances. D. Use Elastic Load Balancing in front of the back-end subsystem and Auto Scaling to keep the specified number of instances.

You are integrating two subsystems (front-end and back-end) with an HTTP interface into one large system. These subsystems don't store any state inside; all state is stored in an Amazon DynamoDB table. You have launched each of these two subsystems from a separate AMI. Black box testing has shown that these servers stop running and issue malformed requests that do not meet HTTP specifications. Your developers have discovered and fixed this issue, and you need to deploy the fix to the two subsystems as soon as possible without service disruption. What are the most effective options to deploy the fixes? A) Use VPC. B) Use AWS OpsWorks auto healing for both the front-end and back-end instance pair. C) Use Elastic Load Balancing in front of the front-end subsystem and Auto Scaling to keep the specified number of instances. D) Use Elastic Load Balancing in front of the back-end subsystem and Auto Scaling to keep the specified number of instances. E) Use Amazon CloudFront, which accesses the front-end server on origin fetch. F) Use Amazon SQS between the front-end and back-end subsystems.

D. Store session state in Amazon ElastiCache for Redis (scalable and makes the web applications stateless)

You've been tasked with moving an e-commerce web application from a customer's datacenter into a VPC. The application must be fault tolerant as well as highly scalable. Moreover, the customer is adamant that service interruptions not affect the user experience. As you near launch, you discover that the application currently uses multicast to share session state between web servers. In order to handle session state within the VPC, you choose to: A. Store session state in Amazon Relational Database Service. B. Enable session stickiness via Elastic Load Balancing. C. Create a mesh VPN between instances and allow multicast on it. D. Store session state in Amazon ElastiCache for Redis.
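
A minimal sketch of answer D using the redis-py client, assuming a hypothetical ElastiCache endpoint; because every web server reads sessions from Redis, instances stay stateless and neither multicast nor stickiness is needed.

import json
import uuid
import redis  # redis-py client

# The ElastiCache endpoint is a hypothetical placeholder.
r = redis.Redis(host="sessions.abc123.usw2.cache.amazonaws.com", port=6379)

def save_session(data, ttl_seconds=1800):
    session_id = str(uuid.uuid4())
    # Any web server behind the ELB can read this session back.
    r.setex(f"session:{session_id}", ttl_seconds, json.dumps(data))
    return session_id

def load_session(session_id):
    raw = r.get(f"session:{session_id}")
    return json.loads(raw) if raw else None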

Create a CloudFront Distribution and direct Route53 to the Distribution. Use the ELB as an Origin and specify Cache Behaviors to proxy cache requests, which can be served late. (CloudFront can serve requests from cache, and multiple cache behaviors can be defined based on rules for a given URL pattern based on file extensions, file names, or any portion of a URL. Each cache behavior can include the CloudFront configuration values: origin server name, viewer connection protocol, minimum expiration period, query string parameters, cookies, and trusted signers for private content.)

Your application consists of 10% writes and 90% reads. You currently service all requests through a Route53 Alias Record directed towards an AWS ELB, which sits in front of an EC2 Auto Scaling Group. Your system is getting very expensive when there are large traffic spikes during certain news events, during which many more people request to read similar data all at the same time. What is the simplest and cheapest way to reduce costs and scale with spikes like this? [PROFESSIONAL]
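
To make the answer concrete, this is roughly what the cache-behavior portion of a CloudFront DistributionConfig (as passed to boto3's create_distribution) could look like; the path pattern, origin ID, and TTL are hypothetical.

# Fragment of a DistributionConfig; plugs into
# boto3.client("cloudfront").create_distribution(DistributionConfig=...).
cache_behaviors = {
    "Quantity": 1,
    "Items": [{
        "PathPattern": "/news/*",           # the read-heavy, spiky content
        "TargetOriginId": "my-elb-origin",  # the existing ELB as origin
        "ViewerProtocolPolicy": "allow-all",
        "MinTTL": 60,  # cached reads may be served slightly late/stale
        "ForwardedValues": {
            "QueryString": False,
            "Cookies": {"Forward": "none"},
        },
        "TrustedSigners": {"Enabled": False, "Quantity": 0},
    }],
}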

- An IAM trust policy allows the EC2 instance to assume an EC2 instance role. - An IAM access policy allows the EC2 role to access S3 objects.

Your application is leveraging IAM Roles for EC2 for accessing objects stored in S3. Which two of the following IAM policies control access to your S3 objects?

A. An IAM trust policy allows the EC2 instance to assume an EC2 instance role. B. An IAM access policy allows the EC2 role to access S3 objects.

Your application is leveraging IAM Roles for EC2 for accessing objects stored in S3. Which two of the following IAM policies control access to your S3 objects? A. An IAM trust policy allows the EC2 instance to assume an EC2 instance role. B. An IAM access policy allows the EC2 role to access S3 objects. C. An IAM bucket policy allows the EC2 role to access S3 objects. D. An IAM trust policy allows applications running on the EC2 instance to assume an EC2 role. E. An IAM trust policy allows applications running on the EC2 instance to access S3 objects.
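
A minimal boto3 sketch showing the two policies side by side; the role, policy, and bucket names are hypothetical.

import json
import boto3

iam = boto3.client("iam")

# Trust policy: lets the EC2 service assume the role on the instance's behalf.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# Access policy: grants the role read access to the bucket (name is assumed).
access_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::my-app-bucket/*",
    }],
}

iam.create_role(
    RoleName="app-s3-role",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
iam.put_role_policy(
    RoleName="app-s3-role",
    PolicyName="s3-read",
    PolicyDocument=json.dumps(access_policy),
)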

A. Increase Auto Scaling capacity and scaling thresholds to allow the web front-end to cost-effectively scale across all Availability Zones to lower aggregate utilization levels that will allow an Availability Zone to fail during peak load without affecting the application's availability. (Ideal for HA to reduce and distribute load)

Your company currently has a highly available web application running in production. The application's web front-end utilizes an Elastic Load Balancer and Auto Scaling across three Availability Zones. During peak load, your web servers operate at 90% utilization and leverage a combination of Heavy Utilization Reserved Instances for steady state load and On-Demand and Spot Instances for peak load. You are tasked with designing a cost effective architecture to allow the application to recover quickly in the event that an Availability Zone is unavailable during peak load. Which option provides the most effective high availability architectural design for this application? A. Increase Auto Scaling capacity and scaling thresholds to allow the web front-end to cost effectively scale across all Availability Zones to lower aggregate utilization levels that will allow an Availability Zone to fail during peak load without affecting the application's availability. B. Continue to run your web front-end at 90% utilization, but purchase an appropriate number of Light Utilization RIs in each Availability Zone to cover the loss of any of the other Availability Zones during peak load. C. Continue to run your web front-end at 90% utilization, but leverage a high bid price strategy to cover the loss of any of the other Availability Zones during peak load. D. Increase use of Spot Instances to cost effectively scale the web front-end across all Availability Zones to lower aggregate utilization levels that will allow an Availability Zone to fail during peak load without affecting the application's availability.
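
For intuition: three AZs at 90% utilization cannot absorb the loss of one AZ, since the two survivors would have to run at roughly 135%; keeping aggregate utilization at or below about 66% lets two AZs carry the third's load. A hedged boto3 sketch of resizing the group accordingly, with hypothetical names and sizes:

import boto3

autoscaling = boto3.client("autoscaling")

# Roughly 50% more capacity than the 90%-utilization fleet, spread across
# three AZs; group name, sizes, and AZ names are hypothetical.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-frontend",
    MinSize=9,
    MaxSize=18,
    DesiredCapacity=9,
    AvailabilityZones=["us-east-1a", "us-east-1b", "us-east-1c"],
)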

C. Generate reports from a multi-AZ MySQL Amazon RDS deployment and have an offline task put reports in Amazon Simple Storage Service (S3) and use CloudFront to cache the content. Use a TTL to expire objects daily. (Offline task with S3 storage and CloudFront cache)

Your company has been contracted to develop and operate a website that tracks NBA basketball statistics. Statistical data to derive reports like "best game-winning shots from the regular season" and more frequently built reports like "top shots of the game" need to be stored durably for repeated lookup. Leveraging social media techniques, NBA fans submit and vote on new report types from the existing data set, so the system needs to accommodate variability in data queries, and new stats reports must be generated and posted daily. Initial research in the design phase indicates that there will be over 3 million report queries on game day by end users and other applications that use this application as a data source. It is expected that this system will gain in popularity over time and reach peaks of 10-15 million report queries of the system on game days. Select the answer that will allow your application to best meet these requirements while minimizing costs. A. Launch a multi-AZ MySQL Amazon Relational Database Service (RDS) Read Replica connected to your multi-AZ master database and generate reports by querying the Read Replica. Perform a daily table cleanup. B. Implement a multi-AZ MySQL RDS deployment and have the application generate reports from Amazon ElastiCache for in-memory performance results. Utilize the default expire parameter for items in the cache. C. Generate reports from a multi-AZ MySQL Amazon RDS deployment and have an offline task put reports in Amazon Simple Storage Service (S3) and use CloudFront to cache the content. Use a TTL to expire objects daily. D. Query a multi-AZ MySQL RDS instance and store the results in a DynamoDB table. Generate reports from the DynamoDB table. Remove stale tables daily.
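
A minimal sketch of the offline task's S3 upload, where the Cache-Control header gives CloudFront its daily TTL; the bucket and key names are hypothetical.

import boto3

s3 = boto3.client("s3")

def publish_report(pdf_bytes):
    # The offline task publishes the daily report; Cache-Control tells
    # CloudFront to keep serving the cached copy for 24 hours.
    s3.put_object(
        Bucket="nba-stats-reports",        # hypothetical bucket
        Key="reports/top-shots.pdf",       # hypothetical key
        Body=pdf_bytes,
        ContentType="application/pdf",
        CacheControl="max-age=86400",      # expire cached objects daily
    )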

C. Use your on-premises SAML 2.0-compliant identity provider (IDP) to grant the NOC members federated access to the AWS Management Console via the AWS single sign-on (SSO) endpoint.

Your company has recently extended its datacenter into a VPC on AWS to add burst computing capacity as needed. Members of your Network Operations Center need to be able to go to the AWS Management Console and administer Amazon EC2 instances as necessary. You don't want to create new IAM users for each NOC member and make those users sign in again to the AWS Management Console. Which option below will meet the needs for your NOC members? A. Use OAuth 2.0 to retrieve temporary AWS security credentials to enable your NOC members to sign in to the AWS Management Console. B. Use web Identity Federation to retrieve AWS temporary security credentials to enable your NOC members to sign in to the AWS Management Console. C. Use your on-premises SAML 2.0-compliant identity provider (IDP) to grant the NOC members federated access to the AWS Management Console via the AWS single sign-on (SSO) endpoint. D. Use your on-premises SAML 2.0-compliant identity provider (IDP) to retrieve temporary security credentials to enable NOC members to sign in to the AWS Management Console.

Users request a SAML assertion from your on-premises SAML 2.0-compliant identity provider (IdP) and use that assertion to obtain federated access to the AWS Management Console via the AWS single sign-on (SSO) endpoint.

Your company is migrating infrastructure to AWS. A large number of developers and administrators will need to control this infrastructure using the AWS Management Console. The Identity Management team is objecting to creating an entirely new directory of IAM users for all employees, and the employees are reluctant to commit yet another password to memory. Which of the following will satisfy both these stakeholders?
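
The console flow posts the SAML assertion to the AWS sign-in endpoint directly, but the same federation can be sketched programmatically with STS; the role and provider ARNs below are hypothetical placeholders.

import boto3

sts = boto3.client("sts")

def federated_credentials(assertion):
    # assertion: the base64-encoded SAML response the on-premises IdP
    # returns after the employee signs in with existing directory credentials.
    resp = sts.assume_role_with_saml(
        RoleArn="arn:aws:iam::123456789012:role/DeveloperAccess",
        PrincipalArn="arn:aws:iam::123456789012:saml-provider/CorpIdP",
        SAMLAssertion=assertion,
        DurationSeconds=3600,
    )
    return resp["Credentials"]  # temporary keys; no new IAM users or passwords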

Use CloudFormation Nested Stack Templates, with three child stacks to represent the three logical layers of your cloud. (CloudFormation allows source controlled, declarative templates as the basis for stack automation and Nested Stacks help achieve clean separation of layers while simultaneously providing a method to control all layers at once when needed)

Your company needs to automate 3 layers of a large cloud deployment. You want to be able to track this deployment's evolution as it changes over time, and carefully control any alterations. What is a good way to automate a stack to meet these requirements? [PROFESSIONAL]
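
A minimal sketch of a parent template with three nested child stacks, expressed as a Python dict and launched with boto3; the child template URLs and layer names are hypothetical.

import json
import boto3

# Each child stack maps to one logical layer and stays independently
# versioned, while the parent stack controls all three at once.
parent_template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        layer: {
            "Type": "AWS::CloudFormation::Stack",
            "Properties": {
                "TemplateURL": f"https://s3.amazonaws.com/my-templates/{layer.lower()}.template"
            },
        }
        for layer in ("Network", "Compute", "Data")
    },
}

boto3.client("cloudformation").create_stack(
    StackName="cloud-deployment",
    TemplateBody=json.dumps(parent_template),
    Capabilities=["CAPABILITY_IAM"],
)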

B. Create a new stack and layers with identical configuration, add instances with the latest Amazon Linux AMI specified as a custom AMI to the new layer, switch DNS to the new stack, and tear down the old stack. (Blue-Green Deployment) E. Add new instances with the latest Amazon Linux AMI specified as a custom AMI to all AWS OpsWorks layers of your stack, and terminate the old ones.

Your company runs a complex customer relations management system that consists of around 10 different software components all backed by the same Amazon Relational Database Service (RDS) database. You adopted AWS OpsWorks to simplify management and deployment of that application and created an AWS OpsWorks stack with layers for each of the individual components. An internal security policy requires that all instances should run on the latest Amazon Linux AMI and that instances must be replaced within one month after the latest Amazon Linux AMI has been released. AMI replacements should be done without incurring application downtime or capacity problems. You decide to write a script to be run as soon as a new Amazon Linux AMI is released. Which solutions support the security policy and meet your requirements? Choose 2 answers A. Assign a custom recipe to each layer which replaces the underlying AMI. Use AWS OpsWorks life-cycle events to incrementally execute this custom recipe and update the instances with the new AMI. B. Create a new stack and layers with identical configuration, add instances with the latest Amazon Linux AMI specified as a custom AMI to the new layers, switch DNS to the new stack, and tear down the old stack. C. Identify all Amazon Elastic Compute Cloud (EC2) instances of your AWS OpsWorks stack, stop each instance, replace the AMI ID property with the ID of the latest Amazon Linux AMI ID, and restart the instance. To avoid downtime, make sure not more than one instance is stopped at the same time. D. Specify the latest Amazon Linux AMI as a custom AMI at the stack level, terminate instances of the stack and let AWS OpsWorks launch new instances with the new AMI. E. Add new instances with the latest Amazon Linux AMI specified as a custom AMI to all AWS OpsWorks layers of your stack, and terminate the old ones.
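
A rough boto3 sketch of the blue-green path (answer B): clone the stack, launch instances from the new AMI, and switch DNS once healthy. All IDs, ARNs, and names are hypothetical placeholders.

import boto3

opsworks = boto3.client("opsworks", region_name="us-east-1")

# Clone the existing stack with identical configuration.
new_stack = opsworks.clone_stack(
    SourceStackId="2f18b4cb-old-stack-id",
    ServiceRoleArn="arn:aws:iam::123456789012:role/aws-opsworks-service-role",
    Name="crm-green",
)

# Launch an instance from the latest Amazon Linux AMI as a custom AMI.
opsworks.create_instance(
    StackId=new_stack["StackId"],
    LayerIds=["layer-id-web"],          # one call per cloned layer in practice
    InstanceType="m4.large",
    Os="Custom",
    AmiId="ami-0latestamazonlinux",     # the newly released AMI
)
# After the new stack passes checks, switch DNS to it and tear down the old one.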

A web tier deployed across 3 AZs with 2 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling Group behind an ELB (Elastic Load Balancer). And an application tier deployed across 3 AZs with 2 EC2 instances in each AZ inside an Auto Scaling Group behind an ELB. And a Multi-AZ RDS (Relational Database Service) deployment. (Losing one AZ leaves 4 of the 6 instances in each tier, about 67% capacity, which meets the 65% minimum, and Multi-AZ RDS keeps the single MySQL database available.)

Your company runs a customer facing event registration site. This site is built with a 3-tier architecture with web and application tier servers and a MySQL database. The application requires 6 web tier servers and 6 application tier servers for normal operation, but can run on a minimum of 65% server capacity and a single MySQL database. When deploying this application in a region with three availability zones (AZs) which architecture provides high availability?

B) Amazon DynamoDB and Amazon Elastic MapReduce with Spot instances. (Spot instances allocated with a defined duration of 1-6 hours won't get terminated when the spot price changes, making them ideal for fixed-time jobs.)

Your company sells consumer devices and needs to record the first activation of all sold devices. Devices are not activated until the information is written to a persistent database. Activation data is very important for your company and must be analyzed daily with a MapReduce job. The execution time of the data analysis process must be less than three hours per day. Devices are usually sold evenly during the year, but when a new device model is out, there is a predictable peak in activations; that is, for a few days there are 10 times or even 100 times more activations than on an average day. Which of the following databases and analysis frameworks would you implement to best optimize costs and performance for this workload? A) Amazon RDS and Amazon Elastic MapReduce with Spot instances. B) Amazon DynamoDB and Amazon Elastic MapReduce with Spot instances. C) Amazon RDS and Amazon Elastic MapReduce with Reserved instances. D) Amazon DynamoDB and Amazon Elastic MapReduce with Reserved instances.
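
A hedged sketch of the analysis half of answer B, launching a transient EMR cluster on Spot capacity with boto3; the release label, instance counts, and bid price are hypothetical.

import boto3

emr = boto3.client("emr")

# Daily transient cluster sized to finish well under the three-hour window;
# extra task capacity can be added during activation peaks.
emr.run_job_flow(
    Name="daily-activation-analysis",
    ReleaseLabel="emr-5.36.0",
    Applications=[{"Name": "Hadoop"}],
    Instances={
        "InstanceGroups": [
            {"Name": "master", "InstanceRole": "MASTER",
             "InstanceType": "m4.large", "InstanceCount": 1},
            {"Name": "core", "InstanceRole": "CORE", "Market": "SPOT",
             "BidPrice": "0.10", "InstanceType": "m4.large",
             "InstanceCount": 4},
        ],
        "KeepJobFlowAliveWhenNoSteps": False,  # tear down when the job ends
    },
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)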

B. Store the video contents in Amazon S3 as an origin server. Configure the Amazon CloudFront distribution with a download option to stream the video contents. (HLS is delivered over plain HTTP, so a standard download distribution in front of the S3 origin streams it to all client devices; the RTMP streaming option targets Flash, and EC2 streaming servers add cost.)

Your customer is implementing a video on-demand streaming platform on AWS. The requirements are: support for multiple devices such as iOS, Android, and PC as client devices, using a standard client player, using streaming technology (not download), and a scalable architecture with cost effectiveness. Which architecture meets the requirements? A) Store the video contents in Amazon S3 as an origin server. Configure the Amazon CloudFront distribution with a streaming option to stream the video contents. B) Store the video contents in Amazon S3 as an origin server. Configure the Amazon CloudFront distribution with a download option to stream the video contents. C) Launch a streaming server on Amazon EC2 (for example, Adobe Media Server) and store the video contents as an origin server. Configure the Amazon CloudFront distribution with a download option to stream the video contents. D) Launch a streaming server on Amazon EC2 (for example, Adobe Media Server) and store the video contents as an origin server. Launch and configure the required amount of streaming servers on Amazon EC2 as edge servers to stream the video contents.

C. The application is not using valid security credentials to generate the pre-signed URL. F. The pre-signed URL has expired.

Your customer needs to create an application to allow contractors to upload videos to Amazon Simple Storage Service (S3) so they can be transcoded into a different format. She creates AWS Identity and Access Management (IAM) users for her application developers, and in just one week, they have the application hosted on a fleet of Amazon Elastic Compute Cloud (EC2) instances. The attached IAM role is assigned to the instances. As expected, a contractor who authenticates to the application is given a pre-signed URL that points to the location for video upload. However, contractors are reporting that they cannot upload their videos.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "*"
    }
  ]
}

Which of the following are valid reasons for this behavior? Choose 2 answers A. The IAM role does not explicitly grant permission to upload the object. B. The contractors' accounts have not been granted "write" access to the S3 bucket. C. The application is not using valid security credentials to generate the pre-signed URL. D. The developers do not have access to upload objects to the S3 bucket. E. The S3 bucket still has the associated default permissions. F. The pre-signed URL has expired.
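
For reference, a minimal sketch of how the application would generate the upload URL with boto3; the bucket, key, and expiry are hypothetical. The URL is only as good as the credentials that signed it, and it stops working once ExpiresIn elapses, which is exactly what answers C and F describe.

import boto3

# The client must be built from valid, non-expired role credentials; a URL
# signed with invalid credentials is rejected by S3.
s3 = boto3.client("s3")

def upload_url(contractor_id):
    return s3.generate_presigned_url(
        "put_object",
        Params={"Bucket": "video-uploads",               # hypothetical bucket
                "Key": f"{contractor_id}/raw.mp4"},       # hypothetical key
        ExpiresIn=3600,  # the URL is valid for one hour, then expires
    )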

Use reduced redundancy storage (RRS) for PDF and CSV data in S3. Use a combination of Spot instances and Reserved Instances for Amazon EMR jobs. Use Reserved Instances for Amazon Redshift. (The combination of Spot and Reserved Instances guarantees performance while reducing cost. RRS lowers storage cost for the PDF and CSV outputs, which can be regenerated from the logs; keeping the raw data in standard storage preserves its durability and integrity.)

Your department creates regular analytics reports from your company's log files. All log data is collected in Amazon S3 and processed by daily Amazon Elastic MapReduce (EMR) jobs that generate daily PDF reports and aggregated tables in CSV format for an Amazon Redshift data warehouse. Your CFO requests that you optimize the cost structure for this system. Which of the following alternatives will lower costs without compromising average performance of the system or data integrity for the raw data? [PROFESSIONAL]

Setup Auto-Scaled workers triggered by queue depth that use spot instances to process messages in SQS. Once data is processed, change the storage class of the S3 objects to Glacier.

Your firm has uploaded a large amount of aerial image data to S3. In the past, in your on-premises environment, you used a dedicated group of servers to batch process this data and used RabbitMQ, an open source messaging system, to get job information to the servers. Once processed, the data would go to tape and be shipped offsite. Your manager told you to stay with the current design, and leverage AWS archival storage and messaging services to minimize cost. Which is correct? [PROFESSIONAL]
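
A minimal sketch of one Spot worker's loop, assuming a hypothetical queue URL and a message body of the form "bucket/key"; after processing, the object is rewritten into the Glacier storage class (S3 accepts GLACIER as a storage class on copy; a lifecycle transition rule is the older equivalent).

import boto3

sqs = boto3.client("sqs")
s3 = boto3.client("s3")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/image-jobs"  # hypothetical

# Worker loop run on Spot instances; the fleet scales on queue depth.
while True:
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1,
                               WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        bucket, key = msg["Body"].split("/", 1)  # assumed message format
        # ... process the image here ...
        # Rewrite the processed object into the Glacier storage class.
        s3.copy_object(Bucket=bucket, Key=key,
                       CopySource={"Bucket": bucket, "Key": key},
                       StorageClass="GLACIER")
        sqs.delete_message(QueueUrl=QUEUE_URL,
                           ReceiptHandle=msg["ReceiptHandle"])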

- Add a new database layer and then add recipes to the deploy actions of the database and App Server layers. (Refer link) - The variables that characterize the RDS database connection—host, user, and so on—are set using the corresponding values from the deploy JSON's [:deploy][:app_name][:database] attributes. (Refer link) - Set up the connection between the app server and the RDS layer by using a custom recipe. The recipe configures the app server as required, typically by creating a configuration file. The recipe gets the connection data such as the host and database name from a set of attributes in the stack configuration and deployment JSON that AWS OpsWorks installs on every instance. (Refer link)

Your mission is to create a lights-out datacenter environment, and you plan to use AWS OpsWorks to accomplish this. First you created a stack and added an App Server layer with an instance running in it. Next you added an application to the instance, and now you need to deploy a MySQL RDS database instance. Which of the following answers accurately describe how to add a backend database server to an OpsWorks stack? Choose 3 answers
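
Registering the RDS instance with the stack is what makes its connection data appear in the deploy JSON that the custom recipe reads; a minimal boto3 sketch with hypothetical IDs and credentials:

import boto3

opsworks = boto3.client("opsworks", region_name="us-east-1")

# After this call the host, database name, user, and password surface under
# the [:deploy][:app_name][:database] attributes on every instance; the
# stack ID, ARN, and credentials below are hypothetical placeholders.
opsworks.register_rds_db_instance(
    StackId="2f18b4cb-example-stack-id",
    RdsDbInstanceArn="arn:aws:rds:us-east-1:123456789012:db:app-db",
    DbUser="app_user",
    DbPassword="app_password",
)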

(B) Establish the AWS presence in the US-EAST region, with a dedicated pipe to the corporate datacenter. (C) Use Amazon CloudFront to cache pages for users at the nearest edge location. (D) A three-subnet VPC, with an AD controller in the AWS region. The AWS AD controller will be part of the primary AD controller's forest, and will synchronize with the corporate controller over a dedicated pipe to the corporate data center.

Your multi-national customer wants to rewrite a website portal to "take advantage of AWS best practices". Other information that you have for this large Enterprise customer is as follows: - Part of the portal is an employee-only section, and authentication must be against the corporate Active Directory. - You used a web analytics website to discover that on average there were 140,000 visitors per month over the past year, a peak of 187,000 unique visitors last month, and a minimum of 109,000 unique visitors two months ago. You have no information about what percentage of these visitors represents employees who signed into the portal. - The web analytics website also revealed that the traffic breakdown is 40 percent South America, 50 percent North America, and 10 percent other. - The customer's primary data center is located in São Paulo, Brazil. - Their chief technology officer believes that response time for logging in to the employee portal is a primary metric, because employees complain that the current website is too slow in this regard. When you present your proposed application architecture to the customer, which of the following should you propose as part of the architecture? Choose 3 answers (A) Do not use Amazon CloudFront, because the employees who log in to the portal have unique (private) session data that should not be cached in a content delivery network. (B) Establish the AWS presence in the US-EAST region, with a dedicated pipe to the corporate datacenter. (C) Use Amazon CloudFront to cache pages for users at the nearest edge location. (D) A three-subnet VPC, with an AD controller in the AWS region. The AWS AD controller will be part of the primary AD controller's forest, and will synchronize with the corporate controller over a dedicated pipe to the corporate data center. (E) Establish the AWS presence in multiple regions: SA-EAST, and also US-EAST, with a dedicated pipe from both SA-EAST and US-EAST to the corporate data center, and also a dedicated connection between regions. Replicate data as needed between the regions. Use a geo load balancer to determine which region is primary for a given user. (F) A three-subnet VPC, with all AD calls traversing a dedicated pipe to the corporate data center.

D. Add a third AWS Elastic Beanstalk app that uses the Amazon Kinesis S3 connector to archive data from Amazon Kinesis into Amazon S3.

Your social media monitoring application uses a Python app running on AWS Elastic Beanstalk to ingest tweets, Facebook updates, and RSS feeds into an Amazon Kinesis stream. A second AWS Elastic Beanstalk app generates key performance indicators into an Amazon DynamoDB table and powers a dashboard application. What is the most efficient option to prevent any data loss for this application? A. Use AWS Data Pipeline to replicate your DynamoDB tables into another region. B. Use the second AWS Elastic Beanstalk app to store a backup of Kinesis data onto Amazon Elastic Block Store (EBS), and then create snapshots from your Amazon EBS volumes. C. Add a second Amazon Kinesis stream in another Availability Zone and use AWS Data Pipeline to replicate data across Kinesis streams. D. Add a third AWS Elastic Beanstalk app that uses the Amazon Kinesis S3 connector to archive data from Amazon Kinesis into Amazon S3.
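
A stripped-down sketch of what the third app (answer D) does, reading from one shard and archiving batches to S3 with boto3; in practice the Kinesis connector library handles sharding and checkpointing. The stream, shard, and bucket names are hypothetical.

import time
import boto3

kinesis = boto3.client("kinesis")
s3 = boto3.client("s3")

shard_iter = kinesis.get_shard_iterator(
    StreamName="social-feed", ShardId="shardId-000000000000",
    ShardIteratorType="TRIM_HORIZON",  # start from the oldest retained record
)["ShardIterator"]

while shard_iter:
    out = kinesis.get_records(ShardIterator=shard_iter, Limit=500)
    if out["Records"]:
        # Durable copy in S3 means a KPI-app failure no longer loses data.
        body = b"\n".join(r["Data"] for r in out["Records"])
        s3.put_object(Bucket="social-archive",
                      Key=f"raw/{int(time.time())}.log", Body=body)
    shard_iter = out.get("NextShardIterator")
    time.sleep(1)  # stay under the per-shard read limits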

Elastic Transcoder to transcode original high-resolution MP4 videos to HLS. S3 to host videos with lifecycle management to archive original files to Glacier after a few days. CloudFront to serve HLS transcoded videos from S3.

Your website is serving on-demand training videos to your workforce. Videos are uploaded monthly in high resolution MP4 format. Your workforce is distributed globally, often on the move, and using company-provided tablets that require the HTTP Live Streaming (HLS) protocol to watch a video. Your company has no video transcoding expertise and, if required, you may need to pay for a consultant. How do you implement the most cost-efficient architecture without compromising high availability and quality of video delivery? [PROFESSIONAL]
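
A minimal boto3 sketch of the transcoding step; the pipeline ID, keys, and preset ID are placeholders (Elastic Transcoder ships system HLS presets). The service writes the HLS segments and playlist back to S3, where CloudFront serves them.

import boto3

et = boto3.client("elastictranscoder")

et.create_job(
    PipelineId="1111111111111-abcde1",          # hypothetical pipeline
    Input={"Key": "uploads/training-2024-06.mp4"},
    Outputs=[{
        "Key": "hls/training-2024-06",
        "PresetId": "1351620000001-200030",     # an HLS system preset (placeholder)
        "SegmentDuration": "10",                # 10-second HLS segments
    }],
    OutputKeyPrefix="videos/",
)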

