AWS Pro

Your company runs a customer-facing event registration site. This site is built with a 3-tier architecture with web and application tier servers and a MySQL database. The application requires 6 web tier servers and 6 application tier servers for normal operation, but can run on a minimum of 65% server capacity and a single MySQL database. When deploying this application in a region with three Availability Zones (AZs), which architecture provides high availability?

A web tier deployed across 3 AZs with 2 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling group behind an ELB (Elastic Load Balancer), an application tier deployed across 3 AZs with 2 EC2 instances in each AZ inside an Auto Scaling group behind an ELB, and a Multi-AZ RDS (Relational Database Service) deployment

A customer is in the process of deploying multiple applications to AWS that are owned and operated by different development teams. Each development team maintains the authorization of its users independently from other teams. The customer's information security team would like to be able to delegate user authorization to the individual development teams but independently apply restrictions to the users' permissions based on factors such as the user's device and location. For example, the information security team would like to grant read-only permissions to a user who is defined by the development team as read/write whenever the user is authenticating from outside the corporate network. What steps can the information security team take to implement this capability?

Add additional IAM policies to the application IAM roles that deny user privileges based on information security policy. (A separate policy can deny based on location, device, and other factors; the more restrictive explicit deny always wins over an allow.)
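
A minimal boto3 sketch of how the security team might attach such a restriction (the role name, CIDR, and actions are hypothetical placeholders; the real conditions would mirror the security policy):

```python
import json
import boto3

# Hypothetical values for illustration only.
ROLE_NAME = "dev-team-app-role"
CORPORATE_CIDR = "203.0.113.0/24"

# Deny write actions whenever the request does not originate from the
# corporate network; an explicit Deny overrides any Allow granted by the
# development team's own policies.
deny_outside_network = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": ["s3:PutObject", "s3:DeleteObject"],
        "Resource": "*",
        "Condition": {"NotIpAddress": {"aws:SourceIp": CORPORATE_CIDR}},
    }],
}

iam = boto3.client("iam")
iam.put_role_policy(
    RoleName=ROLE_NAME,
    PolicyName="security-team-restrictions",
    PolicyDocument=json.dumps(deny_outside_network),
)
```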

You have been asked to virtually extend two existing data centers into AWS to support a highly available application that depends on existing, on-premises resources located in multiple data centers and static content that is served from an Amazon Simple Storage Service (S3) bucket. Your current design includes a dual-tunnel VPN connection between your CGW and VGW. Which component of your architecture represents a potential single point of failure that you should consider changing to make the solution more highly available?

Add another CGW in a different data center and create another dual-tunnel VPN connection. (Refer link)

Your company sells consumer devices and needs to record the first activation of all sold devices. Devices are not activated until the information is written to a persistent database. Activation data is very important for your company and must be analyzed daily with a MapReduce job. The execution time of the data analysis process must be less than three hours per day. Devices are usually sold evenly during the year, but when a new device model is out, there is a predictable peak in activations, that is, for a few days there are 10 times or even 100 times more activations than on an average day. Which of the following databases and analysis frameworks would you implement to better optimize costs and performance for this workload?

Amazon DynamoDB and Amazon Elastic MapReduce with Spot instances.

Your social media monitoring application uses a Python app running on AWS Elastic Beanstalk to ingest tweets, Facebook updates, and RSS feeds into an Amazon Kinesis stream. A second AWS Elastic Beanstalk app generates key performance indicators into an Amazon DynamoDB table and powers a dashboard application. What is the most efficient option to prevent any data loss for this application?

Add a third AWS Elastic Beanstalk app that uses the Amazon Kinesis S3 connector to archive data from Amazon Kinesis into Amazon S3.

You have been asked to de-risk deployments at your company. Specifically, the CEO is concerned about outages that occur because of accidental inconsistencies between Staging and Production, which sometimes cause unexpected behaviors in Production even when Staging tests pass. You already use Docker to get high consistency between Staging and Production for the application environment on your EC2 instances. How do you further de-risk the rest of the execution environment, since in AWS, there are many service components you may use beyond EC2 virtual machines? [PROFESSIONAL]

Develop models of your entire cloud system in CloudFormation. Use this model in Staging and Production to achieve greater parity. (Only CloudFormation's JSON Templates allow declarative version control of repeatedly deployable models of entire AWS clouds. Refer link)

Your website is serving on-demand training videos to your workforce. Videos are uploaded monthly in high resolution MP4 format. Your workforce is distributed globally, often on the move, and using company-provided tablets that require the HTTP Live Streaming (HLS) protocol to watch a video. Your company has no video transcoding expertise, and if required you might need to pay for a consultant. How do you implement the most cost-efficient architecture without compromising high availability and quality of video delivery? [PROFESSIONAL]

Elastic Transcoder to transcode original high-resolution MP4 videos to HLS. S3 to host videos with lifecycle management to archive original files to Glacier after a few days. CloudFront to serve HLS transcoded videos from S3

You have been asked to design network connectivity between your existing data centers and AWS. Your application's EC2 instances must be able to connect to existing backend resources located in your data center. Network traffic between AWS and your data centers will start small, but ramp up to 10s of GB per second over the course of several months. The success of your application is dependent upon getting to market quickly. Which of the following design options will allow you to meet your objectives?

Provision a VPN connection between a VPC and existing on-premises equipment, submit a DirectConnect partner request to provision cross connects between your data center and the DirectConnect location, then cut over from the VPN connection to one or more DirectConnect connections as needed.

You run accounting software in the AWS cloud. This software needs to be online continuously during the day every day of the week, and has a very static requirement for compute resources. You also have other, unrelated batch jobs that need to run once per day at any time of your choosing. How should you minimize cost? [PROFESSIONAL]

Purchase a Heavy Utilization Reserved Instance to run the accounting software. Turn it off after hours. Run the batch jobs with the same instance class, so the Reserved Instance credits are also applied to the batch jobs. (Because the instance will always be online during the day, in a predictable manner, and there are sequences of batch jobs to perform at any time, we should run the batch jobs when the accounting software is off. We can achieve Heavy Utilization by alternating these times, so we should purchase the reservation as such, as this represents the lowest cost. There is no such thing as a "Full" utilization level for EC2 Reserved Instance purchases.)

You are designing a photo-sharing mobile app. The application will store all pictures in a single Amazon S3 bucket. Users will upload pictures from their mobile device directly to Amazon S3 and will be able to view and download their own pictures directly from Amazon S3. You want to configure security to handle potentially millions of users in the most secure manner possible. What should your server-side application do when a new user registers on the photo-sharing mobile application? [PROFESSIONAL]

Record the user's information in Amazon RDS and create a role in IAM with appropriate permissions. When the user uses their mobile app, create temporary credentials using the AWS Security Token Service 'AssumeRole' function. Store these credentials in the mobile app's memory and use them to access Amazon S3. Generate new credentials the next time the user runs the mobile app.
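
A minimal boto3 sketch of the AssumeRole flow described above (the role ARN and session name are hypothetical placeholders):

```python
import boto3

sts = boto3.client("sts")

# Role ARN and session name are hypothetical placeholders.
resp = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/photo-app-user-role",
    RoleSessionName="user-42",   # one session per registered user
    DurationSeconds=3600,        # temporary credentials only
)

creds = resp["Credentials"]
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
# The mobile app keeps these values in memory only; they expire on their
# own, and the app requests fresh ones on the next launch.
```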

A research scientist is planning for the one-time launch of an Elastic MapReduce cluster and is encouraged by her manager to minimize costs. The cluster is designed to ingest 200TB of genomic data with a total of 100 Amazon Elastic Compute Cloud (EC2) instances and is expected to run for around four hours. The resulting data set must be stored temporarily until archived into an Amazon Relational Database Service (RDS) Oracle instance. Which option will help save the most money while meeting requirements?

Store ingest and output files in Amazon S3. Deploy on-demand for the master and core nodes and spot for the task nodes.

You've been tasked with moving an e-commerce web application from a customer's datacenter into a VPC. The application must be fault tolerant as well as highly scalable. Moreover, the customer is adamant that service interruptions not affect the user experience. As you near launch, you discover that the application currently uses multicast to share session state between web servers. In order to handle session state within the VPC, you choose to:

Store session state in Amazon ElastiCache for Redis (scalable and makes the web applications stateless)

Your customer is implementing a video on-demand streaming platform on AWS. The requirements are: support for multiple devices such as iOS, Android, and PC as client devices, using a standard client player; using streaming technology (not download); and a scalable architecture with cost effectiveness. Which architecture meets the requirements?

Store the video contents to Amazon S3 as an origin server. Configure the Amazon CloudFront distribution with a download option to stream the video contents (Refer link)

A document storage company is deploying their application to AWS and changing their business model to support both Free Tier and Premium Tier users. The Premium Tier users will be allowed to store up to 200GB of data and Free Tier customers will be allowed to store only 5GB. The customer expects that billions of files will be stored. All users need to be alerted when approaching 75 percent quota utilization and again at 90 percent quota use. To support the Free Tier and Premium Tier users, how should they architect their application?

The company should utilize an Amazon Simple Workflow Service (SWF) activity worker that updates the user's data counter in Amazon DynamoDB. The activity worker will use Amazon Simple Email Service (SES) to send an email if the counter increases above the appropriate thresholds.
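
A sketch of the counter-and-alert logic such an activity worker might run (table name, keys, and addresses are hypothetical; a production worker would also track which alerts were already sent):

```python
import boto3

dynamodb = boto3.client("dynamodb")
ses = boto3.client("ses")

FREE_TIER_QUOTA = 5 * 1024**3   # 5GB, per the Free Tier limit above

def record_upload(user_id, size_bytes, user_email):
    # Atomically increment the user's storage counter in DynamoDB.
    resp = dynamodb.update_item(
        TableName="user-storage-counters",          # hypothetical table
        Key={"user_id": {"S": user_id}},
        UpdateExpression="ADD used_bytes :d",
        ExpressionAttributeValues={":d": {"N": str(size_bytes)}},
        ReturnValues="UPDATED_NEW",
    )
    pct = 100 * int(resp["Attributes"]["used_bytes"]["N"]) / FREE_TIER_QUOTA
    for threshold in (90, 75):                      # highest match wins
        if pct >= threshold:
            ses.send_email(
                Source="alerts@example.com",        # hypothetical sender
                Destination={"ToAddresses": [user_email]},
                Message={
                    "Subject": {"Data": f"You have used {pct:.0f}% of your quota"},
                    "Body": {"Text": {"Data": "Consider the Premium Tier."}},
                },
            )
            break
```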

Your company needs to automate 3 layers of a large cloud deployment. You want to be able to track this deployment's evolution as it changes over time, and carefully control any alterations. What is a good way to automate a stack to meet these requirements? [PROFESSIONAL]

Use CloudFormation Nested Stack Templates, with three child stacks to represent the three logical layers of your cloud. (CloudFormation allows source controlled, declarative templates as the basis for stack automation and Nested Stacks help achieve clean separation of layers while simultaneously providing a method to control all layers at once when needed)
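
A minimal sketch of a nested-stack parent template, expressed here as a Python dict (the child template URLs and output names are hypothetical):

```python
import json

# Parent template with one AWS::CloudFormation::Stack child per layer.
# The S3 template URLs and output names are hypothetical.
parent_template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "NetworkLayer": {
            "Type": "AWS::CloudFormation::Stack",
            "Properties": {
                "TemplateURL": "https://s3.amazonaws.com/my-templates/network.json",
            },
        },
        "PlatformLayer": {
            "Type": "AWS::CloudFormation::Stack",
            "Properties": {
                "TemplateURL": "https://s3.amazonaws.com/my-templates/platform.json",
                # A child stack can consume a sibling's outputs as parameters.
                "Parameters": {"VpcId": {"Fn::GetAtt": ["NetworkLayer", "Outputs.VpcId"]}},
            },
        },
        "ApplicationLayer": {
            "Type": "AWS::CloudFormation::Stack",
            "Properties": {
                "TemplateURL": "https://s3.amazonaws.com/my-templates/application.json",
                "Parameters": {"VpcId": {"Fn::GetAtt": ["NetworkLayer", "Outputs.VpcId"]}},
            },
        },
    },
}
print(json.dumps(parent_template, indent=2))
```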

Your company has recently extended its datacenter into a VPC on AWS to add burst computing capacity as needed. Members of your Network Operations Center need to be able to go to the AWS Management Console and administer Amazon EC2 instances as necessary. You don't want to create new IAM users for each NOC member and make those users sign in again to the AWS Management Console. Which option below will meet the needs of your NOC members?

Use your on-premises SAML 2.0-compliant identity provider (IdP) to grant the NOC members federated access to the AWS Management Console via the AWS single sign-on (SSO) endpoint.

An Enterprise customer is starting their migration to the cloud; their main reason for migrating is agility, and they want to make their internal Microsoft Active Directory available to any application running on AWS, so that internal users only have to remember one set of credentials, and to provide a central point of user control for leavers and joiners. How could they make their Active Directory secure and highly available, with minimal on-premises infrastructure changes, in the most cost- and time-efficient way?

Using a VPC, they could create an extension to their data center and make use of resilient hardware IPsec tunnels. They could then have two domain controller instances that are joined to their existing domain and reside within different subnets in different Availability Zones.

You are putting together a WordPress site for a local charity, and you are using a combination of Route53, Elastic Load Balancers, EC2 & RDS. You launch your EC2 instance, download WordPress, and set up the configuration file's connection string so that it can communicate with RDS. When you browse to your URL, however, nothing happens. Which of the following could NOT be the cause of this?

You have locked port 22 down to your specific IP address therefore users cannot access your site using HTTP/HTTPS

You are working with a customer who is using Chef configuration management in their data center. Which service is designed to let the customer leverage existing Chef recipes in AWS?

AWS OpsWorks

Your mission is to create a lights-out datacenter environment, and you plan to use AWS OpsWorks to accomplish this. First you created a stack and added an App Server layer with an instance running in it. Next you added an application to the instance, and now you need to deploy a MySQL RDS database instance. Which of the following answers accurately describe how to add a backend database server to an OpsWorks stack? Choose 3 answers

- Add a new database layer and then add recipes to the deploy actions of the database and App Server layers. (Refer link)
- The variables that characterize the RDS database connection—host, user, and so on—are set using the corresponding values from the deploy JSON's [:deploy][:app_name][:database] attributes. (Refer link)
- Set up the connection between the app server and the RDS layer by using a custom recipe. The recipe configures the app server as required, typically by creating a configuration file. The recipe gets the connection data such as the host and database name from a set of attributes in the stack configuration and deployment JSON that AWS OpsWorks installs on every instance. (Refer link)

Your application is leveraging IAM roles for EC2 to access objects stored in S3. Which two of the following IAM policies control access to your S3 objects?

- An IAM trust policy allows the EC2 instance to assume an EC2 instance role.
- An IAM access policy allows the EC2 role to access S3 objects.

You are designing security for your VPC. You are considering the options for establishing separate security zones and enforcing network traffic rules across the different zones to limit which instances can communicate. How would you accomplish these requirements?

- Configure your instances to use pre-set IP addresses with an IP address range for every security zone. Configure NACLs to explicitly allow or deny communication between the different IP address ranges, as required for interzone communication.
- Configure a security group for every zone. Configure allow rules only between zones that need to be able to communicate with one another. Use the implicit deny-all rule to block any other traffic.

Your company runs a complex customer relations management system that consists of around 10 different software components, all backed by the same Amazon Relational Database Service (RDS) database. You adopted AWS OpsWorks to simplify management and deployment of that application and created an AWS OpsWorks stack with layers for each of the individual components. An internal security policy requires that all instances should run on the latest Amazon Linux AMI and that instances must be replaced within one month after the latest Amazon Linux AMI has been released. AMI replacements should be done without incurring application downtime or capacity problems. You decide to write a script to be run as soon as a new Amazon Linux AMI is released. Which solutions support the security policy and meet your requirements?

- Create a new stack and layers with identical configuration, add instances with the latest Amazon Linux AMI specified as a custom AMI to the new layer, switch DNS to the new stack, and tear down the old stack. (Blue-Green Deployment)
- Add new instances with the latest Amazon Linux AMI specified as a custom AMI to all AWS OpsWorks layers of your stack, and terminate the old ones.

You are designing a connectivity solution between on-premises infrastructure and Amazon VPC. Your on-premises servers will be communicating with your VPC instances. You will be establishing IPsec tunnels over the Internet, using VPN gateways and terminating the IPsec tunnels on AWS-supported customer gateways. Which of the following objectives would you achieve by implementing an IPsec tunnel as outlined above? (Choose 4 answers)

- Data encryption across the Internet
- Protection of data in transit over the Internet
- Peer identity authentication between VPN gateway and customer gateway
- Data integrity protection across the Internet

A gaming company adopted AWS CloudFormation to automate load-testing of their games. They have created an AWS CloudFormation template for each gaming environment and one for the load-testing stack. The load-testing stack creates an Amazon Relational Database Service (RDS) Postgres database and two web servers running on Amazon Elastic Compute Cloud (EC2) that send HTTP requests, measure response times, and write the results into the database. A test run usually takes between 15 and 30 minutes. Once the tests are done, the AWS CloudFormation stacks are torn down immediately. The test results written to the Amazon RDS database must remain accessible for visualization and analysis. Select possible solutions that allow access to the test results after the AWS CloudFormation load-testing stack is deleted.

- Define a deletion policy of type Retain for the Amazon RDS resource to assure that the RDS database is not deleted with the AWS CloudFormation stack.
- Define a deletion policy of type Snapshot for the Amazon RDS resource to assure that the RDS database can be restored after the AWS CloudFormation stack is deleted.
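
A minimal CloudFormation fragment illustrating the deletion policies, expressed here as a Python dict (property values are placeholders):

```python
# Template fragment expressed as a Python dict; property values are
# placeholders. "Retain" keeps the live RDS instance running (and billing);
# "Snapshot" deletes the instance but leaves a restorable snapshot behind.
load_test_template = {
    "Resources": {
        "LoadTestDatabase": {
            "Type": "AWS::RDS::DBInstance",
            "DeletionPolicy": "Snapshot",
            "Properties": {
                "Engine": "postgres",
                "DBInstanceClass": "db.m3.medium",
                "AllocatedStorage": "100",
                "MasterUsername": "loadtest",
                "MasterUserPassword": "change-me",   # placeholder only
            },
        },
    },
}
```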

A public archives organization is about to move a pilot application they are running on AWS into production. You have been hired to analyze their application architecture and give cost-saving recommendations. The application displays scanned historical documents. Each document is split into individual image tiles at multiple zoom levels to improve responsiveness and ease of use for the end users. At maximum zoom level the average document will be 8000x6000 pixels in size, split into multiple 40px x 40px image tiles. The tiles are batch processed by Amazon Elastic Compute Cloud (EC2) instances and put into an Amazon Simple Storage Service (S3) bucket. A browser-based JavaScript viewer fetches tiles from the Amazon S3 bucket and displays them to users as they zoom and pan around each document. The average storage size of all zoom levels for a document is approximately 30MB of JPEG tiles. Originals of each document are archived in Amazon Glacier. The company expects to process and host over 500,000 scanned documents in the first year. What are your recommendations?

- Deploy an Amazon CloudFront distribution in front of the Amazon S3 tiles bucket.
- Decrease the size (width/height) of the individual tiles at the maximum zoom level.
- Use Amazon S3 Reduced Redundancy Storage for each zoom level.

For a 3-tier, customer-facing, inclement weather site utilizing a MySQL database running in a region which has two AZs (Availability Zones), which architecture provides fault tolerance within the region for the application that minimally requires 6 web tier servers and 6 application tier servers running in the web and application tiers and one MySQL database?

A web tier deployed across 2 AZs with 6 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling group behind an ELB (Elastic Load Balancer), an application tier deployed across 2 AZs with 6 EC2 instances in each AZ inside an Auto Scaling group behind an ELB, and a Multi-AZ RDS (Relational Database Service) deployment. (As it needs fault tolerance with a minimum of 6 servers always available)

A media production company wants to deliver high-definition raw video material for preproduction and dubbing to customers all around the world. They would like to use Amazon CloudFront for their scenario, and they require the ability to limit downloads per customer and video file to a configurable number. A CloudFront download distribution with TTL = 0 was already set up to make sure all client HTTP requests hit an authentication backend on Amazon Elastic Compute Cloud (EC2)/Amazon Relational Database Service (RDS) first, which is responsible for restricting the number of downloads. Content is stored in Amazon Simple Storage Service (S3) and configured to be accessible only via CloudFront. What else needs to be done to achieve an architecture that meets the requirements?

- Enable URL parameter forwarding, let the authentication backend count the number of downloads per customer in RDS, and return the content S3 URL unless the download limit is reached.
- Configure a list of trusted signers, let the authentication backend count the number of download requests per customer in RDS, and return a dynamically signed URL unless the download limit is reached.

A large enterprise wants to adopt CloudFormation to automate administrative tasks and implement the security principles of least privilege and separation of duties. They have identified the following roles with the corresponding tasks in the company:
- Network administrators: create, modify and delete VPCs, subnets, NACLs, routing tables, and security groups
- Application operators: deploy complete application stacks (ELB, Auto Scaling groups, RDS), whereas all resources must be deployed in the VPCs managed by the network administrators
Both groups must maintain their own CloudFormation templates and should be able to create, update and delete only their own CloudFormation stacks. The company has followed your advice to create two IAM groups, one for applications and one for networks. Both IAM groups are attached to IAM policies that grant rights to perform the necessary tasks of each group as well as the creation, update and deletion of CloudFormation stacks. Given this setup and these requirements, which statements represent valid design considerations?

- Network stack updates will fail upon attempts to delete a subnet with EC2 instances. (Subnets cannot be deleted with instances in them)
- Restricting the launch of EC2 instances into VPCs requires resource-level permissions in the IAM policy of the application group. (IAM permissions need to be given explicitly to launch instances)

You are designing Internet connectivity for your VPC. The web servers must be available on the Internet. The application must have a highly available architecture. Which alternatives should you consider?

- Place all your web servers behind an ELB. Configure a Route53 CNAME to point to the ELB DNS name.
- Assign EIPs to all web servers. Configure a Route53 record set with all EIPs, with health checks and DNS failover.

With which AWS services can CloudHSM be used? (Select 2)

- RDS
- Amazon Redshift

A startup deploys its photo-sharing site in a VPC. An elastic load balancer distributes web traffic across two subnets. The load balancer session stickiness is configured to use the AWS-generated session cookie, with a session TTL of 5 minutes. The web server Auto Scaling group is configured as min-size=4, max-size=4. The startup is preparing for a public launch, by running load-testing software installed on a single Amazon Elastic Compute Cloud (EC2) instance running in us-west-2a. After 60 minutes of load-testing, the web server logs show the following: ... Which recommendations can help ensure that load-testing HTTP requests are evenly distributed across the four web servers?

- Re-configure the load-testing software to re-resolve DNS for each web request. (Refer link)
- Use a third-party load-testing service which offers globally distributed test clients. (Refer link)

A corporate web application is deployed within an Amazon Virtual Private Cloud (VPC) and is connected to the corporate data center via an IPsec VPN. The application must authenticate against the on-premises LDAP server. After authentication, each logged-in user can only access an Amazon Simple Storage Service (S3) keyspace specific to that user. Which two approaches can satisfy these objectives? (Choose 2 answers) [PROFESSIONAL]

- The application authenticates against LDAP and retrieves the name of an IAM role associated with the user. The application then calls the IAM Security Token Service to assume that IAM role. The application can use the temporary credentials to access the appropriate S3 bucket. (Authenticates with LDAP and calls AssumeRole)
- Develop an identity broker that authenticates against LDAP and then calls the IAM Security Token Service to get IAM federated user credentials. The application calls the identity broker to get IAM federated user credentials with access to the appropriate S3 bucket. (Custom identity broker implementation, with authentication against LDAP and use of a federated token)
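
A sketch of the identity-broker approach using GetFederationToken, assuming LDAP authentication has already succeeded (the bucket name and per-user prefix layout are hypothetical):

```python
import json
import boto3

sts = boto3.client("sts")

def broker_credentials(ldap_user):
    # Called only after the user has authenticated against LDAP.
    # The bucket name and per-user prefix layout are hypothetical.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": f"arn:aws:s3:::corp-app-data/{ldap_user}/*",
        }],
    }
    # GetFederationToken returns temporary credentials scoped to the
    # intersection of the caller's permissions and this policy.
    token = sts.get_federation_token(
        Name=ldap_user,
        Policy=json.dumps(policy),
        DurationSeconds=3600,
    )
    return token["Credentials"]
```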

Your customer needs to create an application to allow contractors to upload videos to Amazon Simple Storage Service (S3) so they can be transcoded into a different format. She creates AWS Identity and Access Management (IAM) users for her application developers, and in just one week, they have the application hosted on a fleet of Amazon Elastic Compute Cloud (EC2) instances. The attached IAM role is assigned to the instances. As expected, a contractor who authenticates to the application is given a pre-signed URL that points to the location for video upload. However, contractors are reporting that they cannot upload their videos. Which of the following are valid reasons for this behavior?

- The application is not using valid security credentials to generate the pre-signed URL.
- The pre-signed URL has expired.
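
A short boto3 sketch showing why both failure modes matter (bucket and key are hypothetical placeholders): the URL is signed with whatever credentials the client holds, and it stops working once ExpiresIn elapses.

```python
import boto3

# The client must be constructed from valid, unexpired credentials;
# a URL signed with bad credentials fails even before it expires.
s3 = boto3.client("s3")

# Bucket and key are hypothetical placeholders.
url = s3.generate_presigned_url(
    ClientMethod="put_object",
    Params={"Bucket": "contractor-video-uploads", "Key": "contractor-1/raw.mp4"},
    ExpiresIn=900,   # after 15 minutes the upload is rejected with 403
)
```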

To enable end-to-end HTTPS connections from the user's browser to the origin via CloudFront, which of the following options are valid?

- Use a 3rd-party CA certificate in the origin and the CloudFront default certificate in CloudFront.
- Use a 3rd-party CA certificate in both the origin and CloudFront.

You tried to integrate two subsystems (front-end and back-end) with an HTTP interface into one large system. These subsystems don't store any state inside. All state is stored in an Amazon DynamoDB table. You have launched each of these two subsystems from a separate AMI. Black box testing has shown that these servers have stopped running and are issuing malformed requests that do not meet HTTP specifications. Your developers have discovered and fixed this issue, and you must deploy the fix to the two subsystems as soon as possible without service disruption. What are the most effective options to deploy the fixes?

- Use AWS OpsWorks auto healing for both the front-end and back-end instance pair.
- Use Elastic Load Balancing in front of the front-end subsystem and Auto Scaling to keep the specified number of instances.
- Use Elastic Load Balancing in front of the back-end subsystem and Auto Scaling to keep the specified number of instances.

A marketing research company has developed a tracking system that collects user behavior during web marketing campaigns on behalf of their customers all over the world. The tracking system consists of an auto-scaled group of Amazon Elastic Compute Cloud (EC2) instances behind an elastic load balancer (ELB), and the collected data is stored in Amazon DynamoDB. After the campaign is terminated, the tracking system is torn down and the data is moved to Amazon Redshift, where it is aggregated, analyzed and used to generate detailed reports. The company wants to be able to instantiate new tracking systems in any region without any manual intervention and therefore adopted AWS CloudFormation. What needs to be done to make sure that the AWS CloudFormation template works in every AWS region?

- Use the built-in function of AWS CloudFormation to set the AvailabilityZone attribute of the ELB resource.
- Use the built-in Mappings and FindInMap functions of AWS CloudFormation to refer to the AMI ID set in the ImageId attribute of the AutoScaling::LaunchConfiguration resource.

You are designing a system which needs, at minimum, 8 m4.large instances operating to service traffic. When designing a system for high availability in the us-east-1 region, which has 6 Availability Zones, your company needs to be able to handle the loss of a full Availability Zone. How should you distribute the servers to save as much cost as possible, assuming all of the EC2 nodes are properly linked to an ELB? Your VPC account can utilize us-east-1's AZs a through f, inclusive.

2 servers in each of AZs a through e, inclusive. (You need to design for N+1 redundancy on Availability Zones. ZONE_COUNT = (REQUIRED_INSTANCES / INSTANCE_COUNT_PER_ZONE) + 1. To minimize cost, spread the instances across as many zones as you can. By using a through e, you are allocating 5 zones. Using 2 instances each, you have 10 total instances. If a single zone fails, you have 4 zones left, with 2 instances each, for a total of 8 instances. By spreading out as much as possible, you have increased cost by only 25% and significantly de-risked an availability zone failure. Refer link)
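
The N+1 arithmetic from the explanation, as a quick sanity check:

```python
# N+1 zone arithmetic from the explanation above.
required = 8                               # must survive an AZ failure
per_zone = 2
zones = required // per_zone + 1           # 8 / 2 + 1 = 5 zones (a-e)
total = zones * per_zone                   # 10 instances running
survivors = (zones - 1) * per_zone         # 8 left after one AZ fails
assert survivors >= required
print(f"cost overhead: {(total - required) / required:.0%}")   # 25%
```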

When deploying a highly available 2-tier web application on AWS, which combination of AWS services meets the requirements?
1. AWS Direct Connect
2. Amazon Route 53
3. AWS Storage Gateway
4. Elastic Load Balancing
5. Amazon EC2
6. Auto Scaling
7. Amazon VPC
8. AWS CloudTrail

2, 4, 5 and 6

A development team is currently doing a nightly six-hour build, which is lengthening over time, on-premises on a large and mostly underutilized server. They would like to transition to a continuous integration model of development on AWS, with multiple builds triggered within the same day. However, they are concerned about cost, security, and how to integrate with existing on-premises applications, such as their LDAP and email servers, which cannot move off-premises. The development environment needs a source code repository, a project management system with a MySQL database, resources for performing builds, and a storage location for QA to pick up builds from. What AWS services combination would you recommend to meet the development team's requirements?

A VPC with a VPN Gateway back to their on-premises servers, Amazon EC2 for the source code repository with attached Amazon EBS volumes, Amazon EC2 and Amazon RDS MySQL for the project management system, EIPs for the source code repository and project management system, SQS for a build queue, an Auto Scaling group of EC2 instances for performing builds, and S3 for the build output. (VPN gateway is required for secure connectivity. SQS for the build queue and EC2 for builds)

A customer is running an application in the US-West (Northern California) region and wants to set up disaster recovery failover to the Asia Pacific (Singapore) region. The customer is interested in achieving a low Recovery Point Objective (RPO) for an Amazon Relational Database Service (RDS) Multi-AZ MySQL database instance. Which approach is best suited to this need?

Asynchronous replication

You are designing network connectivity for your fat client application. The application is designed for business travelers who must be able to connect to it from their hotel rooms, cafes, public Wi-Fi hotspots, and elsewhere on the Internet. You do not want to publish the application on the Internet. Which network design meets the above requirements while minimizing deployment and operational costs? [PROFESSIONAL]

Configure an SSL VPN solution in a public subnet of your VPC, then install and configure SSL VPN client software on all user computers. Create a private subnet in your VPC and place your application servers in it. (Cost effective and can be in private subnet as well)

An application stores payroll information nightly in DynamoDB for a large number of employees across hundreds of offices. Item attributes consist of individual name, office identifier, and cumulative daily hours. Managers run reports for ranges of names working in their office. One query is: "Return all items in this office for names starting with A through E." Which table configuration will result in the lowest impact on provisioned throughput for this query? [PROFESSIONAL]

Configure the table to have a range index on the name attribute, and a hash index on the office identifier
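
A sketch of the resulting query with boto3, assuming a hypothetical table named payroll with hash key office_id and range key name:

```python
import boto3
from boto3.dynamodb.conditions import Key

# Hypothetical table: hash key "office_id", range key "name".
table = boto3.resource("dynamodb").Table("payroll")

# One partition is read, and only the A-E slice of it; names from "F"
# onward (e.g. "Fay") sort after the upper bound and are skipped.
resp = table.query(
    KeyConditionExpression=(
        Key("office_id").eq("office-17") & Key("name").between("A", "F")
    )
)
items = resp["Items"]
```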

An AWS customer is deploying a web application that is composed of a front-end running on Amazon EC2 and confidential data that is stored on Amazon S3. The customer's security policy requires that all access operations to this sensitive data must be authenticated and authorized by a centralized access management system that is operated by a separate security team. In addition, the web application team that owns and administers the EC2 web front-end instances is prohibited from having any ability to access the data in a way that circumvents this centralized access management system. Which of the following configurations will support these requirements?

Configure the web application to authenticate end-users against the centralized access management system. Have the web application provision trusted users with STS tokens entitling the download of approved data directly from Amazon S3. (Controlled access, and admins cannot access the data as it needs authentication)

You are looking to migrate your Development (Dev) and Test environments to AWS. You have decided to use separate AWS accounts to host each environment. You plan to link each account's bill to a Master AWS account using Consolidated Billing. To make sure you keep within budget, you would like to implement a way for administrators in the Master account to have access to stop, delete and/or terminate resources in both the Dev and Test accounts. Identify which option will allow you to achieve this goal. [PROFESSIONAL]

Create IAM users in the Master account. Create cross-account roles in the Dev and Test accounts that have full Admin permissions and grant the Master account access.

You need to deploy an AWS stack in a repeatable manner across multiple environments. You have selected CloudFormation as the right tool to accomplish this, but have found that there is a resource type you need to create and model, but is unsupported by CloudFormation. How should you overcome this challenge? [PROFESSIONAL]

Create a CloudFormation Custom Resource Type by implementing create, update, and delete functionality, either by subscribing a Custom Resource Provider to an SNS topic, or by implementing the logic in AWS Lambda. (Refer link)

Your application consists of 10% writes and 90% reads. You currently service all requests through a Route53 Alias Record directed towards an AWS ELB, which sits in front of an EC2 Auto Scaling Group. Your system is getting very expensive when there are large traffic spikes during certain news events, during which many more people request to read similar data all at the same time. What is the simplest and cheapest way to reduce costs and scale with spikes like this? [PROFESSIONAL]

Create a CloudFront Distribution and direct Route53 to the Distribution. Use the ELB as an Origin and specify Cache Behaviors to proxy cache requests, which can be served late. (CloudFront can serve requests from cache, and multiple cache behaviors can be defined based on rules for a given URL pattern based on file extensions, file names, or any portion of a URL. Each cache behavior can include the CloudFront configuration values: origin server name, viewer connection protocol, minimum expiration period, query string parameters, cookies, and trusted signers for private content.)

You are tasked with the migration of a highly trafficked node.js application to AWS. In order to comply with organizational standards, Chef recipes must be used to configure the application servers that host this application and to support application lifecycle events. Which deployment option meets these requirements while minimizing administrative burden?

Create a new stack within OpsWorks, add the appropriate layers to the stack, and deploy the application.

You have an application running on an EC2 instance which will allow users to download files from a private S3 bucket using a pre-signed URL. Before generating the URL, the application should verify the existence of the file in S3. How should the application use AWS credentials to access the S3 bucket securely?

Create an IAM role for EC2 that allows list access to objects in the S3 bucket. Launch the instance with the role, and retrieve the role's credentials from the EC2 instance metadata.

An enterprise wants to use a third-party SaaS application. The SaaS application needs to have access to issue several API commands to discover Amazon EC2 resources running within the enterprise's account. The enterprise has internal security policies that require that any outside access to their environment must conform to the principles of least privilege, and there must be controls in place to ensure that the credentials used by the SaaS vendor cannot be used by any other third party. Which of the following would meet all of these conditions? [PROFESSIONAL]

Create an IAM role for cross-account access that allows the SaaS provider's account to assume the role, and assign it a policy that allows only the actions required by the SaaS application.
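
A sketch of how the SaaS provider would use such a role, including the ExternalId that prevents the confused-deputy problem (the ARN and ExternalId values are hypothetical):

```python
import boto3

# Run by the SaaS provider with its own account's credentials.
sts = boto3.client("sts")

assumed = sts.assume_role(
    # Role created in the enterprise account; ARN is a placeholder.
    RoleArn="arn:aws:iam::111122223333:role/saas-discovery-role",
    RoleSessionName="saas-discovery",
    # The role's trust policy requires this ExternalId, so credentials
    # issued for this customer cannot be replayed for another customer.
    ExternalId="customer-unique-id-42",
)

creds = assumed["Credentials"]
ec2 = boto3.client(
    "ec2",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
# Only the actions allowed by the role's policy will succeed.
print(ec2.describe_instances())
```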

An administrator is using Amazon CloudFormation to deploy a three-tier web application that consists of a web tier and an application tier that will utilize Amazon DynamoDB for storage. When creating the CloudFormation template, which of the following would allow the application instance access to the DynamoDB tables without exposing API credentials? [PROFESSIONAL]

Create an Identity and Access Management Role that has the required permissions to read and write from the required DynamoDB table and reference the Role in the instance profile property of the application instance.

A web startup runs its very successful social news application on Amazon EC2 with an Elastic Load Balancer, an Auto Scaling group of Java/Tomcat application servers, and DynamoDB as the data store. The main web application best runs on m2.xlarge instances since it is highly memory-bound. Each new deployment requires semi-automated creation and testing of a new AMI for the application servers, which takes quite a while and is therefore only done once per week. Recently, a new chat feature has been implemented in node.js and waits to be integrated in the architecture. First tests show that the new component is CPU-bound. Because the company has some experience with using Chef, they decided to streamline the deployment process and use AWS OpsWorks as an application lifecycle tool to simplify management of the application and reduce the deployment cycles. What configuration in AWS OpsWorks is necessary to integrate the new chat module in the most cost-efficient and flexible way?

Create one AWS OpsWorks stack, create two AWS OpsWorks layers, and create one custom recipe. (Single environment stack; two layers for the Java and node.js applications using built-in recipes, plus a custom recipe only for the DynamoDB connectivity, as the built-in recipes cover the other configuration. Refer link)

To meet regulatory requirements, a pharmaceuticals company needs to archive data after a drug trial test is concluded. Each drug trial test may generate up to several thousands of files, with compressed file sizes ranging from 1 byte to 100MB. Once archived, data rarely needs to be restored, and on the rare occasion when restoration is needed, the company has 24 hours to restore specific files that match certain metadata. Searches must be possible by numeric file ID, drug name, participant names, date ranges, and other metadata. Which is the most cost-effective architectural approach that can meet the requirements?

First, compress and then concatenate all files for a completed drug trial test into a single Amazon Glacier archive. Store the associated byte ranges for the compressed files along with other search metadata in an Amazon RDS database with regular snapshotting. When restoring data, query the database for files that match the search criteria, and create restored files from the retrieved byte ranges.

Your company has been contracted to develop and operate a website that tracks NBA basketball statistics. Statistical data to derive reports like "best game-winning shots from the regular season" and more frequently built reports like "top shots of the game" need to be stored durably for repeated lookup. Leveraging social media techniques, NBA fans submit and vote on new report types from the existing data set, so the system needs to accommodate variability in data queries, and new static reports must be generated and posted daily. Initial research in the design phase indicates that there will be over 3 million report queries on game day by end users and other applications that use this application as a data source. It is expected that this system will gain in popularity over time and reach peaks of 10-15 million report queries of the system on game days. Select the answer that will allow your application to best meet these requirements while minimizing costs.

Generate reports from a multi-AZ MySQL Amazon RDS deployment and have an offline task put reports in Amazon Simple Storage Service (S3) and use CloudFront to cache the content. Use a TTL to expire objects daily. (Offline task with S3 storage and CloudFront cache)

Your company currently has a highly available web application running in production. The application's web front-end utilizes an Elastic Load Balancer and Auto Scaling across three Availability Zones. During peak load, your web servers operate at 90% utilization and leverage a combination of Heavy Utilization Reserved Instances for steady state load and On-Demand and Spot Instances for peak load. You are tasked with designing a cost-effective architecture that allows the application to recover quickly in the event that an Availability Zone is unavailable during peak load. Which option provides the most effective high availability architectural design for this application?

Increase Auto Scaling capacity and scaling thresholds to allow the web front-end to cost-effectively scale across all Availability Zones to lower aggregate utilization levels, which will allow an Availability Zone to fail during peak load without affecting the application's availability. (Ideal for HA to reduce and distribute load)

A utility company is building an application that stores data coming from more than 10,000 sensors. Each sensor has a unique ID and will send a datapoint (approximately 1KB) every 10 minutes throughout the day. Each datapoint contains the information coming from the sensor as well as a timestamp. This company would like to query information coming from a particular sensor for the past week very rapidly and would like to delete all data that is older than four weeks. Using Amazon DynamoDB for its scalability and rapidity, how would you implement this in the most cost-effective way?

One table for each week, with a hash key that is the sensor ID and a range key that is the timestamp. (A composite key with sensor ID and timestamp would help for faster queries)
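
A sketch of creating one such weekly table with boto3 (table name, attribute names, and throughput values are illustrative):

```python
import boto3

dynamodb = boto3.client("dynamodb")

# One table per week; name and throughput values are illustrative.
dynamodb.create_table(
    TableName="sensor-data-2015-w07",
    AttributeDefinitions=[
        {"AttributeName": "sensor_id", "AttributeType": "S"},
        {"AttributeName": "ts", "AttributeType": "N"},
    ],
    KeySchema=[
        {"AttributeName": "sensor_id", "KeyType": "HASH"},   # partition key
        {"AttributeName": "ts", "KeyType": "RANGE"},         # sort key
    ],
    ProvisionedThroughput={"ReadCapacityUnits": 100, "WriteCapacityUnits": 100},
)
# Reading one sensor's week is a single-partition Query, and purging data
# older than four weeks is just DeleteTable on the expired weekly table,
# which costs nothing compared to deleting items one by one.
```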

You have recently joined a startup company building sensors to measure street noise and air quality in urban areas. The company has been running a pilot deployment of around 100 sensors for 3 months. Each sensor uploads 1KB of sensor data every minute to a backend hosted on AWS. During the pilot, you measured a peak of 10 IOPS on the database, and you stored an average of 3GB of sensor data per month in the database. The current deployment consists of a load-balanced, auto scaled ingestion layer using EC2 instances and a PostgreSQL RDS database with 500GB standard storage. The pilot is considered a success and your CEO has managed to get the attention of some potential investors. The business plan requires a deployment of at least 100K sensors, which needs to be supported by the backend. You also need to store sensor data for at least two years to be able to compare year-over-year improvements. To secure funding, you have to make sure that the platform meets these requirements and leaves room for further scaling. Which setup will meet the requirements? [PROFESSIONAL]

Ingest data into a DynamoDB table and move old data to a Redshift cluster (Handle 10K IOPS ingestion and store data into Redshift for analysis)

Company A has hired you to assist with the migration of an interactive website that allows registered users to rate local restaurants. Updates to the ratings are displayed on the home page, and ratings are updated in real time. Although the website is not very popular today, the company anticipates that it will grow rapidly over the next few weeks. They want the site to be highly available. The current architecture consists of a single Windows Server 2008 R2 web server and a MySQL database running on Linux. Both reside inside an on-premises hypervisor. What would be the most efficient way to transfer the application to AWS, ensuring performance and high availability?

Launch two Windows Server 2008 R2 instances in us-west-1b and two in us-west-1a. Copy the web files from the on-premises web server to each Amazon EC2 web server, using Amazon S3 as the repository. Launch a Multi-AZ MySQL Amazon RDS instance in us-west-2a. Import the data into Amazon RDS from the latest MySQL backup. Create an elastic load balancer to front your web servers. Use Route 53 and create an alias record pointing to the elastic load balancer. (Although the RDS instance is in a different region, which will impact performance, this is the only option that works.)

You are moving an existing traditional system to AWS, and during the migration discover that there is a master server which is a single point of failure. Having examined the implementation of the master server, you realise there is not enough time during migration to re-engineer it to be highly available, though you do discover that it stores its state in a local MySQL database. In order to minimize downtime, you select RDS to replace the local database and configure the master to use it. What steps would best allow you to create a self-healing architecture?

Migrate the local database into a Multi-AZ RDS database. Place the master node into a Multi-AZ Auto Scaling group with a minimum of one and maximum of one, with health checks.

You need your API backed by DynamoDB to stay online during a total regional AWS failure. You can tolerate a couple minutes of lag or slowness during a large failure event, but the system should recover with normal operation after those few minutes. What is a good approach?

Set up DynamoDB cross-region replication in a master-standby configuration, with a single standby in another region. Create an Auto Scaling Group behind an ELB in each of the two regions DynamoDB is running in. Add a Route53 Latency DNS Record with DNS Failover, using the ELBs in the two regions as the resource records. (Use DynamoDB cross-regional replication version with two ELBs and ASGs with Route53 Failover and Latency DNS. Refer link)

Your firm has uploaded a large amount of aerial image data to S3. In the past, in your on-premises environment, you used a dedicated group of servers to batch process this data and used RabbitMQ, an open source messaging system, to get job information to the servers. Once processed, the data would go to tape and be shipped offsite. Your manager told you to stay with the current design, and leverage AWS archival storage and messaging services to minimize cost. Which is correct? [PROFESSIONAL]

Set up Auto Scaled workers triggered by queue depth that use Spot instances to process messages in SQS. Once data is processed, change the storage class of the S3 objects to Glacier.
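
A sketch of wiring queue depth to Auto Scaling with boto3 (group, queue, and threshold values are hypothetical):

```python
import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

# Group, queue, and threshold values are hypothetical.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName="image-workers",
    PolicyName="scale-out-on-backlog",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=2,
)

# Alarm on queue depth; when the backlog exceeds the threshold, the
# scaling policy above adds workers.
cloudwatch.put_metric_alarm(
    AlarmName="sqs-backlog-high",
    Namespace="AWS/SQS",
    MetricName="ApproximateNumberOfMessagesVisible",
    Dimensions=[{"Name": "QueueName", "Value": "image-jobs"}],
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1000,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)
```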

You are designing a file-sharing service. This service will have millions of files in it. Revenue for the service will come from fees based on how much storage a user is using. You also want to store metadata on each file, such as title, description and whether the object is public or private. How do you achieve all of these goals in a way that is economical and can scale to millions of users? [PROFESSIONAL]

Store all files in Amazon S3. Create Amazon DynamoDB tables for the corresponding key-value pairs on the associated metadata, when objects are uploaded.
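
A sketch of the upload path under this design (bucket, table, and attribute names are hypothetical):

```python
import boto3

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("file-metadata")   # hypothetical

def upload_file(user_id, file_id, body, title, description, public):
    # The bytes go to S3; S3 usage drives the storage fee.
    s3.put_object(Bucket="file-share-content", Key=f"{user_id}/{file_id}", Body=body)
    # The metadata goes to DynamoDB, where it can be queried per user.
    table.put_item(Item={
        "user_id": user_id,        # hash key
        "file_id": file_id,        # range key
        "title": title,
        "description": description,
        "public": public,
        "size_bytes": len(body),   # input to the billing calculation
    })
```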

You are an architect for a news-sharing mobile application. Anywhere in the world, your users can see local news on topics they choose. They can post pictures and videos from inside the application. Since the application is being used on a mobile phone, connection stability is required for uploading content, and delivery should be quick. Content is accessed a lot in the first minutes after it has been posted, but is quickly replaced by new content before disappearing. The local nature of the news means that 90 percent of the uploaded content is then read locally (less than a hundred kilometers from where it was posted). What solution will optimize the user experience when users upload and view content (by minimizing page load times and minimizing upload times)?

Use an Amazon CloudFront distribution for uploading the content to a central Amazon Simple Storage Service (S3) bucket and for content delivery.

Your department creates regular analytics reports from your company's log files. All log data is collected in Amazon S3 and processed by daily Amazon Elastic MapReduce (EMR) jobs that generate daily PDF reports and aggregated tables in CSV format for an Amazon Redshift data warehouse. Your CFO requests that you optimize the cost structure for this system. Which of the following alternatives will lower costs without compromising average performance of the system or data integrity for the raw data? [PROFESSIONAL]

Use Reduced Redundancy Storage (RRS) for all data in S3. Use a combination of Spot Instances and Reserved Instances for Amazon EMR jobs. Use Reserved Instances for Amazon Redshift. (The combination of Spot and Reserved Instances will guarantee performance and help reduce cost. Also, RRS would reduce cost while still guaranteeing data integrity, which is different from data durability.)

