AWS Architect Professional - Test 2 from Basil_Udohdoh 80 real questions
An organization is planning to use AWS for their production rollout. The organization wants to implement automation for deployment such that it will automatically create a LAMP stack, download the latest PHP installable from S3, and set up the ELB. Which of the below mentioned AWS services meets the requirement for making an orderly deployment of the software? A. AWS Elastic Beanstalk B. AWS CloudFront C. AWS CloudFormation D. AWS DevOps
A. AWS Elastic Beanstalk Answer - A Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services. You simply upload your code and Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, and auto scaling to application health monitoring. At the same time, you retain full control over the AWS resources used in the application and can access the underlying resources at any time. Hence, A is the CORRECT answer. For more information on launching a LAMP stack with Elastic Beanstalk: https://aws.amazon.com/getting-started/projects/launch-lamp-web-app/
Which of the following media servers can be used for live media streaming with CloudFront? Choose 3 options from the below: A. Adobe Media Server B. IIS Media services C. Atlassian Media Servers D. Wowza streaming engine
A. Adobe Media Server B. IIS Media services D. Wowza streaming engine Answer: A, B, and D You can use the following media servers for live streaming via CloudFront: Adobe Flash Media Server, Windows IIS Media Services, and Wowza Streaming Engine. Hence, options A, B, and D are CORRECT. For more information, please refer to the links below: https://aws.amazon.com/blogs/aws/live-streaming-with-amazon-cloudfront-and-adobe-flash-media-server/ https://aws.amazon.com/blogs/aws/smooth-streaming-with-cloudfront-and-windows-media-services/ https://aws.amazon.com/cloudfront/streaming/
A web design company currently runs several FTP servers that their 250 customers use to upload and download large graphic files. They wish to move this system to AWS to make it more scalable, but they wish to maintain customer privacy and keep costs to a minimum. What AWS architecture would you recommend? A. Ask their customers to use an S3 client instead of an FTP client. Create a single S3 bucket. Create an IAM user for each customer. Put the IAM users in a group that has an IAM policy that permits access to subdirectories within the bucket via use of the 'username' policy variable. B. Create a single S3 bucket with Reduced Redundancy Storage turned on and ask their customers to use an S3 client instead of an FTP client. Create a bucket for each customer with a Bucket Policy that permits access only to that one customer. C. Create an auto-scaling group of FTP servers with a scaling policy to automatically scale-in when minimum network traffic on the auto-scaling group is below a given threshold. Load a central list of ftp users from S3 as part of the user data startup script on each Instance. D. Create a single S3 bucket with Requester Pays turned on and ask their customers to use an S3 client instead of an FTP client. Create a bucket for each customer with a bucket policy that permits access only to that one customer.
A. Ask their customers to use an S3 client instead of an FTP client. Create a single S3 bucket. Create an IAM user for each customer. Put the IAM users in a group that has an IAM policy that permits access to subdirectories within the bucket via use of the 'username' policy variable. Answer - A The main considerations in this scenario are: (1) the architecture should be scalable, (2) customer privacy should be maintained, and (3) the solution should be cost-effective. Option A is CORRECT because (a) it grants permissions via an IAM policy in which each user has access only to the subdirectory named after their username, and (b) S3 is a cost-effective and highly scalable solution. Note: Even though creating one IAM user per customer is not ideal, given the other choices this is the best option. Option B is incorrect because even though it uses Reduced Redundancy Storage (RRS), a lower-cost S3 storage class, creating one bucket per customer is not a scalable architecture. Currently the number of customers is 250, but it can grow in the future, and if it does, the account may hit the limit on the number of buckets. Option C is incorrect because an Auto Scaling group of FTP servers is a costly solution compared to a single S3 bucket with appropriate IAM policies. Option D is incorrect because (a) creating one bucket per customer is not a scalable architecture for the same reason as option B, and (b) you configure buckets as Requester Pays when you want to share the data but not incur the charges associated with others accessing it. This would keep costs down for the company but would increase costs for the customers who access the buckets.
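As a rough sketch of how the 'username' policy variable in option A can be applied (the bucket name, group name, and prefix layout below are assumptions for illustration, not part of the question), a group policy might look like this with boto3:

```python
import json
import boto3

iam = boto3.client("iam")

# Hypothetical bucket name used only for illustration.
BUCKET = "design-co-customer-files"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Let each user list only their own "subdirectory" (prefix).
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": f"arn:aws:s3:::{BUCKET}",
            "Condition": {"StringLike": {"s3:prefix": ["${aws:username}/*"]}},
        },
        {
            # Allow uploads/downloads only under the user's own prefix.
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": f"arn:aws:s3:::{BUCKET}/${{aws:username}}/*",
        },
    ],
}

iam.put_group_policy(
    GroupName="customers",
    PolicyName="per-user-s3-prefix",
    PolicyDocument=json.dumps(policy),
)
```

With a policy like this attached to the group, a user named alice can only list and access objects under the alice/ prefix in the shared bucket.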
Your customer wishes to deploy an enterprise application on AWS which will consist of several web servers, several application servers and a small (50GB) Oracle database. The information is stored both in the database and in the file systems of the various servers. The backup system must support database recovery, whole server and whole disk restores, and individual file restores with a recovery time of no more than two hours. They have chosen to use RDS Oracle as the database. Which backup architecture will meet these requirements? A. Backup RDS using automated daily DB backups. Backup the EC2 instances using AMIs, and supplement with file-level backup to S3 using traditional enterprise backup software to provide file level restore. B. Backup RDS using a Multi-AZ Deployment. Backup the EC2 instances using AMIs, and supplement by copying file system data to S3 to provide file level restore. C. Backup RDS using automated daily DB backups. Backup the EC2 instances using EBS snapshots and supplement with file-level backups to Amazon Glacier using traditional enterprise backup software to provide file level restore. D. Backup RDS database to S3 using Oracle RMAN. Backup the EC2 instances using AMIs, and supplement with EBS snapshots for individual volume restore.
A. Backup RDS using automated daily DB backups. Backup the EC2 instances using AMIs, and supplement with file-level backup to S3 using traditional enterprise backup software to provide file level restore. Answer - A Option A is CORRECT because (a) it uses automated daily backups, from which the recovery can be made quickly, and (b) the file-level backup to S3 ensures that recovery can be done at the individual file level, which satisfies the requirements. Option B is incorrect because Multi-AZ deployment is for high availability and disaster recovery, not for data backup. Option C is incorrect because Glacier is an archival solution and will most certainly not meet the RTO of 2 hours. Option D is incorrect because Amazon RDS does not use RMAN for backups. See the link given in the "More information" section. For more information on this topic, please visit the links below: http://www.boyter.org/wp-content/uploads/2014/12/Backup-And-Recovery-ApproachesUsing-Aws.pdf https://blogs.oracle.com/pshuff/amazon-rds
While managing your instances in the current OpsWorks stack, you suddenly started getting the following error: Aws::CharlieInstanceService::Errors::UnrecognizedClientException - The security token included in the request is invalid. Which two of the below checks can be done to rectify this error? A. Check the IAM role which was attached to the instance. B. Check if EIPs have been added to the EC2 instances manually. C. Check if the stack is configured properly. D. Check if the OpsWorks client is configured properly.
A. Check the IAM role which was attached to the instance. B. Check if EIPs have been added to the EC2 instances manually. Answer: A and B. This can occur if a resource outside AWS OpsWorks on which the instance depends was edited or deleted. The following are examples of resource changes that can break communications with an instance. An IAM user or role associated with the instance has been deleted accidentally, outside of AWS OpsWorks Stacks; this causes a communication failure between the AWS OpsWorks agent installed on the instance and the AWS OpsWorks Stacks service, because the IAM user or role associated with an instance is required throughout the life of the instance. Editing volume or storage configurations while an instance is offline can make an instance unmanageable. Manually adding Elastic IP addresses to EC2 instances, or adding instances to an attached Elastic Load Balancing load balancer outside of AWS OpsWorks, can also cause problems: AWS OpsWorks reconfigures an assigned Elastic Load Balancing load balancer each time an instance enters or leaves the online state, and it only considers instances it knows about to be valid members; instances that are added outside of AWS OpsWorks, or by some other process, are removed. For more information on troubleshooting OpsWorks, please visit the link: http://docs.aws.amazon.com/opsworks/latest/userguide/common-issues-troubleshoot.html
A company has a Direct Connect connection established between their on-premises location and AWS. The applications hosted on the on-premises location are experiencing high latency when using S3. What could be done to ensure that the latency to S3 is reduced? A. Configure a public virtual interface to connect to a public S3 endpoint resource. B. Establish a VPN connection from the VPC to the public S3 endpoint. C. Configure a private virtual interface to connect to the public S3 endpoint via the Direct Connect connection. D. Add a BGP route on the on-premises router; this will route S3-related traffic to the public S3 endpoint in the dedicated AWS region.
A. Configure a public virtual interface to connect to a public S3 endpoint resource. Answer - A You can create a public virtual interface to connect to public AWS resources, or a private virtual interface to connect to your VPC. You can configure multiple virtual interfaces on a single AWS Direct Connect connection, and you'll need one private virtual interface for each VPC you connect to. Each virtual interface needs a VLAN ID, interface IP address, ASN, and BGP key. Option A is CORRECT because, as mentioned above, it creates a public virtual interface to connect to the S3 endpoint. Option B is incorrect because to connect to the S3 endpoint, a public virtual interface needs to be created, not a VPN. Option C is incorrect because to connect to the S3 endpoint, a public virtual interface needs to be created, not a private one. Option D is incorrect because this setup will not help in connecting to the S3 endpoint. For more information on virtual interfaces, please visit the below URL: http://docs.aws.amazon.com/directconnect/latest/UserGuide/WorkingWithVirtualInterfaces.html
A company has 2 accounts - one is a development account and the other is the production account. There are 20 people on the development account who now need various levels of access to the production account. 10 of them need read-only access to all resources on the production account, 5 of them need read/write access to EC2 resources, and the remaining 5 only need read-only access to S3 buckets. Which of the following options would be the best way, both practically and security-wise, to accomplish this task? Choose the correct answer from the below options: A. Create 3 roles in the production account with a different policy for each of the access levels needed. Add permissions to each IAM User on the developer account based on the type of access needed. B. Create 3 new users on the production account with the various levels of permissions needed. Give each of the 20 users the login for whichever one of the 3 users they need depending on the level of access required. C. Create encryption keys for each of the resources that need access and provide those keys to each user depending on the access required. D. Copy the 20 users' IAM accounts from the development account to the production account. Then change the access levels for each user on the production account.
A. Create 3 roles in the production account with a different policy for each of the access levels needed. Add permissions to each IAM User on the developer account based on the type of access needed. Answer - A Option A is CORRECT because it creates 3 roles in the production account according to the access levels needed and grants the IAM users in the development account permission to assume those roles accordingly. Option B is incorrect because you should be creating IAM roles in the production account and the development users should assume those roles; creating 3 separate, shared users in the production account is poor practice. Option C is incorrect because creating encryption keys is totally unnecessary and will not work in this scenario. Option D is incorrect because creating the IAM user accounts in the production account is unnecessary; you should be creating IAM roles instead. For more information on "Delegating Access Across AWS Accounts Using IAM Roles", please refer to the below link: https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_cross-account-with-roles.html
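To illustrate the cross-account flow behind option A, here is a minimal boto3 sketch of a development-account user assuming a hypothetical read-only role in the production account (the account ID and role name are placeholders):

```python
import boto3

# Placeholder production account ID and role name for illustration only.
PROD_ROLE_ARN = "arn:aws:iam::111122223333:role/ProdReadOnly"

sts = boto3.client("sts")

# Called with the developer-account user's credentials; returns temporary
# credentials for the production role (the role's trust policy must allow it).
creds = sts.assume_role(
    RoleArn=PROD_ROLE_ARN,
    RoleSessionName="dev-user-readonly-session",
)["Credentials"]

# Use the temporary credentials to call APIs in the production account.
ec2 = boto3.client(
    "ec2",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(ec2.describe_instances()["Reservations"])
```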
What are some of the best practices when managing permissions for OpsWorks? Choose 3 answers from the below options: A. Create an IAM user for each of your users and attach policies that provide appropriate access. B. Use the root account for managing the resources attached to OpsWorks. C. Application developers need to access only the stacks that run their applications. D. Users should only have access permission to the resources they need as part of the OpsWorks stack.
A. Create an IAM user for each of your users and attach policies that provide appropriate access. C. Application developers need to access only the stacks that run their applications. D. Users should only have access permission to the resources they need as part of the OpsWorks stack. Answer - A, C, and D Option A is CORRECT because instead of using root credentials, it is a better practice to create an IAM user with appropriate policies attached to it. Option B is incorrect because using the root account credentials is not a secure or recommended practice. Option C is CORRECT because developers should not have access to stacks pertaining to any applications other than the ones they are working on. Option D is CORRECT because users should have access to only those resources that pertain to the application they are working on. For more information on OpsWorks best practices, please visit the link - http://docs.aws.amazon.com/opsworks/latest/userguide/best-practices-permissions.html
Your IT security compliance officer has tasked you to develop a reliable and durable logging solution to track changes made to your AWS resources. The solution must ensure the integrity and confidentiality of your log data. Which of these solutions would you recommend? A. Create a new CloudTrail trail with one new S3 bucket to store the logs and with the global services option selected. Use IAM roles, S3 bucket policies, and Multi-Factor Authentication (MFA) Delete on the S3 bucket that stores your logs. B. Create a new CloudTrail trail with one new S3 bucket to store the logs. Configure SNS to send log file delivery notifications to your management system. Use IAM roles and S3 bucket policies on the S3 bucket that stores your logs. C. Create a new CloudTrail trail with an existing S3 bucket to store the logs and with the global services option selected. Use S3 ACLs and Multi-Factor Authentication (MFA) Delete on the S3 bucket that stores your logs. D. Create three new CloudTrail trails with three new S3 buckets to store the logs: one for the AWS Management Console, one for AWS SDKs, and one for command line tools. Use IAM roles and S3 bucket policies on the S3 buckets that store your logs.
A. Create a new CloudTrail trail with one new S3 bucket to store the logs and with the global services option selected. Use IAM roles, S3 bucket policies, and Multi-Factor Authentication (MFA) Delete on the S3 bucket that stores your logs. Answer - A For scenarios where the application needs to track the changes made by any AWS service, resource, or API, always think of AWS CloudTrail. AWS Identity and Access Management (IAM) is integrated with AWS CloudTrail, a service that logs AWS events made by or on behalf of your AWS account. CloudTrail logs authenticated AWS API calls and AWS sign-in events, and collects this event information in files that are delivered to Amazon S3 buckets. The most important points in this question are (a) an S3 bucket with the global services option enabled, (b) data integrity, and (c) confidentiality. Option A is CORRECT because (a) it uses AWS CloudTrail with the global services option enabled, (b) a single new S3 bucket and IAM roles preserve confidentiality, and (c) MFA Delete on the S3 bucket maintains data integrity. Option B is incorrect because (a) although it uses AWS CloudTrail, the global services option is not enabled, and (b) SNS notifications would be an overhead in this situation. Option C is incorrect because (a) an existing S3 bucket may already be accessible to other users, so confidentiality is not maintained, and (b) it is not using IAM roles. Option D is incorrect because (a) although it uses AWS CloudTrail, the global services option is not enabled, and (b) three S3 buckets are not needed. For more information on CloudTrail, please visit the below URLs: https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-concepts.html#cloudtrail-concepts-global-service-events http://docs.aws.amazon.com/IAM/latest/UserGuide/cloudtrail-integration.html
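As a hedged illustration of option A's trail configuration (the trail and bucket names are made up, and the bucket policy plus MFA Delete still have to be configured separately), the boto3 calls would look roughly like this:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Bucket name is a placeholder; the bucket needs a policy that allows
# CloudTrail to write to it, plus MFA Delete as described in the answer.
cloudtrail.create_trail(
    Name="account-audit-trail",
    S3BucketName="example-cloudtrail-logs-bucket",
    IncludeGlobalServiceEvents=True,   # the "global services option"
    IsMultiRegionTrail=True,
)

# Trails are created in a stopped state; start delivering log files.
cloudtrail.start_logging(Name="account-audit-trail")
```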
Your company currently has a 2-tier web application running in an on-premises data center. You have experienced several infrastructure failures in the past two months resulting in significant financial losses. Your CIO strongly agrees with moving the application to AWS. While working on achieving buy-in from the other company executives, he asks you to develop a disaster recovery plan to help improve business continuity in the short term. He specifies a target Recovery Time Objective (RTO) of 4 hours and a Recovery Point Objective (RPO) of 1 hour or less. He also asks you to implement the solution within 2 weeks. Your database is 200GB in size and you have a 20Mbps Internet connection. How would you do this while minimizing costs? A. Create an EBS backed private AMI which includes a fresh install of your application. Develop a CloudFormation template which includes your AMI and the required EC2, AutoScaling, and ELB resources to support deploying the application across Multiple-Availability-Zones. Asynchronously replicate the transactions from your on-premises database to a database instance in AWS across a secure VPN connection. B. Deploy your application on EC2 instances within an Auto Scaling group across multiple availability zones. Asynchronously replicate the transactions from your on-premises database to a database instance in AWS across a secure VPN connection. C. Create an EBS backed private AMI which includes a fresh install of your application. Setup a script in your data center to backup the local database every 1 hour and to encrypt and copy the resulting file to an S3 bucket using multi-part upload. D. Install your application on a compute-optimized EC2 instance capable of supporting the application's average load. Synchronously replicate the transactions from your on-premises database to a database instance in AWS across a secure Direct Connect connection.
A. Create an EBS backed private AMI which includes a fresh install of your application. Develop a CloudFormation template which includes your AMI and the required EC2, AutoScaling, and ELB resources to support deploying the application across Multiple-Availability-Zones. Asynchronously replicate the transactions from your on-premises database to a database instance in AWS across a secure VPN connection. Answer - A Option A is CORRECT because (a) with AMIs, the newly created EC2 instances will be ready with the pre-installed application, thus reducing the RTO, (b) with CloudFormation, the entire stack can be automatically provisioned, and (c) since there are no additional services used, the cost stays low. Option B is incorrect because although this could work, (a) keeping EC2 instances running for this scenario is expensive, and (b) in case of disaster the recovery will potentially be slower, since new EC2 instances would need to be manually updated with the application software and patches, especially since it does not use AMIs. Option C is incorrect because it has performance issues: (a) backing up a local 200GB database over a 20Mbps connection every hour will be very slow, and (b) even with incremental backups, recovering from the incremental backup takes time and might not satisfy the given RTO. Option D is incorrect because (a) the EC2 instance is a single point of failure, which needs to be made highly available via Auto Scaling, (b) it can only handle the average load of the application, so it may fail under peak load, and (c) AWS Direct Connect is an expensive solution compared to the setup of option A.
An organization has created multiple components of a single application. Currently, all the components are hosted on a single EC2 instance. Due to security reasons, the organization wants to implement 2 separate SSL certificates for the separate modules. How can the organization achieve this with a single instance? A. Create an EC2 instance which has multiple network interfaces with multiple elastic IP addresses. B. Create an EC2 instance which has both an ACL and the security group attached to it and have separate rules for each IP address. C. Create an EC2 instance which has multiple subnets attached to it and each will have a separate IP address. D. Create an EC2 instance with a NAT address.
A. Create an EC2 instance which has multiple network interfaces with multiple elastic IP addresses. Answer - A It can be useful to assign multiple IP addresses to an instance in your VPC to do the following: (1) Host multiple websites on a single server by using multiple SSL certificates on a single server and associating each certificate with a specific IP address. (2) Operate network appliances, such as firewalls or load balancers, that have multiple IP addresses for each network interface. (3) Redirect internal traffic to a standby instance in case your instance fails, by reassigning the secondary IP address to the standby instance. Option A is CORRECT because, as mentioned above, if you have multiple elastic network interfaces (ENIs) attached to the EC2 instance, each interface can have its own IP address, and each IP address can serve a module with a separate SSL certificate. Option B is incorrect because having separate rules in a security group and a network ACL does not mean that the instance supports multiple SSL certificates. Option C is incorrect because an EC2 instance cannot have multiple subnets attached to it. Option D is incorrect because a NAT address is not related to supporting multiple SSL certificates. For more information on Multiple IP Addresses, please refer to the link below: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/MultipleIP.html
Which of the following items are required to allow an application deployed on an EC2 instance to write data to a DynamoDB table? Assume that no security keys are allowed to be stored on the EC2 instance. Choose 3 options from the below: A. Create an IAM Role that allows write access to the DynamoDB table B. Add an IAM Role to a running EC2 instance C. Create an IAM User that allows write access to the DynamoDB table D. Add an IAM User to a running EC2 instance E. Launch an EC2 Instance with the IAM Role included in the launch configuration
A. Create an IAM Role that allows write access to the DynamoDB table B. Add an IAM Role to a running EC2 instance E. Launch an EC2 Instance with the IAM Role included in the launch configuration Answer - A, B, and E To enable an AWS service to access another one, the most important requirement is to create an appropriate IAM role and attach that role to the service that needs the access. Option A is CORRECT because it creates the appropriate IAM role for accessing the DynamoDB table. Option B is CORRECT because you can attach the role to a running EC2 instance that needs the access. Options C and D are incorrect because an IAM role is the preferred and more secure approach than an IAM user, whose access keys would have to be stored on the instance. Option E is CORRECT because it launches the EC2 instance with the required role attached. See the steps below: 1. Create the IAM role with appropriate permissions 2. Launch an EC2 instance with this role, or 3. Attach the role to a running EC2 instance Reference links: http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html https://aws.amazon.com/about-aws/whats-new/2017/02/new-attach-an-iam-role-to-your-existing-amazon-ec2-instance/
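A minimal boto3 sketch of steps 2 and 3 above, using placeholder AMI, instance, and instance profile names (the instance profile is assumed to wrap the role that grants write access to the DynamoDB table):

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder instance profile that wraps the IAM role with DynamoDB write access.
PROFILE = {"Name": "dynamodb-writer-profile"}

# Launch a new instance with the role attached at launch time...
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    IamInstanceProfile=PROFILE,
)

# ...or attach the same role to an instance that is already running.
ec2.associate_iam_instance_profile(
    IamInstanceProfile=PROFILE,
    InstanceId="i-0123456789abcdef0",
)
```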
Which of the following are best practices that need to be followed when updating OpsWorks stack instances with the latest security patches? Choose 2 correct options from the below: A. Create and start new instances to replace your current online instances. B. Run the Update Dependencies stack command for Linux-based instances. C. Delete the entire stack and create a new one. D. Use CloudFormation to deploy the security patches.
A. Create and start new instances to replace your current online instances. B. Run the Update Dependencies stack command for Linux-based instances. Answers: A and B The best practices for updating your OpsWorks stack instances with the latest security patches are: Create and start new instances to replace your current online instances, then delete the current instances; the new instances will have the latest set of security patches installed during setup. On Linux-based instances in Chef 11.10 or older stacks, run the Update Dependencies stack command, which installs the current set of security patches and other updates on the specified instances. For more information on OpsWorks Linux security updates best practices, please visit the link - https://docs.aws.amazon.com/opsworks/latest/userguide/workingsecurity-updates.html
When you create an encrypted EBS volume and attach it to a supported instance type, which of the following data types are encrypted? Choose 3 options from the below: A. Data at rest inside the volume B. All data copied from the EBS volume to S3 C. All data moving between the volume and the instance D. All snapshots created from the volume
A. Data at rest inside the volume C. All data moving between the volume and the instance D. All snapshots created from the volume Answer - A, C, and D Amazon EBS encryption offers a simple encryption solution for your EBS volumes without the need to build, maintain, and secure your own key management infrastructure. When you create an encrypted EBS volume and attach it to a supported instance type, the following types of data are encrypted: (i) Data at rest inside the volume (ii) All data moving between the volume and the instance (iii) All snapshots created from the volume (iv) All volumes created from those snapshots Based on this, options A, C, and D are CORRECT. Option B is incorrect since data that is copied from the volume to S3 is not encrypted by EBS encryption. For more information on this, please visit the link below: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html
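For illustration, creating such an encrypted volume with boto3 could look like the sketch below (the Availability Zone, size, and volume type are arbitrary placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Encrypted=True means data at rest on the volume, data moving between the
# volume and the instance, and snapshots created from the volume are encrypted.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,
    VolumeType="gp3",
    Encrypted=True,
)

# Snapshots taken from an encrypted volume are encrypted automatically.
snapshot = ec2.create_snapshot(
    VolumeId=volume["VolumeId"],
    Description="snapshot of encrypted volume",
)
```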
Your company's on-premises content management system has the following architecture: Application Tier - Java code on a JBoss application server Database Tier - Oracle database regularly backed up to Amazon Simple Storage Service (S3) using the Oracle RMAN backup utility Static Content - stored on a 512GB gateway stored Storage Gateway volume attached to the application server via the iSCSI interface Which AWS based disaster recovery strategy will give you the best RTO? A. Deploy the Oracle database and the JBoss app server on EC2. Restore the RMAN Oracle backups from Amazon S3. Generate an EBS volume of static content from the Storage Gateway and attach it to the JBoss EC2 server. B. Deploy the Oracle database on RDS. Deploy the JBoss app server on EC2. Restore the RMAN Oracle backups from Amazon Glacier. Generate an EBS volume of static content from the Storage Gateway and attach it to the JBoss EC2 server. C. Deploy the Oracle database and the JBoss app server on EC2. Restore the RMAN Oracle backups from Amazon S3. Restore the static content by attaching an AWS Storage Gateway running on Amazon EC2 as an iSCSI volume to the JBoss EC2 server. D. Deploy the Oracle database and the JBoss app server on EC2. Restore the RMAN Oracle backups from Amazon S3. Restore the static content from an AWS Storage Gateway-VTL running on Amazon EC2
A. Deploy the Oracle database and the JBoss app server on EC2. Restore the RMAN Oracle backups from Amazon S3. Generate an EBS volume of static content from the Storage Gateway and attach it to the JBoss EC2 server. Answer - A Option A is CORRECT because (i) it deploys the Oracle database on an EC2 instance by restoring the backups from S3, which is quick, and (ii) it generates an EBS volume of the static content from the Storage Gateway. Due to these points, option A gives the best RTO compared to the remaining options. Option B is incorrect because restoring the backups from Amazon Glacier will be slow and will not give the best RTO. Option C is incorrect because there is no need to attach a Storage Gateway running on EC2 as an iSCSI volume; you can more easily and quickly generate an EBS volume from the Storage Gateway snapshots and attach it directly, which gives a better recovery time. Option D is incorrect because restoring the content from a Virtual Tape Library will not give the best RTO.
A web company is looking to implement an external payment service into their highly available application deployed in a VPC. Their application EC2 instances are behind a public facing ELB. Auto scaling is used to add additional instances as traffic increases under normal load. The application runs 2 instances in the Auto Scaling group but at peak it can scale 3x in size. The application instances need to communicate with the payment service over the Internet which requires whitelisting of all public IP addresses used to communicate with it. A maximum of 4 whitelisting IP addresses are allowed at a time and can be added through an API. How should they architect their solution? A. Route payment requests through two NAT instances setup for High Availability and whitelist the Elastic IP addresses attached to the NAT instances. B. Whitelist the VPC Internet Gateway Public IP and route payment requests through the Internet Gateway. C. Whitelist the ELB IP addresses and route payment requests from the Application servers through the ELB. D. Automatically assign public IP addresses to the application instances in the Auto Scaling group and run a script on boot that adds each instances public IP address to the payment validation whitelist API.
A. Route payment requests through two NAT instances setup for High Availability and whitelist the Elastic IP addresses attached to the NAT instances. Answer - A Option A is CORRECT because (a) requests originating from the instances in the subnets are routed through the NAT instances, so the requests carry the NAT instances' Elastic IP addresses (which are whitelisted), and (b) two NAT instances provide high availability. Option B is incorrect because (a) an Internet Gateway (IGW) only routes traffic; it does not give the outbound requests a single whitelistable IP address, and (b) EC2 instances with public IP addresses in a public subnet are routed through the gateway but keep their own IP addresses, so they would not be whitelisted. Option C is incorrect because outbound traffic from the application servers cannot be routed through an ELB. Option D is incorrect because the Auto Scaling group can have 6 servers during peak load, and the payment service only allows 4 IP addresses to be whitelisted at a time.
Which of the following AWS services can be used to define alarms to trigger on a certain activity in the AWS Data pipeline? A. SNS B. SQS C. SES D. CodeDeploy
A. SNS Answer - A Amazon Simple Notification Service (Amazon SNS) is a fast, flexible, fully managed push notification service that lets you send individual messages or fan out messages to large numbers of recipients. Amazon SNS makes it simple and cost-effective to send push notifications to mobile device users and email recipients, or even to send messages to other distributed services. In AWS Data Pipeline, you can configure SNS alarms to fire when a pipeline activity succeeds, fails, or starts late. For more information on SNS, please refer to the below link: https://aws.amazon.com/sns/
You have two Elastic Compute Cloud (EC2) instances inside a Virtual Private Cloud (VPC) in the same Availability Zone (AZ) but in different subnets. One instance is running a database and the other instance an application that will interface with the database. You want to confirm that they can talk to each other for your application to work properly. Which two things do we need to confirm in the VPC settings so that these EC2 instances can communicate inside the VPC? Choose 2 correct options from the below: A. Security groups are set to allow the application host to talk to the database on the right port/protocol. B. Both instances are the same instance class and using the same key-pair. C. That the default route is set to a NAT instance or internet Gateway (IGW) for them to communicate. D. A network ACL that allows communication between the two subnets.
A. Security groups are set to allow the application host to talk to the database on the right port/protocol. D. A network ACL that allows communication between the two subnets. Answer - A and D In order to have the instances communicate with each other, you need to properly configure both the security groups and the network access control lists (NACLs). For the exam, remember that a security group operates at the instance level, whereas a NACL operates at the subnet level. Option A is CORRECT because the security groups must be configured to allow the application server to communicate with the database server on the database port and protocol. Option B is incorrect because it is not necessary for the two instances to be of the same instance class or to use the same key pair. Option C is incorrect because configuring a NAT instance or NAT gateway will not enable the two servers to communicate with each other; NAT instances/NAT gateways are used to enable instances in private subnets to reach the internet. Option D is CORRECT because the two servers are in two separate subnets, so the network ACLs on both subnets must be configured to allow the traffic between them. For more information on VPC and Subnets, please visit the below URL: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html
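As an illustrative sketch of option A (the security group IDs and the MySQL port 3306 are assumptions), the database security group can reference the application security group as the traffic source instead of an IP range:

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder security group IDs: one for the application instance,
# one for the database instance.
APP_SG = "sg-0aaa1111bbbb22222"
DB_SG = "sg-0ccc3333dddd44444"

# Allow the application host to reach the database on its port by
# referencing the application security group as the source.
ec2.authorize_security_group_ingress(
    GroupId=DB_SG,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": APP_SG}],
    }],
)
```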
You currently have developers who have access to your production AWS account. There is a concern raised that the developers could potentially delete the production-based EC2 resources. Which of the below options could help alleviate this concern? Choose 2 options from the below: A. Tag the production instances with a production-identifying tag and add resource-level permissions to the developers with an explicit deny on the terminate API call to instances with the production tag. B. Tag the instance with a production-identifying tag and modify the employees group to allow only start, stop, and reboot API calls and not the terminate instance call. C. Modify the IAM policy on the developers to require MFA before deleting EC2 instances and disable MFA access to the employee D. Modify the IAM policy on the developers to require MFA before deleting EC2 instances
A. Tag the production instances with a production-identifying tag and add resource-level permissions to the developers with an explicit deny on the terminate API call to instances with the production tag. B. Tag the instance with a production-identifying tag and modify the employees group to allow only start, stop, and reboot API calls and not the terminate instance call. Answer - A and B To stop users from manipulating AWS resources, you can either create the applicable (allow/deny) resource-level permissions and apply them to those users, or create an individual or group policy which restricts the actions on that resource and apply it to the individual user or the group. Option A is CORRECT because it (a) identifies the instances with a proper tag, and (b) creates a resource-level permission that explicitly denies those users the terminate action on instances carrying that tag. Option B is CORRECT because it (a) identifies the instances with a proper tag, and (b) creates a policy that allows only the start, stop, and reboot actions (omitting terminate) and applies it to the group containing the employees who should not be able to terminate instances. Options C and D are incorrect because MFA is an additional layer of security for logging into AWS and accessing resources; enabling or disabling MFA by itself cannot prevent the users from performing resource-level actions. More information on tags: Tags enable you to categorize your AWS resources in different ways, for example, by purpose, owner, or environment. This is useful when you have many resources of the same type; you can quickly identify a specific resource based on the tags you've assigned to it. Each tag consists of a key and an optional value, both of which you define. For more information on tagging AWS resources please refer to the below URL: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html
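A rough sketch of the explicit-deny policy from option A, assuming a hypothetical Environment=production tag key/value and a developers group (both are illustrative, not from the question):

```python
import json
import boto3

iam = boto3.client("iam")

# Explicit deny applies only to instances carrying the assumed production tag.
deny_terminate_prod = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "ec2:TerminateInstances",
        "Resource": "*",
        "Condition": {
            "StringEquals": {"ec2:ResourceTag/Environment": "production"}
        },
    }],
}

iam.put_group_policy(
    GroupName="developers",
    PolicyName="deny-terminate-production-instances",
    PolicyDocument=json.dumps(deny_terminate_prod),
)
```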
An international company has deployed a multi-tier web application that relies on DynamoDB in a single region. For regulatory reasons they need disaster recovery capability in a separate region with a Recovery Time Objective of 2 hours and a Recovery Point Objective of 24 hours. They should synchronize their data on a regular basis and be able to provision the web application rapidly using CloudFormation. The objective is to minimize changes to the existing web application, control the throughput of DynamoDB used for the synchronization of data, and synchronize only the modified elements. Which design would you choose to meet these requirements? A. Use AWS Data Pipeline to schedule a DynamoDB cross region copy once a day, create a "Last updated" attribute in your DynamoDB table that would represent the timestamp of the last update and use it as a filter. B. Use EMR and write a custom script to retrieve data from DynamoDB in the current region using a SCAN operation and push it to DynamoDB in the second region. C. Use AWS data pipeline to schedule an export of the DynamoDB table to S3 in the current region once a day then schedule another task immediately after it that will import data from S3 to DynamoDB in the other region. D. Send each item into an SQS queue in the second region; use an auto-scaling group behind the SQS queue to replay the write in the second region.
A. Use AWS Data Pipeline to schedule a DynamoDB cross region copy once a day, create a "Last updated" attribute in your DynamoDB table that would represent the timestamp of the last update and use it as a filter. Answer - A Option A is CORRECT because DynamoDB data can be copied across regions with AWS Data Pipeline, and the "Last updated" attribute lets the scheduled copy filter down to only the items modified since the last run, while the pipeline controls the DynamoDB throughput used for the synchronization. Option B is incorrect because (a) there is no schedule control with a custom EMR script, (b) it is a significant change to the current architecture, and (c) a full SCAN operation is expensive. Option C is incorrect because (a) involving S3 is unnecessary, as the data can be copied directly to the table in the other region via AWS Data Pipeline, and (b) it is a bigger change to the existing architecture than necessary. Option D is incorrect because (a) involving SQS is unnecessary for the same reason, and (b) it requires changes to the existing web application and is not an automated, scheduled approach. For more information on this topic, please visit the link below: https://aws.amazon.com/blogs/aws/copy-dynamodb-data-between-regions-using-the-aws-data-pipeline/
A legacy application is being migrated to AWS. It works on the TCP protocol. There is a requirement to ensure scalability of the application and also ensure that records of the client IP using the application are recorded. Which of the below-mentioned steps would you implement to fulfill the above requirement? A. Use an ELB with a TCP Listener and Proxy Protocol enabled to distribute load on two or more application servers in different AZs. B. Use an ELB with a TCP Listener and Cross-Zone Load Balancing enabled, two application servers in different AZs. C. Use Route 53 with Latency Based Routing enabled to distribute load on two or more application servers in different AZs. D. Use Route 53 Alias Resource Record to distribute load on two application servers in different AZs.
A. Use an ELB with a TCP Listener and Proxy Protocol enabled to distribute load on two or more application servers in different AZs. Answer - A AWS ELB has support for Proxy Protocol. It simply prepends a human-readable header containing the client's connection information to the TCP data sent to your server. As per the AWS documentation, the Proxy Protocol header helps you identify the IP address of a client when you have a load balancer that uses TCP for back-end connections. Because load balancers intercept traffic between clients and your instances, the access logs from your instance contain the IP address of the load balancer instead of the originating client. You can parse the first line of the request to retrieve your client's IP address and the port number. Option A is CORRECT because it enables Proxy Protocol and uses an ELB with a TCP listener. Option B is incorrect because, although cross-zone load balancing spreads traffic across AZs, it does not surface the IP addresses of the clients. Option C is incorrect because Route 53 latency based routing does not give the IP address of the clients. Option D is incorrect because a Route 53 Alias record does not give the IP address of the clients either. For more information on ELB support for Proxy Protocol, please refer to the links given below: https://aws.amazon.com/blogs/aws/elastic-load-balancing-adds-support-for-proxy-protocol/ https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/enable-proxy-protocol.html
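For illustration, enabling Proxy Protocol on a Classic Load Balancer with boto3 might look like the sketch below (the load balancer name and back-end port are placeholders):

```python
import boto3

elb = boto3.client("elb")  # Classic Load Balancer API

LB_NAME = "legacy-tcp-lb"   # placeholder load balancer name
BACKEND_PORT = 8080         # placeholder back-end instance port

# Create a ProxyProtocol policy so the client's original IP address is
# prepended as a header to each TCP connection sent to the back ends.
elb.create_load_balancer_policy(
    LoadBalancerName=LB_NAME,
    PolicyName="EnableProxyProtocol",
    PolicyTypeName="ProxyProtocolPolicyType",
    PolicyAttributes=[{"AttributeName": "ProxyProtocol", "AttributeValue": "true"}],
)

# Apply the policy to the back-end port the application servers listen on.
elb.set_load_balancer_policies_for_backend_server(
    LoadBalancerName=LB_NAME,
    InstancePort=BACKEND_PORT,
    PolicyNames=["EnableProxyProtocol"],
)
```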
A custom script needs to be passed to a new Amazon Linux instances created in your Auto Scaling group. Which feature allows you to accomplish this? A. User data B. EC2Config service C. IAM roles D. AWS Config
A. User data Answer - A When you configure an instance during creation, you can add custom scripts to the user data section. In the console this is the Advanced Details section of the launch wizard; for an Auto Scaling group, the same user data is set on the launch configuration, so every instance the group launches runs the script at boot. For example, a user data script can install a package such as Perl during instance creation, as sketched below. For more information on user data please refer to the URL: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html
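A minimal sketch of passing such a script at launch with boto3 (the AMI ID and instance type are placeholders; the same script could equally be set on an Auto Scaling launch configuration):

```python
import boto3

ec2 = boto3.client("ec2")

# A simple shell script passed as user data; here it installs Perl at boot.
user_data = """#!/bin/bash
yum update -y
yum install -y perl
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,   # boto3 base64-encodes this for you
)
```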
Your supervisor is upset about the fact that SNS topics that he subscribed to are now cluttering up his email inbox. How can he stop receiving the email from SNS without disrupting other users' ability to receive the email from SNS? Choose 2 options from the below: A. You can delete the subscription from the SNS topic responsible for the emails B. You can delete the endpoint from the SNS subscription responsible for the emails C. You can delete the SNS topic responsible for the emails D. He can use the unsubscribe information provided in the emails
A. You can delete the subscription from the SNS topic responsible for the emails D. He can use the unsubscribe information provided in the emails Answer - A and D Every notification email includes an unsubscribe link, and a subscription can also be deleted from the AWS console or API. Option A is CORRECT because deleting the supervisor's subscription from the SNS topic ensures that he no longer receives the notifications (it simply unsubscribes him) while other subscribers are unaffected. Option B is incorrect because you cannot delete the endpoint from an SNS subscription. Option C is incorrect because if you delete the topic, then none of the subscribers will get any notifications. Option D is CORRECT because the notification emails include an unsubscribe option which the user can use to stop receiving them. For more information on SNS subscriptions please visit the below link: http://docs.aws.amazon.com/sns/latest/api/API_Subscribe.html
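As a sketch of option A using boto3 (the topic ARN and email address are placeholders), you would locate the supervisor's subscription on the topic and delete only that one subscription:

```python
import boto3

sns = boto3.client("sns")

TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:ops-alerts"  # placeholder

# Find the supervisor's email subscription and remove just that subscription;
# every other subscriber keeps receiving notifications from the topic.
subs = sns.list_subscriptions_by_topic(TopicArn=TOPIC_ARN)["Subscriptions"]
for sub in subs:
    if sub["Protocol"] == "email" and sub["Endpoint"] == "supervisor@example.com":
        sns.unsubscribe(SubscriptionArn=sub["SubscriptionArn"])
```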
You have an Auto Scaling group associated with an Elastic Load Balancer (ELB). You have noticed that instances launched via the Auto Scaling group are being marked unhealthy due to an ELB health check but these unhealthy instances are not being terminated. What do you need to do to ensure that the instances marked unhealthy by the ELB will be terminated and replaced? A. Change the thresholds set on the Auto Scaling group health check B. Add an Elastic Load Balancing health check to your Auto Scaling group C. Increase the value for the Health check interval set on the Elastic Load Balancer D. Change the health check set on the Elastic Load Balancer to use TCP rather than HTTP checks
B. Add an Elastic Load Balancing health check to your Auto Scaling group Answer - B To discover the availability of your EC2 instances, an ELB periodically sends pings, attempts connections, or sends requests to test the EC2 instances. These tests are called health checks. The status of the instances that are healthy at the time of the health check is InService; the status of any instances that are unhealthy at the time of the health check is OutOfService. By default, an Auto Scaling group (ASG) only uses EC2 status checks to decide instance health; to have it act on the ELB's results, you add the ELB health check to the group, after which the group terminates and replaces instances that the ELB reports as unhealthy. Option A is incorrect because changing the thresholds will not make the ASG aware of the unhealthy instances. Option B is CORRECT because adding the ELB health check to the ASG makes the group use the ELB's health check results, so it terminates and replaces the instances the ELB marks as unhealthy. Option C is incorrect because increasing the interval will still not communicate the information about the unhealthy instances to the ASG. Option D is incorrect because this setting will not communicate the information about the unhealthy instances to the ASG either. For more information on using an ELB with an Auto Scaling group, please visit the below URL: https://docs.aws.amazon.com/autoscaling/ec2/userguide/autoscaling-load-balancer.html
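A minimal boto3 sketch of option B, assuming a hypothetical group name and a 300-second grace period:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Switch the group's health check from the default EC2 status checks to the
# ELB health check, so instances the ELB marks unhealthy are replaced.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",     # placeholder group name
    HealthCheckType="ELB",
    HealthCheckGracePeriod=300,         # seconds to wait after instance launch
)
```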
A company needs to monitor the read and write IOPs metrics for their AWS MySQL RDS instance and send real-time alerts to their operations team. Which AWS services can accomplish this? Choose 2 options from the below: A. Amazon Simple Email Service B. Amazon CloudWatch C. Amazon Simple Queue Service D. Amazon Route 53 E. Amazon Simple Notification Service
B. Amazon CloudWatch E. Amazon Simple Notification Service Answer - B and E Option A is incorrect because SNS is a better choice than SES for sending real-time notifications to an operations team. Option B is CORRECT because CloudWatch is used for monitoring the metrics of AWS resources, including the RDS ReadIOPS and WriteIOPS metrics. Option C is incorrect because SQS can neither monitor any metrics nor send out real-time notifications. Option D is incorrect because Route 53 cannot monitor any metrics. Option E is CORRECT because SNS is used for sending real-time notifications based on the alarm thresholds set in CloudWatch. For more information on CloudWatch metrics, please refer to the link: http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CW_Support_For_AWS.html
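For illustration, a CloudWatch alarm on the RDS WriteIOPS metric that notifies an SNS topic could be created roughly as follows (the DB instance identifier, threshold, and topic ARN are placeholders):

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm fires when average write IOPS stay above the threshold for 3 minutes
# and notifies the operations team's SNS topic.
cloudwatch.put_metric_alarm(
    AlarmName="rds-high-write-iops",
    Namespace="AWS/RDS",
    MetricName="WriteIOPS",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "prod-mysql"}],
    Statistic="Average",
    Period=60,
    EvaluationPeriods=3,
    Threshold=1000.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:ops-team-alerts"],
)
```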
Your company plans to host a large donation website on Amazon Web Services (AWS). You anticipate a large and undetermined amount of traffic that will create many database writes. To be certain that you do not drop any writes to a database hosted on AWS, which service should you use? A. Amazon RDS with provisioned IOPS up to the anticipated peak write throughput. B. Amazon Simple Queue Service (SQS) for capturing the writes and draining the queue to write to the database. C. Amazon ElastiCache to store the writes until the writes are committed to the database. D. Amazon DynamoDB with provisioned write throughput up to the anticipated peak write throughput.
B. Amazon Simple Queue Service (SQS) for capturing the writes and draining the queue to write to the database. Answer - B Options A and D are incorrect because the peak amount of traffic is undetermined, so you cannot provision the IOPS or write throughput for it beforehand. Option B is CORRECT because SQS is an AWS managed and highly scalable service that can hold the database write requests in a queue, ensuring that no writes are dropped while a worker drains the queue into the database at a sustainable rate. Option C is incorrect because ElastiCache is used for read-heavy applications to reduce the load on the database, not to buffer writes. For more information on SQS, please read the related questions in the FAQs: https://aws.amazon.com/sqs/faqs/
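A rough sketch of the SQS buffering pattern from option B, with a placeholder queue URL and message shape:

```python
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111122223333/donation-writes"  # placeholder

# Web tier: enqueue each write instead of hitting the database directly.
sqs.send_message(
    QueueUrl=QUEUE_URL,
    MessageBody=json.dumps({"donor": "Jane", "amount": 25}),
)

# Worker tier: drain the queue at a rate the database can sustain.
response = sqs.receive_message(
    QueueUrl=QUEUE_URL,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20,
)
for msg in response.get("Messages", []):
    record = json.loads(msg["Body"])
    # ... write `record` to the database here ...
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```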
You are designing the network infrastructure for an application server in Amazon VPC. Users will access all the application instances from the Internet as well as from an on-premises network. The on-premises network is connected to your VPC over an AWS Direct Connect link. How would you design routing to meet the above requirements? A. Configure a single routing table with a default route via the Internet gateway. Propagate a default route via BGP on the AWS Direct Connect customer router. Associate the routing table with all VPC subnets. B. Configure a single routing table with a default route via the Internet gateway. Propagate specific routes for the on-premises networks via BGP on the AWS Direct Connect customer router. Associate the routing table with all VPC subnets. C. Configure a single routing table with two default routes: one to the Internet via an Internet gateway the other to the on-premises network via the VPN gateway. Use this routing table across all subnets in your VPC. D. Configure two routing tables: one that has a default route via the Internet gateway, and another that has a default route via the VPN gateway. Associate both routing tables with each VPC subnet.
B. Configure a single routing table with a default route via the Internet gateway. Propagate specific routes for the on-premises networks via BGP on the AWS Direct Connect customer router. Associate the routing table with all VPC subnets. Answer - B Options A and C are incorrect because a route table cannot contain two default routes; a propagated default route over Direct Connect or a VPN gateway would conflict with the static default route to the Internet gateway. Option B is CORRECT because with this setup the specific routes propagated via BGP are used for on-premises traffic, while the default route via the Internet gateway handles Internet traffic. Option D is incorrect because a subnet can be associated with only one route table at a time. More information on Route Tables and VPN Route Priority: https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Route_Tables.html#route-tables-priority https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_VPN.html#vpn-route-priority
You are managing a legacy application inside VPC with hard-coded IP addresses in its configuration. Which mechanisms will allow the application to failover to new instances without the need for reconfiguration? Choose 2 options from the below: A. Create an ELB to reroute traffic to the failover instance B. Create a secondary ENI that can be moved to the failover instance C. Use Route53 health checks to reroute the traffic to the failover instance D. Assign a secondary private IP address to the primary ENI of the failover instance
B. Create a secondary ENI that can be moved to the failover instance D. Assign a secondary private IP address to the primary ENI of the failover instance Answer - B and D Option A is incorrect because an ELB cannot redirect traffic destined for a hard-coded IP address to a failover instance. Option B is CORRECT because the attributes of a network interface follow it as it is detached from one instance and reattached to another; when you move a network interface from one instance to another, network traffic to its IP address is redirected to the new instance. Option C is incorrect because Route 53 health checks cannot redirect traffic when the application addresses the instance by a hard-coded IP address rather than a DNS name. Option D is CORRECT because the hard-coded address can be assigned as a secondary private IP address on the primary ENI of the failover instance. Best practices for configuring network interfaces: You can attach a network interface to an instance when it's running (hot attach), when it's stopped (warm attach), or when the instance is being launched (cold attach). You can detach secondary (ethN) network interfaces when the instance is running or stopped; however, you can't detach the primary (eth0) interface. You can attach a network interface in one subnet to an instance in another subnet in the same VPC; however, both the network interface and the instance must reside in the same Availability Zone. When launching an instance from the CLI or API, you can specify the network interfaces to attach to the instance for both the primary (eth0) and additional network interfaces. Launching an Amazon Linux or Windows Server instance with multiple network interfaces automatically configures interfaces, private IPv4 addresses, and route tables on the operating system of the instance. A warm or hot attach of an additional network interface may require you to manually bring up the second interface, configure the private IPv4 address, and modify the route table accordingly; instances running Amazon Linux or Windows Server automatically recognize the warm or hot attach and configure themselves. Attaching another network interface to an instance (for example, a NIC teaming configuration) cannot be used as a method to increase or double the network bandwidth to or from the dual-homed instance. If you attach two or more network interfaces from the same subnet to an instance, you may encounter networking issues such as asymmetric routing; if possible, use a secondary private IPv4 address on the primary network interface instead. For more information, see Assigning a Secondary Private IPv4 Address. For more information on network interfaces, please visit the below URL: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html
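As an illustration of option B, moving a secondary ENI to a standby instance with boto3 might look like this sketch (the ENI and instance IDs are placeholders, and the waiter simply ensures the detachment has completed before re-attaching):

```python
import boto3

ec2 = boto3.client("ec2")

ENI_ID = "eni-0123456789abcdef0"            # secondary ENI with the hard-coded IP
FAILOVER_INSTANCE = "i-0fedcba9876543210"   # placeholder failover instance ID

# Detach the secondary ENI from the failed instance...
eni = ec2.describe_network_interfaces(NetworkInterfaceIds=[ENI_ID])["NetworkInterfaces"][0]
ec2.detach_network_interface(AttachmentId=eni["Attachment"]["AttachmentId"], Force=True)
ec2.get_waiter("network_interface_available").wait(NetworkInterfaceIds=[ENI_ID])

# ...and attach it to the standby instance; traffic to the ENI's IP address
# now reaches the failover instance without reconfiguring the application.
ec2.attach_network_interface(
    NetworkInterfaceId=ENI_ID,
    InstanceId=FAILOVER_INSTANCE,
    DeviceIndex=1,
)
```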
A user is trying to save some cost on the AWS services. Which of the below-mentioned options will not help him to save cost? A. Delete the unutilized EBS volumes once the instance is terminated. B. Delete the AutoScaling launch configuration after the instances are terminated. C. Release the elastic IP if not required once the instance is terminated. D. Delete the AWS ELB after all the instances behind it are terminated.
B. Delete the AutoScaling launch configuration after the instances are terminated. Answer - B Option A is incorrect because EBS volumes are billed even when unattached, so deleting unutilized volumes will save some cost. Option B is CORRECT because an unused Auto Scaling launch configuration does not cost anything, so deleting it saves nothing. Option C is incorrect because an Elastic IP address that is not associated with a running instance incurs charges until it is released. Option D is incorrect because an ELB incurs charges even when no instances are registered behind it, so deleting it saves cost. For more information on AWS pricing, please visit the link: https://aws.amazon.com/pricing/services/
Server-side encryption is about data encryption at rest. That is, Amazon S3 encrypts your data at the object level as it writes it to disk in its data centers and decrypts it for you when you go to access it. There are a few different options depending on how you choose to manage the encryption keys. One of the options is called 'Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)'. Which of the following best describes how this encryption method works? Choose the correct option from the below: A. There are separate permissions for the use of an envelope key (that is, a key that protects your data's encryption key) that provides added protection against unauthorized access of your objects in Amazon S3. B. Each object is encrypted with a unique key employing strong encryption. As an additional safeguard, it encrypts the key itself with a master key that it regularly rotates. C. You manage the encryption keys and Amazon S3 manages the encryption, as it writes to disk, and decryption, when you access your objects. D. A randomly generated data encryption key is returned from Amazon S3, which is used by the client to encrypt the object data.
B. Each object is encrypted with a unique key employing strong encryption. As an additional safeguard, it encrypts the key itself with a master key that it regularly rotates. Answer - B Server-side encryption with Amazon S3-managed encryption keys (SSE-S3) employs strong multi-factor encryption. Amazon S3 encrypts each object with a unique key. As an additional safeguard, it encrypts the key itself with a master key that it regularly rotates. Amazon S3 server-side encryption uses one of the strongest block ciphers available, 256-bit Advanced Encryption Standard (AES-256), to encrypt your data. Option A is incorrect because there are no separate permissions to the key that protects the data key. Option B is CORRECT because, as mentioned above, each object is encrypted with a strong unique key and that key itself is encrypted by a master key. Option C is incorrect because the keys are managed by AWS, not by you. Option D is incorrect because there is no randomly generated key returned to the client, and the client does not perform the encryption. For more information on S3 encryption, please visit the links: https://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html
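As an illustration of SSE-S3 in practice, the boto3 sketch below (bucket and key names are placeholders) asks S3 to encrypt an object at rest with S3-managed keys by setting the ServerSideEncryption parameter to AES256:

import boto3

s3 = boto3.client("s3")

# Upload an object and ask S3 to encrypt it at rest with S3-managed keys (SSE-S3).
s3.put_object(
    Bucket="example-bucket",          # placeholder bucket name
    Key="reports/2018/summary.csv",   # placeholder object key
    Body=b"column1,column2\n1,2\n",
    ServerSideEncryption="AES256",    # SSE-S3: AES-256 with keys managed by Amazon S3
)

# Confirm the encryption that was applied; decryption is transparent on GET.
head = s3.head_object(Bucket="example-bucket", Key="reports/2018/summary.csv")
print(head["ServerSideEncryption"])   # expected: 'AES256'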
You have multiple Amazon EC2 instances running in a cluster across multiple Availability Zones within the same region. What combination of the following should be used to ensure the highest network performance (packets per second), lowest latency, and lowest jitter? Choose 3 options from the below: A. Amazon EC2 placement groups B. Enhanced networking C. Amazon PV AMI D. Amazon HVM AMI E. Amazon Linux F. Amazon VPC Endpoints
B. Enhanced networking D. Amazon HVM AMI F. Amazon VPC Endpoints Answer - B, D, and F Option A is incorrect because placement groups do not span multiple Availability Zones. Option B is CORRECT because Enhanced Networking uses single root I/O virtualization (SR-IOV) to provide high-performance networking capabilities on supported instance types. Option C is incorrect because it is recommended to use HVM AMIs for better performance compared to PV AMIs. Option D is CORRECT because HVM AMIs can take advantage of Enhanced Networking, whereas PV AMIs cannot. Option E is incorrect because using Amazon Linux does not necessarily improve performance. Option F is CORRECT because VPC endpoints allow communication between instances in the VPC and AWS services without imposing availability risks or bandwidth constraints on the network traffic. For more information on Enhanced Networking, please visit the URL: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html Linux Amazon Machine Images use one of two types of virtualization: paravirtual (PV) or hardware virtual machine (HVM). The main difference between PV and HVM AMIs is the way in which they boot and whether they can take advantage of special hardware extensions (CPU, network, and storage) for better performance. For the best performance, we recommend that you use current generation instance types and HVM AMIs when you launch your instances. For more information on virtualization types, please visit the URL: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/virtualization_types.html
A company has configured and peered two VPCs: VPC-1 and VPC-2. The VPC-1 contains only private subnets, and VPC-2 contains only public subnets. The company uses a single AWS Direct Connect connection and private virtual interface to connect their on-premises network with VPC-1. Which two methods increase the fault tolerance of the connection to VPC-1? Choose 2 answers: A. Establish a hardware VPN over the internet between VPC-2 and the on-premises network. B. Establish a hardware VPN over the internet between VPC-1 and the on-premises network. C. Establish a new AWS Direct Connect connection and private virtual interface in the same region as VPC-2. D. Establish a new AWS Direct Connect connection and private virtual interface in a different AWS region than VPC-1. E. Establish a new AWS Direct Connect connection and private virtual interface in the same AWS region as VPC-1.
B. Establish a hardware VPN over the internet between VPC-1 and the on-premises network. E. Establish a new AWS Direct Connect connection and private virtual interface in the same AWS region as VPC-1. Answer - B & E Option A & C are incorrect because peered VPC does not support Edge-to-Edge Routing, so connecting to VPC2 will not work.Option B is CORRECT because hardware VPN can be created to connect the VPC-1 with the on-premises network.Option D is incorrect because AWS Direct Connect is a regional service and you cannot reach VPC1 if the direct connect is in a different region.Option E is CORRECT because AWS Direct Connect is a regional service and will work if it is in the same region as that of the VPC-1. For more information on VPC peering, please see the links below:https://docs.aws.amazon.com/AmazonVPC/latest/PeeringGuide/invalid-peering-configurations.html
You have created an Elastic Load Balancer with Duration-Based sticky sessions enabled in front of your six EC2 web application instances in US-West-2. For High Availability, there are three web application instances in Availability Zone 1 and three web application instances in Availability Zone 2. To load test, you set up a software-based load tester in Availability Zone 2 to send traffic to the Elastic Load Balancer, as well as letting several hundred users browse to the ELB's hostname. After a while, you notice that the users' sessions are spread evenly across the EC2 instances in both AZ's, but the software-based load tester's traffic is hitting only the instances in Availability Zone 2. What steps can you take to resolve this problem? Choose 2 correct options from the below: A. Create a software-based load tester in US-East-1 and test from there. B. Force the software-based load tester to re-resolve DNS before every request. C. Use a third party load-testing service to send requests from globally distributed clients. D. Switch to application-controlled sticky sessions.
B. Force the software-based load tester to re-resolve DNS before every request. C. Use a third party load-testing service to send requests from globally distributed clients. Answer - B and C When you create an elastic load balancer, a default level of capacity is allocated and configured. As Elastic Load Balancing sees changes in the traffic profile, it will scale up or down. The time required for Elastic Load Balancing to scale can range from 1 to 7 minutes, depending on the changes in the traffic profile. When Elastic Load Balancing scales, it updates the DNS record with the new list of IP addresses. To ensure that clients are taking advantage of the increased capacity, Elastic Load Balancing uses a TTL setting on the DNS record of 60 seconds. It is critical that you factor this changing DNS record into your tests. If you do not ensure that DNS is re-resolved or use multiple test clients to simulate increased load, the test may continue to hit a single IP address when Elastic Load Balancing has actually allocated many more IP addresses. Because your end users will not all be resolving to that single IP address, your test will not be a realistic sampling of real-world behavior. Option A is incorrect because a load tester running in US-East-1 would still resolve the ELB's DNS name once and cache it, so its traffic would keep hitting the same set of instances; moving the tester to another region does not address the DNS caching problem. Option B is CORRECT because if you do not ensure that DNS is re-resolved, the test may continue to hit a single IP address. Option C is CORRECT because if the requests come from globally distributed clients, the DNS will not be resolved to a single IP address and the traffic would be distributed evenly across multiple instances. Option D is incorrect because with sticky sessions the traffic will be routed to the same back-end instances as the users continue to access your application; the load will not be evenly distributed across the AZs. Please refer to the below article for more information: http://aws.amazon.com/articles/1636185810492479
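To illustrate what "re-resolving DNS before every request" means for a home-grown load tester, here is a small Python sketch (the ELB hostname is a placeholder) that performs a fresh lookup for each request instead of reusing one cached IP address:

import socket
import urllib.request

ELB_HOSTNAME = "my-app-123456789.us-west-2.elb.amazonaws.com"  # placeholder ELB DNS name

def fetch_once():
    # Fresh DNS lookup: ELB scaling events change the set of IPs behind this name.
    ip = socket.gethostbyname(ELB_HOSTNAME)
    # Connect to the freshly resolved IP, but keep the Host header so the request
    # still reaches the right listener/virtual host.
    req = urllib.request.Request(f"http://{ip}/", headers={"Host": ELB_HOSTNAME})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return ip, resp.status

for _ in range(5):
    ip, status = fetch_once()
    print(f"hit {ip} -> HTTP {status}")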
There are currently multiple applications hosted in a VPC. During monitoring, it has been noticed that multiple port scans are coming in from a specific IP Address block. The internal security team has requested that all offending IP Addresses be denied for the next 24 hours. Which of the following is the best method to quickly and temporarily deny access from the specified IP Addresses? A. Create an AD policy to modify the Windows Firewall settings on all hosts in the VPC to deny access from the IP Address block. B. Modify the Network ACLs associated with all public subnets in the VPC to deny access from the IP Address block. C. Add a rule to all of the VPC Security Groups to deny access from the IP Address block. D. Modify the Windows Firewall settings on all AMI's that your organization uses in that VPC to deny access from the IP address block.
B. Modify the Network ACLs associated with all public subnets in the VPC to deny access from the IP Address block. Answer - B A network access control list (ACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets. Option A and D are incorrect because (a)it will only work for windows-based instances, and (b)better approach is to block the traffic at the subnet layer via NACL rather than instance layer (windows firewall).Option B is CORRECT because the best way to allow or deny IP address-based access to the resources in the VPC is to configure rules in the Network access control list (NACL) which are applied at the subnet level.Option C is incorrect because (a)you cannot explicitly deny access to particular IP addresses via security group, and (b)better approach is to block the traffic at the subnet layer via NACL rather than instance layer (security group). For more information on network ACL's please refer to the below link: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_ACLs.html
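One quick way to apply the temporary deny is a low-numbered DENY entry on the public subnets' network ACL; a boto3 sketch (the NACL ID and the offending CIDR block are placeholders):

import boto3

ec2 = boto3.client("ec2")

OFFENDING_CIDR = "198.51.100.0/24"   # placeholder: IP block reported by the security team
NACL_ID = "acl-0123456789abcdef0"    # placeholder: NACL associated with the public subnets

# NACL rules are evaluated in ascending rule-number order, so a low number
# ensures this DENY matches before the usual ALLOW rules.
ec2.create_network_acl_entry(
    NetworkAclId=NACL_ID,
    RuleNumber=50,
    Protocol="-1",            # all protocols
    RuleAction="deny",
    Egress=False,             # inbound rule
    CidrBlock=OFFENDING_CIDR,
)
# After 24 hours, the rule can be removed again with delete_network_acl_entry.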
Which of the following are the ways to minimize the attack surface area as a DDOS minimization strategy in AWS? Choose 3 options from the below: A. Configure services such as Elastic Load Balancing and Auto Scaling to automatically scale. B. Reduce the number of necessary Internet entry points. C. Separate end user traffic from management traffic. D. Eliminate non-critical Internet entry points.
B. Reduce the number of necessary Internet entry points. C. Separate end user traffic from management traffic. D. Eliminate non-critical Internet entry points. Answer - B, C, and D Some important consideration when architecting on AWS is to limit the opportunities that an attacker may have to target your application. For example, if you do not expect an end user to directly interact with certain resources you will want to make sure that those resources are not accessible from the Internet. Similarly, if you do not expect end-users or external applications to communicate with your application on certain ports or protocols, you will want to make sure that traffic is not accepted. This concept is known as attack surface reduction. Option A is incorrect because it is used for mitigating the DDoS attack where the system scales to absorb the application layer traffic in order to keep it responsive. Option B, C and D are all CORRECT as they all are used for reducing the DDoS attack surface. For more information on DDoS attacks in AWS, please visit the below URL https://d0.awsstatic.com/whitepapers/DDoS_White_Paper_June2015.pdf
The Marketing Director in your company asked you to create a mobile app that lets users post sightings of good deeds known as random acts of kindness in 80-character summaries. You decided to write the application in JavaScript so that it would run on the broadest range of phones, browsers, and tablets. Your application should provide access to Amazon DynamoDB to store the good deed summaries. Initial testing of a prototype shows that there aren't large spikes in usage. Which option provides the most cost-effective and scalable architecture for this application? A. Provide the JavaScript client with temporary credentials from the Security Token Service using a Token Vending Machine (TVM) on an EC2 instance to provide signed credentials mapped to an Amazon Identity and Access Management (IAM) user allowing DynamoDB puts and S3 gets. You serve your mobile application out of an S3 bucket enabled as a web site. Your client updates DynamoDB. B. Register the application with a Web Identity Provider like Amazon, Google, or Facebook, create an IAM role for that provider, and set up permissions for the IAM role to allow S3 gets and DynamoDB puts. You serve your mobile application out of an S3 bucket enabled as a web site. Your client updates DynamoDB. C. Provide the JavaScript client with temporary credentials from the Security Token Service using a Token Vending Machine (TVM) to provide signed credentials mapped to an IAM user allowing DynamoDB puts. You serve your mobile application out of Apache EC2 instances that are load-balanced and autoscaled. Your EC2 instances are configured with an IAM role that allows DynamoDB puts. Your server updates DynamoDB. D. Register the JavaScript application with a Web Identity Provider like Amazon, Google, or Facebook, create an IAM role for that provider, and set up permissions for the IAM role to allow DynamoDB puts. You serve your mobile application out of Apache EC2 instances that are load-balanced and autoscaled. Your EC2 instances are configured with an IAM role that allows DynamoDB puts. Your server updates DynamoDB.
B. Register the application with a Web Identity Provider like Amazon, Google, or Facebook, create an IAM role for that provider, and set up permissions for the IAM role to allow S3 gets and DynamoDB puts. You serve your mobile application out of an S3 bucket enabled as a web site. Your client updates DynamoDB. Answer - B This scenario asks to design a cost-effective and scalable solution where a multi-platform application needs to communicate with DynamoDB. For such scenarios, federated access to the application is the most likely solution. Option A is incorrect because the Token Vending Machine (STS Service) is implemented on a single EC2 instance which is a single point of failure. This is not a scalable solution either as the instance can become the performance bottleneck.Option B is CORRECT because, (i) it authenticates the application via federated identity provider such as Amazon, Google, Facebook etc, (ii) it sets up the proper permission for DynamoDB access, and (iii) S3 website which supports Javascript - is a highly scalable and cost effective solution.Option C is incorrect because deploying EC2 instances in auto-scaled environment is not as cost-effective solution as the S3 website, even though it is scalable.Option D is incorrect because (i) it does not mention any security token service that generates temporary credentials, and (ii) deploying EC2 instances in auto-scaled environment is not as cost-effective solution as the S3 website, even though it is scalable.
You have a video transcoding application running on Amazon EC2. Each instance polls a queue to find out which video should be transcoded and then runs a transcoding process. If this process is interrupted, the videos will be transcoded by another instance based on the queuing system. You have a large backlog of videos which need to be transcoded and would like to reduce this backlog by adding more instances. You will need these instances only until the backlog is reduced. Which type of Amazon EC2 instances should you use to reduce the backlog in the most cost-efficient way? A. Reserved instances B. Spot instances C. Dedicated instances D. On-demand instances
B. Spot instances Answer - B Since this is like a batch processing job, the best type of instance to use is a Spot instance. Since these jobs don't last for the entire duration of the year, Spot capacity can be bid upon and the instances allocated and deallocated as needed. Option A and C are incorrect because the application needs the instances only until the backlog is reduced. With reserved/dedicated instances, there is a possibility that the instances might sit idle after the backlog is cleared, so this is a costly solution. Option B is CORRECT because (i) Spot instances are less expensive than reserved instances, and (ii) interruption in the transcoding process is affordable since the videos will be transcoded by another instance based on the queuing system. Option D is incorrect because (i) on-demand instances are the most expensive, (ii) interruption in the transcoding process is affordable, and (iii) on-demand instances would have been suited if there was no alternate way of transcoding the videos and interruption was not affordable. For more information on Spot Instances, please visit the URL - https://aws.amazon.com/ec2/spot/
You currently have a placement group of instances. When you try to add new instances to the group, you receive a 'capacity error'. Which of the following actions will most likely fix this problem? Choose the correct option from the below: A. Make a new Placement Group and launch the new instances in the new group. Make sure the Placement Groups are in the same subnet. B. Stop and restart the instances in the Placement Group and then try the launch again. C. Request a capacity increase from AWS as you are initially limited to 10 instances per Placement Group. D. Make sure all the instances are the same size and then try the launch again.
B. Stop and restart the instances in the Placement Group and then try the launch again. Answer - B Option A is incorrect because to benefit from the enhanced networking, all the instances should be in the same Placement Group. Launching the new ones in a new Placement Group will not work in this case.Option B is CORRECT because the most likely reason for the "Capacity Error" is that the underlying hardware may not have the capacity to launch any additional instances on it. If the instances are stopped and restarted, AWS may move the instances to a hardware that has capacity for all the requested instances.Option C is incorrect because there is no such limit on the number of instances in a Placement Group (however, you can not exceed your EC2 instance limit allocated to your account per region).Option D is incorrect because the capacity error is not related to the instance size and just ensuring that the instances are of same size will not resolve the capacity error. More information on Cluster Placement Group If you receive a capacity error when launching an instance in a placement group that already has running instances, stop and start all of the instances in the placement group, and try the launch again. Restarting the instances may migrate them to hardware that has the capacity for all the requested instances. For more information on this, please refer to the below URL http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
An organization is planning to setup a management network on the AWS VPC. The organization is trying to secure the web server on a single VPC instance such that it allows the internet traffic as well as the back-end management traffic. The organization wants to make sure that the back end management network interface can receive the SSH traffic only from a selected IP range, while the internet facing web server will have an IP address which can receive traffic from all the internet IPs. How can the organization achieve this by running the web server on a single instance? A. It is not possible to have 2 IP addresses for a single instance B. The organization should create 2 network interfaces, one for the internet traffic and the other for the backend traffic C. The organization should create 2 EC2 instances as this is not possible with one EC2 instance D. This is not possible
B. The organization should create 2 network interfaces, one for the internet traffic and the other for the backend traffic Answer - B An Elastic Network Interface (ENI) is a virtual network interface that you can attach to an instance in a VPC. Network interfaces are available only for instances running in a VPC. A network interface can include the following attributes:
A primary private IPv4 address
One or more secondary private IPv4 addresses
One Elastic IP address (IPv4) per private IPv4 address
One public IPv4 address
One or more IPv6 addresses
One or more security groups
A MAC address
A source/destination check flag
A description
For more information on ENI, please refer to the below link: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html
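A rough boto3 sketch of the two-interface setup described above (the subnet ID, security group ID, and instance ID are placeholders); the management ENI sits in a subnet whose security group allows SSH only from the selected IP range:

import boto3

ec2 = boto3.client("ec2")

# Management ENI: its security group should allow SSH (22) only from the approved range.
mgmt_eni = ec2.create_network_interface(
    SubnetId="subnet-0aaa1111bbb22222c",        # placeholder management subnet
    Groups=["sg-0123456789abcdef0"],            # placeholder SG: SSH from selected IPs only
    Description="backend management interface",
)["NetworkInterface"]

# Attach it as a secondary interface (device index 1) to the single web server instance.
ec2.attach_network_interface(
    NetworkInterfaceId=mgmt_eni["NetworkInterfaceId"],
    InstanceId="i-0123456789abcdef0",           # placeholder instance ID
    DeviceIndex=1,
)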
A media company produces new video files on-premises every day with a total size of around 100GB after compression. All files have a size of 1 -2 GB and need to be uploaded to Amazon S3 every night in a fixed time window between 3 AM and 5 AM. Current upload takes almost 3 hours, although less than half of the available bandwidth is used. What step(s) would ensure that the file uploads are able to complete in the allotted time window? A. Increase your network bandwidth to provide faster throughput to S3 B. Upload the files in parallel to S3 C. Pack all files into a single archive, upload it to S3, and then extract the files in AWS D. Use AWS Import/Export to transfer the video files
B. Upload the files in parallel to S3 Answer - B When uploading large videos, it's always better to make use of AWS multipart file upload, especially when the bandwidth is not fully utilized. Option A is incorrect because the existing bandwidth itself is not fully utilized. Increasing the bandwidth is not going to help; in fact, it will add to the cost. Option B is CORRECT because parallel upload of the files via AWS multipart upload will fully utilize the available bandwidth and increase the throughput. It also has additional benefits as mentioned below in the "More Information" section. Option C is incorrect because there is a restriction on the size of an upload in a single PUT operation. You cannot upload a file of size more than 5GB in a single upload, so this option is not going to help at all; you need to use multipart upload. Option D is incorrect because this option requires you to put all the files daily on a storage drive and send it to AWS. Since the data has to be uploaded in a certain time frame and there is sufficient bandwidth already available, multipart upload is the best option compared to AWS Import/Export. More information on and benefits of multipart upload on S3:
Improved throughput - you can upload parts in parallel to improve throughput.
Quick recovery from any network issues - smaller part size minimizes the impact of restarting a failed upload due to a network error.
Pause and resume object uploads - you can upload object parts over time. Once you initiate a multipart upload there is no expiry; you must explicitly complete or abort the multipart upload.
Begin an upload before you know the final object size - you can upload an object as you are creating it.
For more information on multipart file upload for S3, please visit the URL - http://docs.aws.amazon.com/AmazonS3/latest/dev/qfacts.html
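For reference, the Python SDK's managed transfer layer performs the parallel multipart upload automatically; a minimal sketch (bucket name, file path, and tuning values are illustrative assumptions):

import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Split files larger than 64 MB into parts and upload up to 10 parts in parallel,
# so the available bandwidth is actually used.
config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,
    multipart_chunksize=64 * 1024 * 1024,
    max_concurrency=10,
)

s3.upload_file(
    Filename="/videos/2018-06-01/episode-01.mp4",  # placeholder local file
    Bucket="example-video-bucket",                  # placeholder bucket
    Key="uploads/episode-01.mp4",
    Config=config,
)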
Your company has a lot of GPU intensive workloads. Also, these workloads are part of a process in which some steps need manual intervention. Which of the below options works out for the above-mentioned requirement? A. Use AWS Data Pipeline to manage the workflow. Use an auto-scaling group of G2 instances in a placement group. B. Use Amazon Simple Workflow (SWF) to manage the workflow. Use an autoscaling group of G2 instances in a placement group. C. Use Amazon Simple Workflow (SWF) to manage the workflow. Use an autoscaling group of C3 instances with SR-IOV (Single Root I/O Virtualization). D. Use AWS data Pipeline to manage the workflow. Use auto-scaling group of C3 with SR-IOV (Single Root I/O virtualization).
B. Use Amazon Simple Workflow (SWF) to manage the workflow. Use an autoscaling group of G2 instances in a placement group. Answer - B Tip: Whenever the scenario in the question mentions high graphical processing servers with low latency networking, always think about using G2 instances. And, when there are tasks involving human intervention, always think about using SWF. Option A is incorrect because AWS Data Pipeline cannot manage a hybrid workflow where some of the tasks involve human actions. Option B is CORRECT because (a) it uses G2 instances, which are specialized for high graphical processing of data with low latency networking, and (b) SWF supports workflows involving human interactions along with AWS services. Option C is incorrect because it uses C3 instances, which are meant for compute-optimized workloads; in this scenario, you should be using G2 instances. Option D is incorrect because (a) AWS Data Pipeline cannot manage a hybrid workflow where some of the tasks involve human actions, and (b) it uses C3 instances, which are meant for compute-optimized workloads; in this scenario, you should be using G2 instances. For more information on instance types, please visit the below URL: https://aws.amazon.com/ec2/instance-types/ Since there is an element of human intervention, SWF can be used for this purpose. For more information on SWF, please visit the below URL: https://aws.amazon.com/swf/
Which of the following are the recommendations from AWS when migrating a legacy application which is hosted on a virtual machine in an on-premise location? Choose 2 options from the below: A. Use a NAT instance to route traffic from the instance in the VPC. B. Use an Elastic IP address on the VPC instance C. Use entries in Amazon Route 53 that allow the Instance to resolve its dependencies' IP addresses on the on-premise location D. Use the VM Import facility provided by aws.
B. Use an Elastic IP address on the VPC instance D. Use the VM Import facility provided by aws. Answers: B and D Option A is incorrect because having NAT instance is not going to help in this scenario. NAT instance is used so that the instances in the private subnet can communicate with the internet. Option B is CORRECT because using an elastic IP address you can mask the failure of an instance or the legacy app in this case by remapping the IP address to another functioning instance in a VPC subnet. Option C is incorrect because Route 53 cannot resolve any dependencies on the IP addresses. Option D is CORRECT because VM Import/Export enables you to easily import VM images from the on-premise location to the VPC in the form of EC2 instances, hence helping the migration of the legacy application.
Why will the following CloudFormation template fail to deploy a stack? Choose the correct answer from the below options:
{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Parameters" : {
    "VPCId" : {
      "Type": "String",
      "Description" : "Enter current VPC Id"
    },
    "SubnetId" : {
      "Type": "String",
      "Description" : "Enter a subnet Id"
    }
  },
  "Outputs" : {
    "InstanceId" : {
      "Value" : { "Ref" : "MyInstance" },
      "Description" : "Instance Id"
    }
  }
}
A. CloudFormation templates do not use a "Parameters" section B. A "Conditions" section is mandatory but is not included C. A "Resources" section is mandatory but is not included D. A template description is mandatory but is not included
C. A "Resources" section is mandatory but is not included Answer - C Option C is CORRECT because, the Resources section is mandatory for the CloudFormation template to work; and it is missing in this template. For more information on CloudFormation templates, please refer to the below URL: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-anatomy.html
Your company is hosting an application on the cloud. Your IT Security department has recently noticed that there seem to be some SQL Injection attacks against the application. Which of the below approaches provides a cost-effective, scalable mitigation to this kind of attack? A. Create a DirectConnect connection so that you have a dedicated connection line. B. Add previously identified host file source IPs as an explicit INBOUND DENY NACL to the web tier subnet. C. Add a WAF tier by creating a new ELB and an AutoScaling group of EC2 Instances running a host-based WAF. They would redirect Route 53 to resolve to the new WAF tier ELB. The WAF tier would pass the traffic to the current web tier. The web tier Security Groups would be updated to only allow traffic from the WAF tier Security Group D. Remove all but TLS 1 & 2 from the web tier ELB and enable Advanced Protocol Filtering. This will enable the ELB itself to perform WAF functionality.
C. Add a WAF tier by creating a new ELB and an AutoScaling group of EC2 Instances running a host-based WAF. They would redirect Route 53 to resolve to the new WAF tier ELB. The WAF tier would pass the traffic to the current web tier. The web tier Security Groups would be updated to only allow traffic from the WAF tier Security Group Answer - C A web application firewall (WAF) inspects HTTP/HTTPS requests at the application layer and can filter out malicious patterns such as SQL injection before they reach the web tier. Option A is incorrect because Direct Connect only provides a dedicated network connection; it does not inspect or filter application-layer traffic, so it offers no protection against SQL injection. Option B is incorrect because denying previously identified source IPs in a NACL is purely reactive; SQL injection attempts can come from any new IP address, so this does not mitigate the attack itself. Option C is CORRECT because an auto-scaled WAF tier in front of the web tier can inspect requests and block SQL injection attempts, it scales with the traffic, and restricting the web tier security groups to the WAF tier ensures all traffic passes through the WAF. Option D is incorrect because restricting the ELB listeners to TLS and enabling protocol filtering does not provide WAF functionality; the ELB does not inspect request payloads for SQL injection. For more information on AWS WAF, please visit the below URL: https://aws.amazon.com/waf/
Which of the following are correct statements with respect to policy evaluation logic in AWS Identity and Access Management? Choose 2 answers from the below options: A. An explicit deny does not override an explicit allow. B. By default, all requests are allowed. C. An explicit allow overrides default deny. D. An explicit allow overrides an explicit deny. E. By default, all requests are denied.
C. An explicit allow overrides default deny. E. By default, all requests are denied. Answer - C and E Option A is incorrect because an explicit deny always overrides an explicit allow. Option B is incorrect because all requests are denied by default. Option C is CORRECT because an explicit allow overrides the default deny. Option D is incorrect because an explicit deny cannot be overridden by an explicit allow. Option E is CORRECT because all requests are denied by default. For more information on the IAM policy evaluation logic, please refer to the link: http://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_evaluation-logic.html
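The evaluation logic can be illustrated with a small policy (the bucket name is a placeholder), written here as a Python dict:

# Illustrative IAM policy as a Python dict (bucket name is a placeholder).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Explicit Allow: overrides the default (implicit) deny.
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-bucket/*",
        },
        {   # Explicit Deny: wins over the Allow above for this prefix.
            "Effect": "Deny",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-bucket/confidential/*",
        },
    ],
}
# Result: GetObject on example-bucket/public/report.txt is allowed;
# GetObject on example-bucket/confidential/salaries.csv is denied;
# any action not mentioned at all (e.g. s3:PutObject) falls back to the default deny.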
In Amazon Cognito, your mobile app authenticates with the Identity Provider (IdP) using the provider's SDK. Once the end user is authenticated with the IdP, the OAuth or OpenID Connect token returned from the IdP is passed by your app to Amazon Cognito. Which of the following is returned for the user to provide a set of temporary, limited-privilege AWS credentials? A. Cognito SDK B. Cognito Key pair C. Cognito Identity ID D. Cognito API
C. Cognito Identity ID Answer - C If you're allowing unauthenticated users, you can retrieve a unique Amazon Cognito identifier (identity ID) for your end user immediately. If you're authenticating users, you can retrieve the identity ID after you've set the login tokens in the credentials provider For more information on Cognito ID, please refer to the below link: http://docs.aws.amazon.com/cognito/latest/developerguide/getting-credentials.html
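To make the token-for-credentials exchange concrete, here is a boto3 sketch (the identity pool ID and the IdP token are placeholders):

import boto3

cognito = boto3.client("cognito-identity", region_name="us-east-1")

IDENTITY_POOL_ID = "us-east-1:11111111-2222-3333-4444-555555555555"   # placeholder
LOGINS = {"graph.facebook.com": "<OAuth token returned by the IdP>"}   # placeholder token

# Step 1: pass the IdP token to Amazon Cognito and receive the Cognito identity ID.
identity_id = cognito.get_id(IdentityPoolId=IDENTITY_POOL_ID, Logins=LOGINS)["IdentityId"]

# Step 2: trade the identity ID (plus the same token) for temporary, limited-privilege credentials.
creds = cognito.get_credentials_for_identity(IdentityId=identity_id, Logins=LOGINS)["Credentials"]
print(identity_id, creds["Expiration"])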
There is a requirement to carry out the backup of an Oracle RAC cluster which is currently hosted on the AWS public cloud. How can this be achieved? A. Create manual snapshots of the RDS backup and write a script that runs the manual snapshot B. Enable Multi-AZ failover on the RDS RAC cluster to reduce the RPO and RTO in the event of disaster or failure. C. Create a script that runs snapshots against the EBS volumes to create backups and durability. D. Enable automated backups on the RDS RAC cluster; enable auto snapshot copy to a backup region to reduce RPO and RTO.
C. Create a script that runs snapshots against the EBS volumes to create backups and durability. Answer - C Currently, Oracle Real Application Clusters (RAC) is not supported on Amazon RDS as per the AWS documentation. However, you can deploy a scalable RAC cluster on Amazon EC2 using the published tutorial and Amazon Machine Images (AMI). So, in order to take backups, you need to take them in the form of EBS volume snapshots of the EC2 instances that are deployed for RAC. Option A, B, and D are all incorrect because RDS does not support Oracle RAC. Option C is CORRECT because Oracle RAC is supported via deployment on Amazon EC2; hence, for the data backup, you can create a script that takes snapshots of the EBS volumes. For more information on Oracle RAC on AWS, please visit the below URLs: https://aws.amazon.com/about-aws/what's-new/2015/11/self-managed-oracle-rac-on-ec2/ https://aws.amazon.com/articles/oracle-rac-on-amazon-ec2/ https://aws.amazon.com/blogs/database/amazon-aurora-as-an-alternative-to-oracle-rac/
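The backup script in option C could be as small as the boto3 sketch below; it assumes, purely for illustration, that the RAC nodes' data volumes carry a Backup=rac tag:

import boto3
from datetime import datetime

ec2 = boto3.client("ec2")

# Find the EBS volumes that belong to the RAC nodes (illustrative tag convention).
volumes = ec2.describe_volumes(
    Filters=[{"Name": "tag:Backup", "Values": ["rac"]}]
)["Volumes"]

timestamp = datetime.utcnow().strftime("%Y-%m-%dT%H-%M")
for vol in volumes:
    snap = ec2.create_snapshot(
        VolumeId=vol["VolumeId"],
        Description=f"RAC backup {vol['VolumeId']} {timestamp}",
    )
    print("started snapshot", snap["SnapshotId"], "for", vol["VolumeId"])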
You are using DynamoDB to store data in your application. In one of the tables named "Users", you have defined "UserID" as its primary key. However, you envision that, in some cases, you might need to query the table by "UserName", which cannot be set as the primary key. What changes would you make to this table to be able to query using UserName? Choose the correct option from the below: A. Create a second table that contains all the information, but make UserName the primary key. B. Create a hash and range primary key. C. Create a secondary index. D. Partition the table using UserName rather than UserID.
C. Create a secondary index. Answer - C Amazon DynamoDB provides fast access to items in a table by specifying primary key values. However, many applications might benefit from having one or more secondary (or alternate) keys available, to allow efficient access to data with attributes other than the primary key. To address this, you can create one or more secondary indexes on a table, and issue Query or Scan requests against these indexes. Option A is incorrect because creating another table is costly and unnecessary.Option B is incorrect because UserName cannot be primary key.Option C is CORRECT because, as mentioned above, creating a secondary index on UserName would allow the user to efficiently access the table via querying on this attribute rather than UserID which is the primary key.Option D is incorrect because DynamoDB tables are partitioned based on the primary key and you cannot make UserName as the primary key. http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GSI.html
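A rough boto3 sketch of adding and querying such an index (the index name and capacity numbers are illustrative assumptions):

import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.client("dynamodb")

# Add a global secondary index keyed on UserName to the existing Users table.
dynamodb.update_table(
    TableName="Users",
    AttributeDefinitions=[{"AttributeName": "UserName", "AttributeType": "S"}],
    GlobalSecondaryIndexUpdates=[{
        "Create": {
            "IndexName": "UserName-index",
            "KeySchema": [{"AttributeName": "UserName", "KeyType": "HASH"}],
            "Projection": {"ProjectionType": "ALL"},
            "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
        }
    }],
)

# Once the index is ACTIVE, query by UserName instead of the UserID primary key.
table = boto3.resource("dynamodb").Table("Users")
result = table.query(
    IndexName="UserName-index",
    KeyConditionExpression=Key("UserName").eq("alice"),
)
print(result["Items"])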
A company has recently started using Docker cloud. This is a SaaS solution for managing Docker containers on the cloud. There is a requirement for the SaaS solution to access AWS resources. Which of the following options would meet the requirement for enabling the SaaS solution to work with AWS resources in the most secured manner? A. From the AWS Management Console, navigate to the Security Credentials page and retrieve the access and secret key for your account. B. Create an IAM user within the enterprise account assign a user policy to the IAM user that allows only the actions required by the SaaS application. Create a new access and secret key for the user and provide these credentials to the SaaS provider. C. Create an IAM role for cross-account access allows the SaaS provider's account to assume the role and assign it a policy that allows only the actions required by the SaaS application. D. Create an IAM role for EC2 instances, assign it a policy that allows only the actions required for the Saas application to work, provide the role ARM to the SaaS provider to use when launching their application instances.
C. Create an IAM role for cross-account access allows the SaaS provider's account to assume the role and assign it a policy that allows only the actions required by the SaaS application. Answer - C When a user, a resource, an application, or any service needs to access any AWS service or resource, always prefer creating an appropriate role that has least-privileged or only the required access, rather than sharing long-term credentials such as keys. Option A is incorrect because you should never share your account's access and secret keys. Option B is incorrect because (a) even though the IAM user may have the appropriate policy attached to it, its long-term access and secret keys would have to be handed over to the SaaS provider, where they could be compromised, and (b) creating an appropriate cross-account role is always the better solution rather than creating a user. Option C is CORRECT because a cross-account IAM role allows the SaaS provider's account to assume the role and access only the necessary resources. Many SaaS platforms can access AWS resources via a cross-account role created in AWS; if you go to Roles in your identity management, you will see the ability to add a cross-account role. Option D is incorrect because the role is to be assumed by the SaaS provider's application, not attached to EC2 instances in your account. For more information on the cross-account role, please visit the below URL: http://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_cross-account-with-roles.html
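Purely as an illustration of what such a cross-account role's trust policy can look like (the SaaS provider's account ID and the external ID below are made-up placeholders); the SaaS platform then calls sts:AssumeRole to obtain temporary credentials scoped by the role's permission policy:

import json

# Trust policy for the cross-account role (account ID and external ID are placeholders).
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::123456789012:root"},  # SaaS provider's account
        "Action": "sts:AssumeRole",
        # An external ID guards against the "confused deputy" problem.
        "Condition": {"StringEquals": {"sts:ExternalId": "example-external-id"}},
    }],
}
print(json.dumps(trust_policy, indent=2))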
An auditor has been called upon to carry out an audit of the configuration of your AWS accounts. The auditor has specified that they just want to read only access to the AWS resources on all accounts. Which of the below options would help the auditor get the required access? A. Create an IAM user for each AWS account with read-only permission policies for the auditor, and disable each account when the audit is complete. B. Configure an on-premise AD server and enable SAML and identify federation for single sign-on to each AWS account. C. Create an IAM role with read-only permissions to all AWS services in each AWS account. Create one auditor IAM account and add a permissions policy that allows the auditor to assume the ARN role for each AWS account that has an assigned role. D. Create a custom identity broker application that allows the auditor to use existing Amazon credentials to log into the AWS environments.
C. Create an IAM role with read-only permissions to all AWS services in each AWS account. Create one auditor IAM account and add a permissions policy that allows the auditor to assume the ARN role for each AWS account that has an assigned role. Answer - C Option A is incorrect because creating and later disabling an IAM user in every AWS account is unnecessary overhead and is less preferred compared to creating IAM roles. Option B is incorrect because standing up an on-premises AD server with SAML federation just to give a single auditor read-only access is significant overhead for this requirement. Option C is CORRECT because it creates, in each account, an IAM role with read-only permission policies attached, and lets a single auditor IAM account assume the appropriate role while accessing the resources. Option D is incorrect because a custom identity broker is more complex and less secure than using IAM roles with the Security Token Service, which issues temporary credentials when a role is assumed. For more information on IAM roles, please refer to the below URL: http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html
An application, basically a mobile application needs access for each user to store data in a DynamoDB table. What is the best method for granting each mobile device that ensures the application has access DynamoDB tables for storage when required? Choose the correct options from the below: A. During the install and game configuration process, have each user create an IAM credential and assign the IAM user to a group with proper permissions to communicate with DynamoDB. B. Create an IAM group that only gives access to your application and to the DynamoDB tables. Then, when writing to DynamoDB, simply include the unique device ID to associate the data with that specific user. C. Create an IAM role with the proper permission policy to communicate with the DynamoDB table. Use web identity federation, which assumes the IAM role using AssumeRoleWithWebIdentity, when the user signs in, granting temporary security credentials using STS. D. Create an Active Directory server and an AD user for each mobile application user. When the user signs in to the AD sign-on, allow the AD server to federate using SAML 2.0 to IAM and assign a role to the AD user which is the assumed with AssumeRoleWithSAML.
C. Create an IAM role with the proper permission policy to communicate with the DynamoDB table. Use web identity federation, which assumes the IAM role using AssumeRoleWithWebIdentity, when the user signs in, granting temporary security credentials using STS. Answer - C Option A is incorrect because IAM Roles are preferred over IAM Users; IAM Users have to access the AWS resources using access and secret keys, which is a security concern. Option B is incorrect because an IAM group cannot be assigned to an application, and relying on a device ID does not authenticate the user or grant per-user credentials, so this is not a feasible configuration. Option C is CORRECT because it (a) creates an IAM Role with the permissions needed to connect to DynamoDB, (b) authenticates the users with Web Identity Federation, and (c) lets the application access DynamoDB with temporary credentials that are issued by STS. Option D is incorrect because setting up an Active Directory (AD) server and using AD for authenticating the mobile users is unnecessary and costly. For more information on web identity federation, please refer to the below link: http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_oidc.html
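A minimal boto3 sketch of the credential exchange (the role ARN and the IdP token are placeholders; on a real device, the mobile SDK typically wraps this call):

import boto3

sts = boto3.client("sts")

# Exchange the web identity token from the IdP for temporary credentials
# scoped by the DynamoDB-access role's permission policy.
resp = sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam::123456789012:role/MobileAppDynamoDBRole",   # placeholder role ARN
    RoleSessionName="mobile-user-session",
    WebIdentityToken="<token returned by Amazon/Google/Facebook login>",  # placeholder
    DurationSeconds=3600,
)
creds = resp["Credentials"]

# Use the temporary credentials to talk to DynamoDB on behalf of the user.
dynamodb = boto3.client(
    "dynamodb",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)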
As an IT administrator, you have been requested to manage the CloudFormation stacks for a set of developers in your company. A set of web and database developers will be working on the application. How would you design the CloudFormation stacks in the best way possible? A. CloudFormation is not the right fit, use OpsWork instead. B. Create one stack for the web and database developers. C. Create separate stacks for the web and database developers. D. Define separate EC2 instances since defining CloudFormation can get cumbersome.
C. Create separate stacks for the web and database developers. Answer - C Option A is incorrect because CloudFormation is best for creating and maintaining all the infrastructure resources in the cloud environment.Option B is incorrect because as your stack grows in scale and broadens in scope, managing a single stack can be cumbersome and time consuming. Also, coordinating and communicating updates can become difficult.Option C is CORRECT because (a) having multiple (or sub) stacks is easier to maintain, (b) there is a clear separation of ownership and concerns, (c) better chances of you staying within the limit for 'Template body size' which happens to be 460,800 bytes, and (d) you can reuse common template patterns. See "More information..." section for more details.Option D is incorrect because you can provision and maintain the infrastructure if the CloudFormation templates are created correctly. More information on CloudFormation Best Practices: The following use case scenario is given in the AWS documentation to support the answer: For example, imagine a team of developers and engineers who own a website that is hosted on autoscaling instances behind a load balancer. Because the website has its own lifecycle and is maintained by the website team, you can create a stack for the website and its resources. Now imagine that the website also uses back-end databases, where the databases are in a separate stack that is owned and maintained by database administrators. Whenever the website team or database team needs to update their resources, they can do so without affecting each other's stack. If all resources were in a single stack, coordinating and communicating updates can be difficult. For more information on Cloudformation best practices, please visit the below URL http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html
By default, when an EBS volume is attached to a Windows instance, it may show up as any drive letter on the instance. For which services can you use to change the settings of the drive letters of the EBS volumes per your specifications? A. EBSConfig Service B. AMIConfig Service C. EC2Config Service D. EC2-AMIConfig Service
C. EC2Config Service Answer - C Windows AMIs include an optional service called the EC2Config service (EC2Config.exe). EC2Config starts when the instance boots and performs tasks during startup and each time you stop or start the instance. EC2Config can also perform tasks on demand. Some of these tasks are automatically enabled, while others must be enabled manually. Although optional, this service provides access to advanced features that aren't otherwise available. This service runs in the LocalSystem account. For more information on EC2 Config service, please visit the link http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/UsingConfig_WinAMI.html
Which of the following features ensures even distribution of traffic to Amazon EC2 instances in multiple Availability Zones registered with a load balancer? A. Elastic Load Balancing request routing B. An Amazon Route 53 weighted routing policy C. Elastic Load Balancing cross-zone load balancing D. An Amazon Route 53 latency routing policy
C. Elastic Load Balancing cross-zone load balancing Answer: C Option A is incorrect because there is no such request routing option available on ELB. Option B is incorrect because Route 53 weighted routing helps resolve DNS requests to different endpoints; even though it is DNS-level load balancing, it does not balance the load on instances across multiple Availability Zones while registering/unregistering instances based on health checks - that functionality is carried out by ELB. Option C is CORRECT because you can enable cross-zone load balancing on the ELB to evenly distribute the traffic across instances in multiple AZs. Option D is incorrect because Route 53 latency-based routing resolves DNS queries to the resources that provide the best latency; it will not help in this scenario. To get more information on ELB cross-zone load balancing, please refer to the link: http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/enable-disable-crosszone-lb.html
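For a Classic Load Balancer, cross-zone load balancing is a single attribute change; a boto3 sketch (the load balancer name is a placeholder):

import boto3

elb = boto3.client("elb")  # Classic Load Balancer API

elb.modify_load_balancer_attributes(
    LoadBalancerName="my-classic-elb",  # placeholder name
    LoadBalancerAttributes={
        # Distribute requests evenly across all registered instances,
        # regardless of which Availability Zone they are in.
        "CrossZoneLoadBalancing": {"Enabled": True}
    },
)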
You have been asked to design network connectivity between your existing data centers and AWS. Your application's EC2 instances must be able to connect to existing backend resources located in your data center. Network traffic between AWS and your data centers will start small, but ramp up to 10s of GB per second over the course of several months. The success of your application is dependent upon getting to market quickly. Which of the following design options will allow you to meet your objectives? A. Quickly create an internal ELB for your backend applications, submit a Direct Connect request to provision a 1 Gbps cross connect between your data center and VPC, then increase the number or size of your Direct Connect connections as needed. B. Allocate EIPs and an Internet Gateway for your VPC instances to use for quick, temporary access to your backend applications, then provision a VPN connection between a VPC and existing on -premises equipment. C. Provision a VPN connection between a VPC and existing on-premises equipment, submit a Direct Connect partner request to provision cross connects between your data center and the Direct Connect location, then cut over from the VPN connection to one or more Direct Connect connections as needed. D. Quickly submit a Direct Connect request to provision a 1 Gbps cross connect between your data center and VPC, then increase the number or size of your Direct Connect connections as needed.
C. Provision a VPN connection between a VPC and existing on-premises equipment, submit a Direct Connect partner request to provision cross connects between your data center and the Direct Connect location, then cut over from the VPN connection to one or more Direct Connect connections as needed. Answer - C The most important considerations in this scenario are: (1) the network traffic will be small initially and will increase in the future, and (2) the application should be up quickly, so time is critical. Note that it initially takes time to set up AWS Direct Connect (see the link below for the latest information): https://docs.aws.amazon.com/directconnect/latest/UserGuide/getting_started.html Option A is incorrect because setting up Direct Connect will take time, so the backend servers would not be connected quickly. Option B is incorrect because provisioning only a VPN is not a long-term solution, since the traffic will increase to over 10 Gbps. Option C is CORRECT because (a) it provides a quick connection between the on-premises data center and AWS via VPN, and (b) it also initiates provisioning of a Direct Connect connection to tackle the requirement of higher bandwidth (10s of Gbps) later. Option D is incorrect because setting up Direct Connect will take time and the application will not be up in time, which is critical. For more information on VPN and Direct Connect, please visit the link below: https://datapath.io/resources/blog/aws-direct-connect-vs-vpn-vs-direct-connect-gateway/
A user is planning to set up-the Multi-AZ feature of RDS. Which of the below-mentioned conditions won't take advantage of the Multi-AZ feature? A. Availability zone outage B. A manual failover of the DB instance using Reboot with failover option C. Region outage D. When the user changes the DB instance's server type
C. Region outage Answer - C Amazon RDS handles failovers automatically so you can resume database operations as quickly as possible without administrative intervention. The primary DB instance switches over automatically to the standby replica if any of the following conditions occur:
An Availability Zone outage
The primary DB instance fails
The DB instance's server type is changed
The operating system of the DB instance is undergoing software patching
A manual failover of the DB instance was initiated using Reboot with failover
Hence, options A, B, and D are incorrect. Option C is CORRECT because if there is a region-wide failure, the Multi-AZ feature may not work. For more information on Multi-AZ RDS, please visit the link: https://aws.amazon.com/rds/details/multi-az/
You run an ad-supported photo sharing website using S3 to serve photos to visitors of your site. At some point you find out that other sites have been linking to the photos on your site, causing loss to your business. What is an effective method to mitigate this? Choose the correct answer from the below options: A. Use CloudFront distributions for static content. B. Store photos on an EBS volume of the web server. C. Remove public read access and use signed URLs with expiry dates. D. Block the IPs of the offending websites in Security Groups.
C. Remove public read access and use signed URLs with expiry dates. Answer - C You can distribute private content using a signed URL that is valid for only a short time—possibly for as little as a few minutes. Signed URLs that are valid for such a short period are good for distributing content on-the-fly to a user for a limited purpose, such as distributing movie rentals or music downloads to customers on demand. Option A is incorrect because using CloudFront is an expensive option compared to using signed URLs.Option B is incorrect because the website is hosted on S3.Option C is CORRECT because, as mentioned above, it will ensure that only the trusted/authenticated users get access to the content.Option D is incorrect because the website is hosted on S3 which does not have any security group setting. For more information on Signed URL's please visit the below link http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-signed-urls.html
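If the photos stay in S3 with public read removed, a short-lived pre-signed URL can be generated as in this boto3 sketch (bucket and key names are placeholders); CloudFront signed URLs, referenced in the link above, follow the same idea but are built with a CloudFront key pair:

import boto3

s3 = boto3.client("s3")

# Generate a GET URL that stops working after 5 minutes; the object itself
# stays private (no public-read ACL), so hot-linking the raw S3 URL fails.
url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={"Bucket": "example-photo-bucket", "Key": "photos/sunset.jpg"},  # placeholders
    ExpiresIn=300,
)
print(url)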
You've created a temporary application that accepts image uploads, stores them in S3, and records the information about the images in RDS. After building this architecture and accepting the images for the duration required, it's time to delete the CloudFormation template. However, your manager has informed you that, for some reason, they need to ensure that a backup is taken of the RDS when the CloudFormation template is deleted. Which of the options below will fulfill the above requirement? A. Enable S3 bucket replication on the source bucket to a destination bucket to maintain a copy of all the S3 objects, set the deletion policy for the RDS instance to delete. B. For both the RDS and S3 resource types on the CloudFormation template, set the DeletionPolicy to Retain. C. Set the DeletionPolicy on the RDS resource to snapshot. D. Set the DeletionPolicy on the RDS resource to retain.
C. Set the DeletionPolicy on the RDS resource to snapshot. Answer - C The main point in this scenario is that even if the CloudFormation stack is deleted, there should be a way to restore the RDS data if needed. Option A is incorrect because the DeletionPolicy of the RDS instance should be set to snapshot; if delete is used, the resource would get deleted and the data could not be restored in the future. Option B is incorrect because the DeletionPolicy attribute for RDS should be snapshot, not retain: with the snapshot option, the backup of the RDS instance is stored in the form of a snapshot (which is the requirement), whereas with the retain option CloudFormation will keep the RDS instance alive, which is unwanted. There is no such requirement on S3. Option C is CORRECT because it correctly sets the DeletionPolicy of the RDS resource to snapshot so that the data can be restored from the snapshot if needed. Option D is incorrect because it sets the DeletionPolicy of the RDS resource to retain, which will keep the RDS instance alive; it just needs to take a snapshot. More information on DeletionPolicy in CloudFormation. DeletionPolicy options include:
Retain: You retain the resource in the event of a stack deletion.
Snapshot: You get a snapshot of the resource before it's deleted. This option is available only for resources that support snapshots.
Delete: You delete the resource along with the stack. This is the default outcome if you don't set a DeletionPolicy.
To keep or copy resources when you delete a stack, you can specify either the Retain or Snapshot policy options. With the DeletionPolicy attribute, you can preserve or (in some cases) back up a resource when its stack is deleted. You specify a DeletionPolicy attribute for each resource that you want to control. If a resource has no DeletionPolicy attribute, AWS CloudFormation deletes the resource by default. For more information on the CloudFormation deletion policy, please visit the below URL: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-deletionpolicy.html
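The relevant template fragment could look like the sketch below (written as a Python dict for consistency with the other examples; the resource name and properties are illustrative):

import json

# Illustrative CloudFormation fragment: snapshot the RDS instance on stack deletion.
template_fragment = {
    "Resources": {
        "ImagesDatabase": {
            "Type": "AWS::RDS::DBInstance",
            "DeletionPolicy": "Snapshot",   # a final snapshot is taken before deletion
            "Properties": {
                "Engine": "mysql",
                "DBInstanceClass": "db.t2.micro",
                "AllocatedStorage": "20",
                "MasterUsername": "admin",
                "MasterUserPassword": "change-me",   # placeholder only
            },
        }
    }
}
print(json.dumps(template_fragment, indent=2))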
You are responsible for a web application that consists of an Elastic Load Balancer (ELB) in front of an Auto Scaling group of Amazon Elastic Compute Cloud (EC2) instances. For a recent deployment of a new version of the application, a new Amazon Machine Image (AMI) was created, and the Auto Scaling group was updated with a new launch configuration that refers to this new AMI. During the deployment, you received complaints from users that the website was responding with errors. All instances passed the ELB health checks. What should you do in order to avoid errors for future deployments? (Choose 2 answers) A. Add an Elastic Load Balancing health check to the Auto Scaling group. Set a short period for the health checks to operate as soon as possible in order to prevent premature registration of the instance to the load balancer. B. Enable EC2 instance CloudWatch alerts to change the launch configuration AMI to the previous one. Gradually terminate instances that are using the new AMI. C. Set the Elastic Load Balancing health check configuration to target a part of the application that fully tests application health and returns an error if the tests fail. D. Create a new launch configuration that refers to the new AMI, and associate it with the group. Double the size of the group, wait for the new instances to become healthy, and reduce back to the original size. If new instances do not become healthy, associate the previous launch configuration. E. Increase the Elastic Load Balancing Unhealthy Threshold to a higher value to prevent an unhealthy instance from going into service behind the load balancer.
C. Set the Elastic Load Balancing health check configuration to target a part of the application that fully tests application health and returns an error if the tests fail. D. Create a new launch configuration that refers to the new AMI, and associate it with the group. Double the size of the group, wait for the new instances to become healthy, and reduce back to the original size. If new instances do not become healthy, associate the previous launch configuration. Answers - C & D In this scenario, the instances passed the ELB health checks, which means the new AMI was deployed successfully by the launch configuration and Auto Scaling group. The users nevertheless received errors once they started using the application, so the errors lie in the application itself and the current health check is too shallow to detect them. Option A is incorrect because shortening the health check period does not help here; the checks already pass even though the application is faulty. Option B is incorrect because you cannot change the launch configuration based on a CloudWatch alert. Option C is CORRECT because the current health check probably only verifies that the web site is reachable, not that the application is fully functioning. If the health check targets a part of the application that exercises it end to end, instances running the faulty application would fail the check and not be put into service. Option D is CORRECT because doubling the group size keeps the existing healthy instances serving traffic while the instances built from the new AMI come up; if the new instances do not become healthy, you can associate the previous launch configuration and roll back (a form of Blue/Green deployment). Option E is incorrect because increasing the Unhealthy Threshold only delays marking an instance as unhealthy; it does not prevent unhealthy instances from being deployed.
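As a small sketch of option C (assumptions: a classic load balancer and a hypothetical /health page that runs the application's own end-to-end checks and returns a non-200 status if anything fails), the health check target could be configured in CloudFormation like this:
{
  "Resources" : {
    "MyLoadBalancer" : {
      "Type" : "AWS::ElasticLoadBalancing::LoadBalancer",
      "Properties" : {
        "AvailabilityZones" : { "Fn::GetAZs" : "" },
        "Listeners" : [{
          "LoadBalancerPort" : "80",
          "InstancePort" : "80",
          "Protocol" : "HTTP"
        }],
        "HealthCheck" : {
          "Target" : "HTTP:80/health",
          "HealthyThreshold" : "3",
          "UnhealthyThreshold" : "2",
          "Interval" : "15",
          "Timeout" : "5"
        }
      }
    }
  }
}
The key piece is the Target: pointing it at a deep health-check page rather than the site root makes the ELB fail instances whose application logic is broken, even though the web server itself still responds.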
An ERP application is deployed in multiple Availability Zones in a single region. In the event of failure, the RTO must be less than 3 hours and the RPO is 15 minutes. The customer realizes that data corruption occurred roughly 1.5 hours ago. Which DR strategy can be used to achieve this RTO and RPO in the event of this kind of failure? A. Take 15-minute DB backups stored in Amazon Glacier, with transaction logs stored in Amazon S3 every 5 minutes. B. Use synchronous database master-slave replication between two Availability Zones. C. Take hourly DB backups to Amazon S3, with transaction logs stored in S3 every 5 minutes. D. Take hourly DB backups to an Amazon EC2 instance store volume, with transaction logs stored in Amazon S3 every 5 minutes.
C. Take hourly DB backups to Amazon S3, with transaction logs stored in S3 every 5 minutes. Answer - C Option A is incorrect because restoring backups from Amazon Glacier is slow and would definitely not meet the RTO. Option B is incorrect because synchronous replication does not give you point-in-time recovery; the standby always holds the latest data, so the corruption would have been replicated as well. Option C is CORRECT because hourly backups to Amazon S3 can be restored quickly, and since the transaction logs are stored in S3 every 5 minutes, the database can be rolled forward to a point just before the corruption, which keeps the data loss within the RPO of 15 minutes. Option D is incorrect because an instance store volume is ephemeral, i.e. the data can be lost when the instance is terminated. NOTE: Although Glacier supports expedited retrieval (On-Demand and Provisioned), it is an expensive option and is recommended only for occasional urgent requests for a small number of archives. Even if we went with Glacier as the solution, the option also mentions taking database backups every 15 minutes. If you keep taking backups every 15 minutes, the database users are going to face a lot of outages during the backups (due to I/O suspension, especially in a non-Multi-AZ deployment). Also, the backup process may not even finish within 15 minutes! As an architect you need to use the database change (transaction) logs along with the backups to restore your database to a point in time. Since option C stores the transaction logs at 5-minute intervals, you can easily restore your database and meet the RPO of 15 minutes. Hence, C is the best choice.
Which of the below components is used by AWS Data Pipeline to poll for tasks and then performs those tasks? A. Definition Syntax File B. S3 C. Task Runner D. AWS OpsWork
C. Task Runner Answer - C Task Runner is a task agent application that polls AWS Data Pipeline for scheduled tasks and executes them on Amazon EC2 instances, Amazon EMR clusters, or other computational resources, reporting status as it does so. For more information on Task Runner in AWS Data Pipeline, please refer to the below link http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-using-task-runner.html
You created three S3 buckets - "mydomain.com", "downloads.mydomain.com", and "www.mydomain.com". You uploaded your files, enabled static website hosting, specified both of the default documents under the "enable static website hosting" header, and set the "Make Public" permission for the objects in each of the three buckets. All that's left for you to do is to create the Route 53 Aliases for the three buckets. You are going to have your end users test your websites by browsing to http://mydomain.com/error.html, http://downloads.mydomain.com/index.html, and http://www.mydomain.com. What problems will your testers encounter? Choose an option from the below: A. http://mydomain.com/error.html will not work because you did not set a value for the error.html file B. http://www.mydomain.com will not work because the URL does not include a file name at the end of it C. There will be no problems, all three sites should work D. http://downloads.mydomain.com/index.html will not work because the "downloads" prefix is not a supported prefix for S3 websites using Route 53 aliases
C. There will be no problems, all three sites should work Answer - C Previously, the only domain prefix allowed when creating AWS Route 53 aliases for AWS S3 static websites was "www". However, this is no longer the case; you can now use other sub-domains such as "downloads". For more information on S3 website hosting, please visit the below link: http://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html
You are building a website that will retrieve and display highly sensitive information to users. The amount of traffic the site will receive is known and not expected to fluctuate. The site will leverage SSL to protect the communication between the clients and the web servers. Due to the nature of the site you are very concerned about the security of your SSL private key and want to ensure that the key cannot be accidentally or intentionally moved outside your environment. Additionally, while the data the site will display is stored on an encrypted EBS volume, you are also concerned that the web servers' logs might contain some sensitive information; therefore, the logs must be stored so that they can only be decrypted by employees of your company. Which of these architectures meets all of the requirements? A. Use Elastic Load Balancing to distribute traffic to a set of web servers. To protect the SSL private key, upload the key to the load balancer and configure the load balancer to offload the SSL traffic. Write your web server logs to an ephemeral volume that has been encrypted using a randomly generated AES key. B. Use Elastic Load Balancing to distribute traffic to a set of web servers. Use TCP load balancing on the load balancer and configure your web servers to retrieve the private key from a private Amazon S3 bucket on boot. Write your web server logs to a private Amazon S3 bucket using Amazon S3 server-side encryption. C. Use Elastic Load Balancing to distribute traffic to a set of web servers, configure the load balancer to perform TCP load balancing, use an AWS CloudHSM to perform the SSL transactions, and write your web server logs to a private Amazon S3 bucket using Amazon S3 server-side encryption. D. Use Elastic Load Balancing to distribute traffic to a set of web servers. Configure the load balancer to perform TCP load balancing, use an AWS CloudHSM to perform the SSL transactions, and write your web server logs to an ephemeral volume that has been encrypted using a randomly generated AES key.
C. Use Elastic Load Balancing to distribute traffic to a set of web servers, configure the load balancer to perform TCP load balancing, use an AWS CloudHSM to perform the SSL transactions, and write your web server logs to a private Amazon S3 bucket using Amazon S3 server-side encryption. Answer - C Options A and D are both incorrect because the logs, which contain sensitive information, are written to an ephemeral volume, so the data can be lost when the EC2 instance is terminated. Option B is incorrect because retrieving the SSL private key from an S3 bucket is not a secure way of managing the key for the SSL transactions. Option C is CORRECT because it uses CloudHSM to perform the SSL transactions without requiring any additional way of storing or managing the SSL private key; this is the most secure way of ensuring that the key will not be moved outside the environment. It also stores the logs in the highly available and durable S3 service with server-side encryption. More information on AWS CloudHSM: The AWS CloudHSM service helps you meet corporate, contractual and regulatory compliance requirements for data security by using dedicated Hardware Security Module (HSM) appliances within the AWS cloud. With CloudHSM, you control the encryption keys and cryptographic operations performed by the HSM. For more information on AWS CloudHSM, please refer to the link: https://aws.amazon.com/cloudhsm/
As a solution architect professional, you have been requested to launch 20 Large EC2 instances which will all be used to process huge amounts of data. There is also a requirement that these instances will need to transfer data back and forth among each other. Which of the following would be the most efficient setup to achieve this? Choose the correct option from the below: A. Ensure that all the instances are placed in the same region. B. Ensure that all instances are placed in the same availability zone. C. Use Placement Groups and ensure that all instances are launched at the same time. D. Use the largest EC2 instances currently available on AWS and make sure they are spread across multiple availability zones.
C. Use Placement Groups and ensure that all instances are launched at the same time. Answer - C Option A is incorrect because simply being in the same region does not make data transfer between the instances any faster; the instances would still experience network latency. Option B is incorrect because just being in the same Availability Zone is not sufficient; the instances should be added to a Placement Group to benefit from low network latency. Option C is CORRECT because a Placement Group gives applications the low-latency network performance necessary for the tightly-coupled node-to-node communication typical of many high-performance computing applications. Option D is incorrect because, despite being of the largest size, the EC2 instances would still experience network latency if they are not part of a Placement Group. More information on Placement Groups: A placement group is a logical grouping of instances within a single Availability Zone. Placement groups are recommended for applications that benefit from low network latency, high network throughput, or both. To provide the lowest latency and the highest packet-per-second network performance for your placement group, choose an instance type that supports enhanced networking. For more information on Placement Groups, please visit the URL: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
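As a rough, hypothetical sketch (the AMI ID and instance type are placeholders, and only one of the twenty instances is shown), a placement group and an instance launched into it could be declared in CloudFormation as follows:
{
  "Resources" : {
    "ComputeClusterGroup" : {
      "Type" : "AWS::EC2::PlacementGroup",
      "Properties" : { "Strategy" : "cluster" }
    },
    "ComputeInstance1" : {
      "Type" : "AWS::EC2::Instance",
      "Properties" : {
        "InstanceType" : "c4.large",
        "ImageId" : "ami-030f4133",
        "PlacementGroupName" : { "Ref" : "ComputeClusterGroup" }
      }
    }
  }
}
The cluster strategy packs the instances close together inside a single Availability Zone, which is what gives the group its low-latency, high-throughput networking.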
An organization has the requirement to store 10TB worth of scanned files. There is a requirement to have a search application in place which can be used to search through the scanned files. Which of the below is the best option for implementing the search facility? A. Use S3 with reduced redundancy to store and serve the scanned files. Install a commercial search application on EC2 Instances and configure with auto-scaling and an Elastic Load Balancer. B. Model the environment using CloudFormation. Use an EC2 instance running Apache webserver and an open source search application, stripe multiple standard EBS volumes together to store the scanned files with a search index. C. Use S3 with standard redundancy to store and serve the scanned files. Use CloudSearch for query processing, and use Elastic Beanstalk to host the website across multiple availability zones. D. Use a single-AZ RDS MySQL instance to store the search index for the scanned files and use an EC2 instance with a custom application to search based on the index.
C. Use S3 with standard redundancy to store and serve the scanned files. Use CloudSearch for query processing, and use Elastic Beanstalk to host the website across multiple availability zones. Answer - C The main considerations in this question are: (1) a storage type that can hold a large amount of data (10TB), (2) a search facility over the scanned files, and (3) an architecture that is cost effective, highly available, and durable. Tip: Whenever a storage service is needed that can store large amounts of data at low cost with high availability and durability, always think about using S3. Option A is incorrect because even though it uses S3, it installs a commercial search application on EC2 instances, which adds licensing cost and operational overhead compared to a managed search service. Option B is incorrect because striped EBS volumes are neither as durable nor as cost effective as S3, and they carry a maintenance overhead. Option C is CORRECT because (a) it uses S3 to store the scanned files, (b) it uses the managed CloudSearch service for query processing, and (c) hosting across multiple Availability Zones provides high availability. Option D is incorrect because a single-AZ RDS instance does not provide high availability. Amazon CloudSearch: With Amazon CloudSearch, you can quickly add rich search capabilities to your website or application. You don't need to become a search expert or worry about hardware provisioning, setup, and maintenance. With a few clicks in the AWS Management Console, you can create a search domain and upload the data that you want to make searchable, and Amazon CloudSearch will automatically provision the required resources and deploy a highly tuned search index. You can easily change your search parameters, fine tune search relevance, and apply new settings at any time. As your volume of data and traffic fluctuates, Amazon CloudSearch seamlessly scales to meet your needs. For more information on AWS CloudSearch, please visit the below link https://aws.amazon.com/cloudsearch/
You have large EC2 instances in your AWS infrastructure which you have recently setup. These instances carry out the task of creating JPEG files and store them on a S3 bucket and occasionally need to perform high computational tasks. After close monitoring you see that the CPUs of these instances remain idle most of the time. Which of the below solutions will ensure better utilization of resources? A. Use Amazon glacier instead of S3. B. Add additional large instances by introducing a task group. C. Use T2 instances if possible. D. Ensure the application hosted on the EC2 instances uses larger files on S3 to handle more load.
C. Use T2 instances if possible. Answer - C In this scenario the problem is that the large EC2 instances remain idle most of the time. The solution should therefore use instances that cost less but can still handle the occasional high computational tasks. T2 instances are Burstable Performance Instances that provide a baseline level of CPU performance with the ability to burst above the baseline. The baseline performance and ability to burst are governed by CPU Credits. T2 instances accumulate CPU Credits when they are idle and consume CPU Credits when they are active. T2 instances are the lowest-cost Amazon EC2 instance option, designed to dramatically reduce costs for applications that benefit from the ability to burst to full core performance whenever required. Option A is incorrect because there is no issue with the current use of S3, and Glacier is an archival service, not suited for serving the generated JPEG files. Option B is incorrect because adding more large instances would, on the contrary, add to the existing cost without improving utilization. Option C is CORRECT because T2 instances are cost-effective and still provide a baseline level of CPU performance with the ability to burst above the baseline whenever required. Option D is incorrect because using larger files on S3 does not make more efficient use of the instances and does not lower the cost of the architecture. For more information on instance types, please visit the below URL: https://aws.amazon.com/ec2/instance-types/t2/
An organization is generating digital policy files which are required by the admins for verification. Once the files are verified they may not be required in the future unless there is some compliance issue. Which is the best possible solution if the organization wants to save them in a cost-effective way? A. AWS RRS B. AWS S3 C. AWS RDS D. AWS Glacier
D. AWS Glacier Answer - D This question is basically asking you to choose a cost-effective archival solution, and Amazon Glacier is best suited for such scenarios. Amazon Glacier is an extremely low-cost storage service that provides secure, durable, and flexible storage for data backup and archival. With Amazon Glacier, customers can reliably store their data for as little as $0.004 per gigabyte per month. Amazon Glacier enables customers to offload the administrative burdens of operating and scaling storage to AWS, so that they don't have to worry about capacity planning, hardware provisioning, data replication, hardware failure detection and repair, or time-consuming hardware migrations. Options A and B are incorrect because RRS and S3 are meant for frequently accessed, real-time storage and are more expensive than Glacier for archival. Option C is incorrect because RDS is a database service, not an archival one. Option D is CORRECT, as explained above. For more information on Glacier please visit the link - https://aws.amazon.com/glacier/details/
Which of the below-mentioned ways can be used to provide additional layers of protection to all your EC2 resources? Choose the correct answer from the below options: A. Add policies which have deny and/or allow permissions on tagged resources. B. Ensure that the proper tagging strategies have been implemented to identify all of your EC2 resources. C. Add an IP address condition to policies that specify that requests to EC2 instances should come from a specific IP address or CIDR block range. D. All actions listed here would provide additional layers of protection.
D. All actions listed here would provide additional layers of protection. Answer - D Tagging, if done properly, allows you to understand which resources belong to the test, development, and production environments. Tags enable you to categorize your AWS resources in different ways, for example by purpose, owner, or environment. This is useful when you have many resources of the same type, because you can quickly identify a specific resource based on the tags you've assigned to it. Each tag consists of a key and an optional value, both of which you define. Once resources are tagged, you can also allow or deny permissions based on those tags. You can additionally use IP address conditions in IAM policies to deny access to AWS resources from outside specific IP ranges, for example:
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Deny",
    "Action": "*",
    "Resource": "*",
    "Condition": {
      "NotIpAddress": {
        "aws:SourceIp": [ "192.0.2.0/24", "203.0.113.0/24" ]
      }
    }
  }
}
Options A, B, and C all provide an additional layer of protection to EC2 resources. Hence, D is the best answer. For more information on tagging please see the below link: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html
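For option A, a hedged illustration of a policy that allows actions only on tagged resources could look like the following; the tag key Environment and value Production are hypothetical, not taken from the question:
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Action": [ "ec2:StartInstances", "ec2:StopInstances" ],
    "Resource": "arn:aws:ec2:*:*:instance/*",
    "Condition": {
      "StringEquals": { "ec2:ResourceTag/Environment": "Production" }
    }
  }
}
The ec2:ResourceTag condition key restricts the allowed start/stop actions to instances carrying the matching tag, which is how tagging and IAM policies combine into an extra layer of protection.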
Which of the following types of servers would this CloudFormation template be most appropriate for? Choose a correct answer from the below options:
{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Description" : "My CloudFormation Template",
  "Resources" : {
    "MyInstance" : {
      "Type" : "AWS::EC2::Instance",
      "Properties" : {
        "InstanceType" : "t2.micro",
        "ImageId" : "ami-030f4133",
        "NetworkInterfaces" : [{
          "AssociatePublicIpAddress" : "true",
          "DeviceIndex" : "0",
          "DeleteOnTermination" : "true",
          "SubnetId" : "subnet-0c2c0855",
          "GroupSet" : ["sg-53a4e434"]
        }]
      }
    }
  }
}
A. Domain Controller B. Log collection server C. Database server D. Bastion host
D. Bastion host Answer - D A bastion host needs only a minimal configuration and a public IP address, which is exactly what this template provisions: a t2.micro instance with a public IP address associated to its network interface. None of the other server types would typically run on such a minimal, publicly addressable instance. For more information on CloudFormation please visit the below link http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-whatis-concepts.html
You decide to configure a bucket for static website hosting. As per the AWS documentation, you create a bucket named 'mybucket.com' and then you enable website hosting with an index document of 'index.html' and you leave the error document as blank. You then upload a file named 'index.html' to the bucket. After clicking on the endpoint of mybucket.com.s3-website-us-east-1.amazonaws.com you receive 403 Forbidden error. You then change the CORS configuration on the bucket so that everyone has access, however, you still receive the 403 Forbidden error. What additional step do you need to do so that the endpoint is accessible to everyone? Choose the correct option from the below: A. Register mybucket.com on Route53 B. Wait for the DNS change to propagate C. You need to add a name for the error document, because it is a required field D. Change the permissions on the index.html file also, so that everyone has access
D. Change the permissions on the index.html file also, so that everyone has access Answer - D You are receiving the 403 Forbidden error because the index.html object itself does not grant public read access; enabling website hosting on the bucket is not enough. Option A is incorrect because this is an S3-hosted website accessed through its S3 website endpoint, so Route 53 does not come into the picture. Option B is incorrect because this is a static website hosted on S3; the issue is not related to DNS propagation. Option C is incorrect because even if you add an error document, you will still get the error until the proper permissions are set. Option D is CORRECT because it sets the appropriate permissions so that everyone has read access to index.html. For more information on website hosting in S3, please visit the below link: http://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html
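Instead of setting permissions object by object, a common alternative (a sketch only, not one of the question's answer choices) is a bucket policy that grants public read on every object in the bucket:
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "PublicReadForWebsiteObjects",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::mybucket.com/*"
  }]
}
Either approach works; the point is that the objects must be readable by anonymous users for the website endpoint to serve them.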
Which section in your CloudFormation template would you modify to fire up different instance sizes based off of environment type (Dev/Staging/Production)? Choose the correct answer from below options: A. Outputs B. Resources C. Mappings D. Conditions
D. Conditions Answer - D The optional Conditions section includes statements that define when a resource is created or when a property is defined. For example, you can compare whether a value is equal to another value and, based on the result of that condition, conditionally create resources or choose property values such as the instance size. If you have multiple conditions, separate them with commas. For more information on CloudFormation conditions please visit the below link http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/conditions-section-structure.html
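A minimal sketch of this idea, assuming a hypothetical EnvType parameter and placeholder AMI and instance types, is shown below; the Fn::If intrinsic function picks the instance size based on the condition:
{
  "Parameters" : {
    "EnvType" : {
      "Type" : "String",
      "AllowedValues" : [ "Dev", "Staging", "Production" ],
      "Default" : "Dev"
    }
  },
  "Conditions" : {
    "IsProduction" : { "Fn::Equals" : [ { "Ref" : "EnvType" }, "Production" ] }
  },
  "Resources" : {
    "AppInstance" : {
      "Type" : "AWS::EC2::Instance",
      "Properties" : {
        "ImageId" : "ami-030f4133",
        "InstanceType" : { "Fn::If" : [ "IsProduction", "m4.large", "t2.micro" ] }
      }
    }
  }
}
Launching the stack with EnvType set to Production would create an m4.large; any other environment gets a t2.micro.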
You are designing network connectivity for your fat client application. The application is designed for business travelers who must be able to connect to it from their hotel rooms, cafes, public Wi-Fi hotspots, and elsewhere on the Internet. However, you do not want to publish the application on the Internet. Which network design meets the above requirements while minimizing deployment and operational costs? Choose the correct answer from the options below: A. Implement AWS Direct Connect, and create a private interface to your VPC. Create a public subnet and place your application servers in it. B. Implement Elastic Load Balancing with an SSL listener that terminates the back-end connection to the application. C. Configure an IPsec VPN connection, and provide the users with the configuration details. Create a public subnet in your VPC, and place your application servers in it. D. Configure an SSL VPN solution in a public subnet of your VPC, then install and configure SSL VPN client software on all user computers. Create a private subnet in your VPC and place your application servers in it.
D. Configure an SSL VPN solution in a public subnet of your VPC, then install and configure SSL VPN client software on all user computers. Create a private subnet in your VPC and place your application servers in it. Answer - D Option A is incorrect because AWS Direct Connect is a dedicated physical connection; it is not a cost-effective solution for roaming users compared to a VPN solution, and placing the servers in a public subnet would expose them to the internet. Option B is incorrect because it does not describe how the application would be accessible only to the business travelers and not to the public. Option C is incorrect because the application servers are placed in a public subnet, so they would be publicly accessible via the internet. Option D is CORRECT because an SSL VPN solution is cost-effective, allows access only to the business travelers running the VPN client, and, since the application servers are in a private subnet, the application is not reachable directly from the internet.
A customer is hosting their company website on a cluster of web servers that are behind a public-facing load balancer. The customer also uses Amazon Route 53 to manage their public DNS. How should the customer configure the DNS zone apex record to point to the load balancer? A. Create an A record pointing to the IP address of the load balancer. B. Create a CNAME record pointing to the load balancer DNS name. C. Create a CNAME record aliased to the load balancer DNS name. D. Create an A record aliased to the load balancer DNS name.
D. Create an A record aliased to the load balancer DNS name. Answer - D Option A is incorrect because it suggests creating an A record pointing to the IP address of the ELB, but ELBs do not have fixed, predefined IP addresses. Options B and C are incorrect because a CNAME record cannot be used at the zone apex; you should create an alias record instead. See the "More information..." section for more details. Option D is CORRECT because it creates an A record, but instead of pointing to an IP address, it is aliased to the DNS name of the ELB. More information on ALIAS records: Alias resource record sets are virtual records that work like CNAME records. But they differ from CNAME records in that they are not visible to resolvers. Resolvers only see the A record and the resulting IP address of the target record. As such, unlike CNAME records, alias resource record sets are available to configure a zone apex (also known as a root domain or naked domain) in a dynamic environment. For more information on the zone apex, please refer to the link below: http://docs.aws.amazon.com/govcloud-us/latest/UserGuide/setting-up-route53-zoneapex-elb.html For more information on choosing between ALIAS and Non-ALIAS records, please refer to the link below: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-choosing-alias-non-alias.html?console_help=true
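As a rough CloudFormation sketch (assuming the same template also defines a classic load balancer resource with the logical name MyLoadBalancer, and that the domain name is a placeholder), an alias A record at the zone apex could be declared like this:
{
  "Resources" : {
    "ApexAliasRecord" : {
      "Type" : "AWS::Route53::RecordSet",
      "Properties" : {
        "HostedZoneName" : "example.com.",
        "Name" : "example.com.",
        "Type" : "A",
        "AliasTarget" : {
          "DNSName" : { "Fn::GetAtt" : [ "MyLoadBalancer", "DNSName" ] },
          "HostedZoneId" : { "Fn::GetAtt" : [ "MyLoadBalancer", "CanonicalHostedZoneNameID" ] }
        }
      }
    }
  }
}
From a resolver's point of view this still answers as a plain A record, which is why it is valid at the apex where a CNAME would not be.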
A company has a requirement to host an application behind an AWS ELB. The application will be supporting multiple device platforms. Each device platform will need separate SSL certificates assigned to it. Which of the below options is the best setup in AWS to fulfill the above requirement? A. Setup a hybrid architecture to handle multiple SSL certificates by using separate EC2 Instance groups running web applications for different platform types running in a VPC. B. Set up one ELB for all device platforms to distribute load among multiple instances under it. Each EC2 instance will have a different SSL certificate assigned to it. C. You just need to set up a single ELB. Since it supports multiple SSL certificates, it should be sufficient for the different device platforms. D. Create multiple ELBs for each type of certificate for each device platform.
D. Create multiple ELBs for each type of certificate for each device platform. Answer - D In this scenario, the main architectural considerations are (1) the web application has EC2 instances serving multiple device platforms such as Android, iOS, etc., and (2) a separate SSL certificate setup is required per platform. The best approach is to create a separate ELB per platform type. Option A is incorrect because such a hybrid architecture is not cost effective to operate. Option B is incorrect because with a single ELB for all the EC2 instances, distributing the load based on the platform type would be very cumbersome and may not be feasible at all. Option C is incorrect because even if the ELB could be set up with multiple SSL certificates, distributing the load based on the platform type would still not be feasible; you would still require multiple ELBs. Option D is CORRECT because (a) it creates separate ELBs for each platform type, so distribution of the load based on platform type becomes much more convenient and effective, and (b) each ELB can handle its own SSL termination. For more information on ELB, please visit the below URL https://aws.amazon.com/elasticloadbalancing/classicloadbalancer/faqs/
Explain what the following resource in a CloudFormation template does. Choose the best possible answer.
"SNSTopic" : {
  "Type" : "AWS::SNS::Topic",
  "Properties" : {
    "Subscription" : [{
      "Protocol" : "sqs",
      "Endpoint" : { "Fn::GetAtt" : [ "SQSQueue", "Arn" ] }
    }]
  }
}
A. Creates an SNS topic which allows SQS subscription endpoints to be added as a parameter on the template B. Creates an SNS topic and adds a subscription ARN endpoint for the SQS resource named Arn C. Creates an SNS topic and then invokes the call to create an SQS queue with a logical resource name of SQSQueue D. Creates an SNS topic and adds a subscription ARN endpoint for the SQS resource created under the logical name SQSQueue
D. Creates an SNS topic and adds a subscription ARN endpoint for the SQS resource created under the logical name SQSQueue Answer - D Option A is incorrect because the resource does not add any parameter to the template. Option B is incorrect because it does not add a subscription endpoint for an SQS resource named "Arn"; "Arn" is the attribute being retrieved from the resource whose logical name is SQSQueue. Option C is incorrect because this resource does not create any SQS queue; it only subscribes an existing one. Option D is CORRECT because the resource creates an SNS topic and adds a subscription whose endpoint is the ARN (obtained via Fn::GetAtt) of the SQS resource declared under the logical name SQSQueue. For more information on the Fn::GetAtt function please refer to the below link http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-getatt.html
A customer implemented AWS Storage Gateway with a gateway-cached volume at their main office. An event takes the link between the main and branch office offline. Which methods will enable the branch office to access their data? Choose 3 answers: A. Use a HTTPS GET to the Amazon S3 bucket where the files are located. B. Restore by implementing a lifecycle policy on the Amazon S3 bucket. C. Make an Amazon Glacier Restore API call to load the files into another Amazon S3 bucket within four to six hours. D. Launch a new AWS Storage Gateway instance AMI in Amazon EC2, and restore from a gateway snapshot. E. Create an Amazon EBS volume from a gateway snapshot, and mount it to an Amazon EC2 instance. F. Launch an AWS Storage Gateway virtual iSCSI device at the branch office, and restore from a gateway snapshot.
D. Launch a new AWS Storage Gateway instance AMI in Amazon EC2, and restore from a gateway snapshot. E. Create an Amazon EBS volume from a gateway snapshot, and mount it to an Amazon EC2 instance. F. Launch an AWS Storage Gateway virtual iSCSI device at the branch office, and restore from a gateway snapshot. Answers - D, E, & F Option A is incorrect because all gateway-cached volume data and snapshot data is stored in Amazon S3 encrypted at rest using server-side encryption (SSE) in a service-managed bucket, and it cannot be viewed or accessed with the S3 API or any other tools. (Ref: https://forums.aws.amazon.com/thread.jspa?threadID=109748) Option B is incorrect because you cannot apply lifecycle policies here; AWS Storage Gateway does not give you that option. Option C is incorrect because the cached volumes are never stored in Glacier. Option D is CORRECT because you can take point-in-time snapshots of gateway volumes that are made available in the form of Amazon EBS snapshots, and a new Storage Gateway instance can be launched in EC2 and restored from such a snapshot. Option E is CORRECT because a new EBS volume can be created from the gateway snapshot and mounted to an existing EC2 instance. Option F is CORRECT because a Volume Gateway exposes iSCSI devices that on-premises machines can mount, and the branch office can restore its data from the point-in-time snapshot onto such a device. For more information on this topic, please refer to the AWS FAQs: https://aws.amazon.com/storagegateway/faqs/
What would happen to an RDS (Relational Database Service) multi-Availability Zone deployment if the primary DB instance fails? A. The IP address of the primary DB instance is switched to the standby DB instance. B. The primary RDS (Relational Database Service) DB instance reboots and remains as primary. C. A new DB instance is created in the standby availability zone. D. The canonical name record (CNAME) is changed from primary to standby.
D. The canonical name record (CNAME) is changed from primary to standby. Answer - D Option A is incorrect because the failover is not performed by switching IP addresses; the instances keep their addresses and the redirection happens at the DNS level. Option B is incorrect because on failure RDS fails over to the standby rather than rebooting the primary and keeping it as primary. Option C is incorrect because no new DB instance is created in the standby Availability Zone; the existing standby takes over. Option D is CORRECT because the CNAME record of the DB instance endpoint is changed to point to the standby instance, so there is no impact on the application settings or on any references to the primary instance. More information on Amazon RDS Multi-AZ deployments: Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. In case of an infrastructure failure (for example, instance hardware failure, storage failure, or network disruption), Amazon RDS performs an automatic failover to the standby, so that you can resume database operations as soon as the failover is complete. As per the AWS documentation, the CNAME is changed to the standby DB when the primary one fails. For more information on Multi-AZ RDS, please visit the link: https://aws.amazon.com/rds/details/multi-az/
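As a brief, hypothetical CloudFormation sketch (the engine, size, and credentials are placeholders), Multi-AZ is simply a property on the DB instance; the standby replica and the DNS-based failover described above come with it automatically:
{
  "Resources" : {
    "ProductionDB" : {
      "Type" : "AWS::RDS::DBInstance",
      "Properties" : {
        "Engine" : "mysql",
        "DBInstanceClass" : "db.m4.large",
        "AllocatedStorage" : "100",
        "MultiAZ" : true,
        "MasterUsername" : "admin",
        "MasterUserPassword" : "ChangeMe123"
      }
    }
  }
}
Applications connect to the instance endpoint (a DNS name), which is why the CNAME switch during failover is transparent to them.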
Your team is excited about the use of AWS because now they have access to "programmable Infrastructure". You have been asked to manage your AWS infrastructure in a manner similar to the way you might manage application code. You want to be able to deploy exact copies of different versions of your infrastructure, stage changes into different environments, revert back to previous versions, and identify what versions are running at any particular time (development, test, QA , and production). Which approach addresses this requirement? A. Use cost allocation reports and AWS Opsworks to deploy and manage your infrastructure. B. Use AWS CloudWatch metrics and alerts along with resource tagging to deploy and manage your infrastructure. C. Use AWS Beanstalk and a version control system like GIT to deploy and manage your infrastructure. D. Use AWS CloudFormation and a version control system like GIT to deploy and manage your infrastructure.
D. Use AWS CloudFormation and a version control system like GIT to deploy and manage your infrastructure. Answer - D You can use AWS CloudFormation's sample templates or create your own templates to describe the AWS resources, and any associated dependencies or runtime parameters, required to run your application. You don't need to figure out the order for provisioning AWS services or the subtleties of making those dependencies work; CloudFormation takes care of this for you. After the AWS resources are deployed, you can modify and update them in a controlled and predictable way, in effect applying version control to your AWS infrastructure the same way you do with your software. You can also visualize your templates as diagrams and edit them using a drag-and-drop interface with the AWS CloudFormation Designer. Option A is incorrect because cost allocation reports are for billing analysis and do not help with versioning or staging infrastructure. Option B is incorrect because CloudWatch is used for monitoring metrics of AWS resources, not for deploying versioned copies of infrastructure. Option C is incorrect because Elastic Beanstalk versions and deploys application code, not the full infrastructure as declarative, versionable templates. Option D is CORRECT because AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion, and the templates themselves can be kept in a version control system like Git. For more information on CloudFormation, please visit the link: https://aws.amazon.com/cloudformation/
Which of the following reports in CloudFront can help find out the most popular requested objects at an edge location? Choose an answer from the options given below A. Cache Statistics B. Most requested C. Most Referred D. Top Referrers E. Popular Object
E. Popular Object Answer - E The Amazon CloudFront console can display a list of the 50 most popular objects for a distribution during a specified date range in the previous 60 days. Data for the Popular Objects report is drawn from the same source as CloudFront access logs. To get an accurate count of the top 50 objects, CloudFront counts the requests for all of your objects in 10-minute intervals beginning at midnight and keeps a running total of the top 150 objects for the next 24 hours. For more information on the popular objects report please visit the link http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/popular-objects-report.html
How can you secure data at rest on an EBS volume? A. Attach the volume to an instance using EC2's SSL interface. B. Write the data randomly instead of sequentially. C. Encrypt the volume using the S3 server-side encryption service. D. Create an IAM policy that restricts read and write access to the volume. E. Use an encrypted file system on top of the EBS volume.
E. Use an encrypted file system on top of the EBS volume. Answer - E To secure data at rest on an EBS volume, you either have to enable EBS encryption when the volume is created or encrypt the data yourself, for example by using an encrypted file system on top of the volume. Of the listed options, only E achieves this; hence, option E is CORRECT. SSL (option A) protects data in transit, not at rest; writing data randomly (option B) does nothing for confidentiality; S3 server-side encryption (option C) applies to S3 objects, not EBS volumes; and an IAM policy (option D) controls API access, not the data on the volume. For more information on EBS encryption, please refer to the link http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html
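As a small illustrative sketch of the first approach (encrypting the volume at creation time; the Availability Zone and size are placeholders), EBS encryption can simply be switched on in a CloudFormation volume definition:
{
  "Resources" : {
    "SecureDataVolume" : {
      "Type" : "AWS::EC2::Volume",
      "Properties" : {
        "AvailabilityZone" : "us-east-1a",
        "Size" : 100,
        "Encrypted" : true
      }
    }
  }
}
Option E instead layers encryption on top of an existing volume at the file-system level, which achieves the same goal for data at rest.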