Ultimate AWS Cloud Computing - Exam Questions Part I


You plan to deploy an application on AWS. This application needs to be PCI Compliant. Which of the below steps are needed to ensure compliance? Choose 3 answers from the below: A. Choose AWS services which are PCI Compliant B. Ensure the right steps are taken during application development for PCI Compliance C. Ensure the AWS Services are made PCI Compliant D. Do an audit after the deployment of the application for PCI Compliance

Answer - A, B and D. C is incorrect: some AWS services are PCI compliant, but the customer cannot "make" an AWS service PCI compliant. Instead, additional steps relating to development may be required on the customer's side to be fully PCI compliant.

Which AWS service provides infrastructure security optimization recommendations? A. AWS Application Programming Interface (API) B. Reserved Instances C. AWS Trusted Advisor D. Amazon Elastic Compute Cloud (Amazon EC2) Spot Fleet

Answer - C. The AWS documentation mentions the following: "An online resource to help you reduce cost, increase performance, and improve security by optimizing your AWS environment, Trusted Advisor provides real-time guidance to help you provision your resources following AWS best practices." For more information on AWS Trusted Advisor, please refer to the below URL: https://aws.amazon.com/premiumsupport/trustedadvisor/ Choices A, B, and D are incorrect. They are not related to infrastructure security optimization.

Which of the following AWS services is suitable to be used as a fully managed data warehouse? A. Amazon Athena B. Amazon RedShift C. Amazon CloudWatch D. Amazon Warehouse

Correct Answer - B Amazon Redshift is a fully managed, petabyte-scale data warehouse service. https://docs.aws.amazon.com/redshift/latest/gsg/getting-started.html Option A is INCORRECT because Amazon Athena is used to query data and analyze big data in S3. Option C is INCORRECT because Amazon CloudWatch is used to monitor AWS resources, collect metrics, configure alarms, etc. Option D is INCORRECT because there is no such AWS service.

There is a requirement to store objects. The objects must be downloadable via a URL. Which storage option would you choose? A. Amazon S3 B. Amazon Glacier C. Amazon Storage Gateway D. Amazon EBS

Answer - A. Amazon S3 is the right storage option here: each object can be assigned a URL, which can be used to download the object. For more information on AWS S3, please visit the Link: https://aws.amazon.com/s3/ B is incorrect. Glacier is for archival and long-term storage. This question checks your understanding of Amazon S3 terminology and use cases: it mentions "objects" that must be downloadable via a URL, which is not possible with EBS.
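As a minimal sketch of the idea, here is how an S3 object's virtual-hosted-style URL is formed from its bucket, region, and key. The bucket name, region, and key below are hypothetical examples, not values from the question.

```python
# Sketch: how an S3 object maps to a downloadable URL.
# Bucket name, region, and key are hypothetical examples.
def s3_object_url(bucket: str, region: str, key: str) -> str:
    """Build the virtual-hosted-style URL for an S3 object."""
    return f"https://{bucket}.s3.{region}.amazonaws.com/{key}"

url = s3_object_url("example-bucket", "us-east-1", "reports/2023/summary.pdf")
print(url)  # https://example-bucket.s3.us-east-1.amazonaws.com/reports/2023/summary.pdf
```

In practice a private object would be shared via a presigned URL instead, but the bucket-plus-key addressing scheme is the same.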

The Trusted Advisor service provides insight regarding which five categories of an AWS account? A. Security, fault tolerance, high availability, connectivity and Service Limits B. Security, access control, high availability, performance and Service Limits C. Performance, cost optimization, security, fault tolerance and Service Limits D. Performance, cost optimization, access control, connectivity and Service Limits

Answer - C For more information on the AWS Trusted Advisor, please visit the Link: https://aws.amazon.com/premiumsupport/trustedadvisor/

What is the value of having AWS Cloud services accessible through an Application Programming Interface (API)? A. It allows developers to work with AWS resources programmatically B. AWS resources will always be cost-optimized C. All application testing can be managed by AWS. D. Customer-owned, on-premises infrastructure becomes programmable.

Answer - A. APIs allow developers to easily work with the various AWS resources programmatically. For more information on the various programming tools available for AWS, please refer to the below URL: https://aws.amazon.com/tools/ Option B is incorrect. The AWS API does not reduce cost. Option C is incorrect. The API allows the customer's developers to work with resources, not AWS. Option D is incorrect. The AWS API only allows the customer to manage AWS resources, not on-premises infrastructure.

A weather tracking system is designed to track weather conditions of any particular flight route. Flight travelers all over the world make use of this information prior to booking their flights. Travelers expect quick turnaround time in which the weather display & flight booking will happen which is critical to their business. You have designed this website and are using AWS Route 53 DNS. The routing policy that you will apply to this website is A. GeoLocation routing policy B. Failover routing policy C. Multivalue answer routing policy D. Latency based routing policy

Answer: D. Reading the scenario carefully, the website's performance is of prime importance to its users: it enables them to choose their flight paths and make flight bookings on time. So "Latency based routing" is the best answer to this scenario. Option A is incorrect because GeoLocation routing is often used to localize content and present the website in the language of its users. Geolocation routing lets you choose the resources that serve your traffic based on your users' geographic location, meaning the location that DNS queries originate from. For example, you might want all queries from Europe to be routed to an ELB load balancer in the Frankfurt region irrespective of latency in that region. Option B is incorrect because Failover routing is usually used in Disaster Recovery scenarios where an Active-Passive disaster recovery configuration is present and the Passive resource, originally the backup, becomes the Active resource when the original Active resource is unhealthy. Option C is incorrect since Multivalue answer routing provides the ability to return multiple health-checkable IP addresses, which is a way to use DNS to improve availability and load balancing. Option D is CORRECT since Latency based routing always routes DNS queries to the best-performing website (region) irrespective of what happens in the Amazon infrastructure or the Internet. Going back to our scenario, if we have ELB load balancers in the US West (Oregon) region and the Asia Pacific (Mumbai) region for the weather tracking & airline ticketing website, and a user from London enters the name of your domain in a browser, the following things will happen: 1. DNS routes the query to a Route 53 name server. 2. Route 53 refers to its data on latency between London and the Mumbai region and between London and the Oregon region. 3. If latency is lower between London and the Oregon region, Route 53 responds to the query with the Oregon load balancer's IP address. If latency is lower between London and the Mumbai region, Route 53 responds with the Mumbai load balancer's IP address. https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html#routing-policy-latency https://youtu.be/BtiS0QyiTK8
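The steps above can be sketched as a toy model: Route 53 answers with the region that has the lowest measured latency for the querying user. The latency numbers below are illustrative assumptions, not real measurements.

```python
# Toy model of latency-based routing: answer the DNS query with the
# region that has the lowest measured latency to the user.
# Latency values are illustrative, not real measurements.
def pick_region(latencies_ms: dict) -> str:
    """Return the region with the lowest latency to the user."""
    return min(latencies_ms, key=latencies_ms.get)

# A user in London querying a site hosted in Oregon and Mumbai:
london_latencies = {"us-west-2 (Oregon)": 130, "ap-south-1 (Mumbai)": 110}
print(pick_region(london_latencies))  # ap-south-1 (Mumbai)
```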

Which of the following is a factor when calculating Total Cost of Ownership (TCO) for the AWS Cloud? A. The number of servers migrated to AWS B. The number of users migrated to AWS C. The number of passwords migrated to AWS D. The number of keys migrated to AWS

Answer - A. Running servers incur costs, and the number of running servers is one factor of server costs, a key component of AWS's Total Cost of Ownership (TCO). For more information on AWS TCO, please refer to the below URL: https://aws.amazon.com/blogs/aws/the-new-aws-tco-calculator/ B, C and D are incorrect. These are not factors in AWS's Total Cost of Ownership.
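To make the server-count factor concrete, here is some illustrative arithmetic; the hourly rate is a hypothetical placeholder, not a real AWS price.

```python
# Illustrative arithmetic: server count drives one component of a TCO estimate.
# The hourly rate is a hypothetical placeholder, not a real AWS price.
HOURS_PER_MONTH = 730

def monthly_server_cost(num_servers: int, hourly_rate: float) -> float:
    """Monthly compute cost for a fleet of always-on servers."""
    return num_servers * hourly_rate * HOURS_PER_MONTH

print(monthly_server_cost(10, 0.10))  # 730.0
```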

Which of the following is the concept of the Elastic load balancer? A. To distribute traffic to multiple EC2 Instances B. To scale up EC2 Instances C. To distribute traffic to AWS resources across multiple regions D. To increase the size of the EC2 Instance based on demand

Answer - A The AWS Documentation mentions the following A load balancer distributes incoming application traffic across multiple EC2 instances in multiple Availability Zones. This increases the fault tolerance of your applications. Elastic Load Balancing detects unhealthy instances and routes traffic only to healthy instances. For more information on the Elastic Load Balancer service, please refer to the below URL: https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/introduction.html
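A minimal sketch of the behaviour described above: send each request to the next healthy instance in round-robin order, skipping instances detected as unhealthy. The instance IDs are hypothetical.

```python
# Minimal sketch of what a load balancer does: distribute requests across
# healthy instances in round-robin order. Instance IDs are hypothetical.
from itertools import cycle

instances = [
    {"id": "i-aaa", "healthy": True},
    {"id": "i-bbb", "healthy": False},  # detected unhealthy; skipped
    {"id": "i-ccc", "healthy": True},
]
healthy = cycle([i["id"] for i in instances if i["healthy"]])

targets = [next(healthy) for _ in range(4)]
print(targets)  # ['i-aaa', 'i-ccc', 'i-aaa', 'i-ccc']
```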

Which of the following is the concept of Autoscaling? A. To scale out resources based on demand B. To distribute traffic to multiple EC2 Instances C. To distribute traffic to AWS resources across multiple regions D. To increase the size of the EC2 Instance based on demand

Answer - A The AWS Documentation mentions the following AWS Auto Scaling monitors your applications and automatically adjusts capacity to maintain steady, predictable performance at the lowest possible cost. Using AWS Auto Scaling, it's easy to setup application scaling for multiple resources across multiple services in minutes. For more information on the Auto Scaling service, please refer to the below URL: https://aws.amazon.com/autoscaling/
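As a rough sketch of the scaling decision, in the spirit of a target-tracking policy: add capacity when a metric is well above target, remove capacity when it is well below. The thresholds and metric values are illustrative assumptions, not AWS defaults.

```python
# Toy scale-out/scale-in decision, in the spirit of a target-tracking policy.
# Thresholds and metric values are illustrative assumptions.
def desired_capacity(current: int, cpu_pct: float, target: float = 50.0) -> int:
    if cpu_pct > target * 1.2:                   # sustained high load -> scale out
        return current + 1
    if cpu_pct < target * 0.5 and current > 1:   # low load -> scale in
        return current - 1
    return current                                # within band -> hold steady

print(desired_capacity(2, 80.0))  # 3 (scale out)
print(desired_capacity(3, 20.0))  # 2 (scale in)
```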

Which AWS service automates infrastructure provisioning and administrative tasks for an analytical data warehouse? A. Amazon Redshift B. Amazon DynamoDB C. Amazon ElastiCache D. Amazon Aurora

Answer - A. The AWS documentation mentions the following: Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud. You can start with just a few hundred gigabytes of data and scale to a petabyte or more. This enables you to use your data to acquire new insights for your business and customers. For more information on AWS Redshift, please refer to the below URL: http://docs.aws.amazon.com/redshift/latest/mgmt/welcome.html Choices B, C and D are incorrect. Amazon Redshift is the only data warehousing service among the given choices.

Which of the following statements about scalability are most accurate? Choose 2 answers from the options given below A. A scalable system diverts traffic based on demand B. A scalable system diverts traffic to instances with the least load C. A scalable system diverts traffic across multiple regions D. A scalable system diverts traffic to instances with higher capacity

Answer - A and B. Scalability means scaling out (increasing the number of instances) or scaling in (decreasing the number of instances). In addition, traffic is diverted to the instance with the least load, so that it can take more of the load. Both of the above are taken care of by Auto Scaling together with Elastic Load Balancing: once Auto Scaling is enabled for your instances, traffic is automatically diverted based on demand or load. For more information, please refer to the below URLs: https://aws.amazon.com/autoscaling/ https://aws.amazon.com/elasticloadbalancing/

Which of the following is the responsibility of the customer when ensuring that data on EBS volumes is kept safe? A. Deleting the data when the device is destroyed B. Creating EBS snapshots C. Attaching volumes to EC2 Instances D. Creating copies of EBS Volumes

Answer - B Creating snapshots of EBS Volumes can help ensure that you have a backup of your EBS volume in place. For more information on EBS Snapshots, please refer to the below URL: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSSnapshots.html

Which of the following services can be used as a web application firewall in AWS? A. AWS EC2 B. AWS WAF C. AWS Firewall D. AWS Protection

Answer - B The AWS Documentation mentions the following AWS WAF is a web application firewall that lets you monitor the HTTP and HTTPS requests that are forwarded to Amazon CloudFront or an Application Load Balancer. AWS WAF also lets you control access to your content. For more information on AWS WAF, please refer to the below URL: https://docs.aws.amazon.com/waf/latest/developerguide/waf-chapter.html

A Disaster Recovery Strategy on AWS should be based on launching resources in a separate : A. Subnet B. AWS Region C. Security Group D. Amazon Virtual Private Cloud (Amazon VPC)

Answer - B The AWS Documentation mentions the following Businesses are using the AWS cloud to enable faster disaster recovery of their critical IT systems without incurring the infrastructure expense of a second physical site. The AWS cloud supports many popular disaster recovery (DR) architectures from "pilot light" environments that may be suitable for small customer workload data center failures to "hot standby" environments that enable rapid failover at scale. With data centers in Regions all around the world, AWS provides a set of cloud-based disaster recovery services that enable rapid recovery of your IT infrastructure and data. For more information on enabling AWS Disaster Recovery, please refer to the below URL: https://aws.amazon.com/disaster-recovery/ A, C and D are incorrect. A subnet, security group and VPC will not add the additional redundancy required for Disaster Recovery.

Amazon Elastic Compute Cloud (Amazon EC2) Spot instances would be most appropriate for which of the following scenarios: A. Workloads that are only run in the morning and stopped at night B. Workloads where the availability of the Amazon EC2 instances can be flexible C. Workloads that need to run for long periods of time without interruption D. Workloads that are critical and need Amazon EC2 instances with termination protection

Answer - B. The AWS documentation mentions the following: Spot Instances are a cost-effective choice if you can be flexible about when your applications run and if your applications can be interrupted. For example, Spot Instances are well-suited for data analysis, batch jobs, background processing, and optional tasks. For more information on AWS Spot Instances, please refer to the below URL: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-spot-instances.html A, C and D are incorrect. Since Spot Instances can be terminated by Amazon depending on market prices, they cannot be guaranteed to run during a specific period of need, for computing with time constraints, or for highly critical workloads.
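The cost argument for interruptible workloads can be shown with simple arithmetic; both hourly prices below are hypothetical, not real AWS rates.

```python
# Illustrative cost comparison for an interruptible batch job.
# Both hourly prices are hypothetical, not real AWS rates; Spot capacity
# is typically a steep discount off the On-Demand price.
on_demand_hourly = 0.10
spot_hourly = 0.03           # assumed market price

hours = 100                  # flexible batch workload
print(f"On-Demand: ${on_demand_hourly * hours:.2f}")  # On-Demand: $10.00
print(f"Spot:      ${spot_hourly * hours:.2f}")       # Spot:      $3.00
```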

Choose the disaster recovery deployment mechanism that has the lowest downtime. A. Pilot light B. Warm standby C. Backup Restore D. Devops

Answer - B. The AWS Documentation describes a spectrum of disaster recovery methods, from Backup & Restore through Pilot Light to Warm Standby and Multi-Site. The further you go along that spectrum, the lower the downtime for users; of the options given, Warm Standby provides the lowest downtime. For more information on disaster recovery techniques, please refer to the below URL: https://aws.amazon.com/blogs/aws/new-whitepaper-use-aws-for-disaster-recovery/

According to AWS, what is the benefit of Elasticity? A. Minimize storage requirements by reducing logging and auditing activities B. Create systems that scale to the required capacity based on changes in demand C. Enable AWS to automatically select the most cost-effective services. D. Accelerate the design process because recovery from failure is automated, reducing the need for testing

Answer - B The concept of Elasticity is the means of an application having the ability to scale up and scale down based on demand. An example of such a service is the Autoscaling service. For more information on AWS Autoscaling service, please refer to the below URL: https://aws.amazon.com/autoscaling/ A, C and D are incorrect. Elasticity will not have positive effects on storage, cost or design agility.

The main benefit of decoupling an application is to: A. Create a tightly integrated application B. Reduce inter-dependencies so failures do not impact other components C. Enable data synchronization across the web application layer. D. Have the ability to execute automated bootstrapping actions.

Answer - B. The entire concept of decoupling components is to ensure that the different components of an application can be managed and maintained separately. If all components are tightly coupled, then when one component goes down, the entire application goes down. Hence it is always better design practice to decouple application components. For more information on a decoupled architecture, please refer to the below URL: http://whatis.techtarget.com/definition/decoupled-architecture A is incorrect. Decoupling is the inverse of creating tight integration. C and D are incorrect. Decoupling has no impact on these choices.

What best describes the "Principle of Least Privilege"? Choose the correct answer from the options given below: A. All users should have the same baseline permissions granted to them to use basic AWS services. B. Users should be granted permission to access only resources they need to do their assigned job. C. Users should submit all access requests in written form so that there is a paper trail of who needs access to different AWS resources. D. Users should always have a little more access granted to them than they need, just in case they end up needing it in the future.

Answer - B The principle means giving a user account only those privileges which are essential to perform its intended function. For example, a user account for the sole purpose of creating backups does not need to install software: hence, it has rights only to run backup and backup-related applications. For more information on principle of least privilege, please refer to the following Link: https://en.wikipedia.org/wiki/Principle_of_least_privilege A, C and D are incorrect. These actions would not adhere to the Principle of Least Privilege.
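The backup-user example above can be expressed as an IAM policy document. This is a minimal sketch built as a Python dict; the specific actions chosen for the hypothetical backup user are an assumption for illustration.

```python
# A least-privilege IAM policy as a Python dict: the hypothetical backup
# user may only create and list EBS snapshots, nothing else.
import json

backup_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["ec2:CreateSnapshot", "ec2:DescribeSnapshots"],
        "Resource": "*",
    }],
}
print(json.dumps(backup_policy, indent=2))
```

Any action not explicitly allowed (installing software, launching instances, and so on) is implicitly denied, which is exactly the principle in practice.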

There is an external audit being carried out on your company. The IT auditor needs to have a log of all access to the AWS resources in the company's account. Which of the below services can assist in providing these details? A. AWS CloudWatch B. AWS CloudTrail C. AWS EC2 D. AWS SNS

Answer - B Using CloudTrail , one can monitor all the API activity conducted on all AWS services. The AWS Documentation additionally mentions the following AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting. For more information on AWS CloudTrail, please refer to the below URL: https://aws.amazon.com/cloudtrail/
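To show what the auditor would actually look at, here is a sketch of reading a single CloudTrail event record. The event below is a trimmed, hypothetical example following the general shape of the real JSON, not a verbatim record.

```python
# Sketch: reading a CloudTrail event record to see who did what, when.
# The event is a trimmed, hypothetical example of the real JSON shape.
event = {
    "eventTime": "2023-04-01T12:00:00Z",
    "eventName": "TerminateInstances",
    "userIdentity": {"type": "IAMUser", "userName": "alice"},
    "awsRegion": "us-east-1",
}

who = event["userIdentity"]["userName"]
what = event["eventName"]
print(f"{who} performed {what} at {event['eventTime']}")
# alice performed TerminateInstances at 2023-04-01T12:00:00Z
```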

Which AWS services can be used to store files? Choose 2 answers from the options given below: A. Amazon CloudWatch B. Amazon Simple Storage Service (Amazon S3) C. Amazon Elastic Block Store (Amazon EBS) D. AWS Config E. Amazon Athena

Answer - B and C The AWS documentation mentions the following Amazon S3 is object storage built to store and retrieve any amount of data from anywhere - web sites and mobile apps, corporate applications, and data from IoT sensors or devices. It is designed to deliver 99.999999999% durability, and stores data for millions of applications used by market leaders in every industry. For more information on the Simple Storage Service, please refer to the below URL: https://aws.amazon.com/s3/ Amazon Elastic Block Store (Amazon EBS) provides persistent block storage volumes for use with Amazon EC2 instances in the AWS Cloud. Each Amazon EBS volume is automatically replicated within its Availability Zone to protect you from component failure, offering high availability and durability. For more information on Amazon EBS, please refer to the below URL: https://aws.amazon.com/ebs/ Answer A is incorrect. Amazon CloudWatch is used for performance monitoring. Answer D is incorrect. AWS Config is used to audit and monitor configuration changes. Answer E is incorrect. Amazon Athena is a serverless query service used to analyze BigData stored in S3.

Which of the following are advantages of having infrastructure hosted on the AWS Cloud? Choose 2 answers from the options given below. A. Having complete control over the physical infrastructure B. Having the pay as you go model C. No Upfront costs D. Having no need to worry about security

Answer - B and C. The physical infrastructure is the responsibility of AWS, not the customer, so having complete control over it is not an advantage of moving to the AWS Cloud. And while AWS provides security mechanisms, security is a shared responsibility: the customer is still responsible for security in the cloud, so "no need to worry about security" is also incorrect.

Your company has a set of EC2 Instances hosted in AWS. There is a requirement to create snapshots from the EBS volumes attached to these EC2 Instances in another geographical location. As per this requirement, where would you create the snapshots? A. In another Availability Zone B. In another data center C. In another Region D. In another Edge location

Answer - C Regions correspond to different geographic locations in AWS. For more information on Regions and Availability Zones in AWS, please refer to the below URL: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.RegionsAndAvailabilityZones.html

What is the AWS feature that enables fast, easy, and secure transfers of files over long distances between your client and your Amazon S3 bucket? A. File Transfer B. HTTP Transfer C. Amazon S3 Transfer Acceleration D. S3 Acceleration

Answer - C. The AWS Documentation mentions the following: Amazon S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket. Transfer Acceleration takes advantage of Amazon CloudFront's globally distributed edge locations. As the data arrives at an edge location, data is routed to Amazon S3 over an optimized network path. For more information on S3 Transfer Acceleration, please visit the Link: http://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration.html Options A, B and D are incorrect. These features deal with transferring data but not between clients and an S3 bucket.
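In practice, Transfer Acceleration is used by pointing the client at the bucket's accelerate endpoint instead of the regular regional endpoint. A minimal sketch; the bucket name and key are hypothetical.

```python
# Sketch: the S3 Transfer Acceleration endpoint for a bucket.
# Bucket name and key are hypothetical examples.
def accelerate_url(bucket: str, key: str) -> str:
    """Build the accelerate-endpoint URL for an S3 object."""
    return f"https://{bucket}.s3-accelerate.amazonaws.com/{key}"

print(accelerate_url("example-bucket", "videos/demo.mp4"))
# https://example-bucket.s3-accelerate.amazonaws.com/videos/demo.mp4
```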

There is a requirement to host a database server for a minimum period of one year. Which of the following would result in the least cost? A. Spot Instances B. On-Demand C. No Upfront costs Reserved D. Partial Upfront costs Reserved

Answer - D. If the database is going to be used for a minimum of one year, then it is better to get Reserved Instances. You save on costs, and with the Partial Upfront option you get a better discount. For more information on AWS Reserved Instances, please visit the Link: https://aws.amazon.com/ec2/pricing/reserved-instances/ A is incorrect. Spot Instances can be terminated with fluctuations in market prices; unless the question specifies a use case where high availability is not a requirement, this cannot be assumed. B is incorrect. On-Demand is not the most cost-efficient solution. C is incorrect. No upfront payment is required, however it is a costlier option than Partial/All Upfront payment. For more information on Reserved Instance payment options, please check the AWS Docs: https://docs.aws.amazon.com/aws-technical-content/latest/cost-optimization-reservation-models/reserved-instance-payment-options.html Note: Reserved Instances do not renew automatically; when they expire, you can continue using the EC2 instance without interruption, but you are charged On-Demand rates until you terminate the instance or purchase new Reserved Instances that match the instance attributes. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-reserved-instances.html
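To make the comparison concrete, here is an illustrative one-year cost calculation. All rates below are hypothetical placeholders, not real AWS prices; the point is only the relative ordering of the purchasing options.

```python
# Illustrative one-year cost comparison; all rates are hypothetical,
# not real AWS prices. Partial Upfront Reserved is cheapest here.
HOURS_PER_YEAR = 8760

on_demand = 0.10 * HOURS_PER_YEAR
no_upfront_ri = 0.07 * HOURS_PER_YEAR
partial_upfront_ri = 300 + 0.035 * HOURS_PER_YEAR  # upfront fee + lower hourly

print(f"On-Demand:          ${on_demand:.2f}")           # $876.00
print(f"No Upfront RI:      ${no_upfront_ri:.2f}")       # $613.20
print(f"Partial Upfront RI: ${partial_upfront_ri:.2f}")  # $606.60
```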

Which tool can you use to forecast your AWS spending? A. AWS Organizations B. Amazon Dev Pay C. AWS Trusted Advisor D. AWS Cost Explorer

Answer - D The AWS Documentation mentions the following Cost Explorer is a free tool that you can use to view your costs. You can view data up to the last 13 months, forecast how much you are likely to spend for the next three months, and get recommendations for what Reserved Instances to purchase. You can use Cost Explorer to see patterns in how much you spend on AWS resources over time, identify areas that need further inquiry, and see trends that you can use to understand your costs. You also can specify time ranges for the data, and view time data by day or by month. For more information on the AWS Cost Explorer, please refer to the below URL: http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-explorer-what-is.html A, B and C are incorrect. These services do not relate to billing and cost.

A company needs to know which user was responsible for terminating several critical Amazon Elastic Compute Cloud (EC2) Instances. Where can the customer find this information? A. AWS Trusted Advisor B. Amazon EC2 instance usage report C. Amazon CloudWatch D. AWS CloudTrail logs

Answer - D. Using CloudTrail, one can monitor all the API activity conducted on all AWS services. The AWS Documentation additionally mentions the following: AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting. For more information on AWS CloudTrail, please refer to the below URL: https://aws.amazon.com/cloudtrail/ Answers A, B and C are incorrect. CloudTrail, not CloudWatch, records which user performed an API action such as terminating an instance; CloudWatch monitors resource metrics, not user activity.

As per the AWS Acceptable Use Policy, penetration testing of EC2 instances: A. May be performed by AWS, and will be performed by AWS upon customer request. B. May be performed by AWS, and is periodically performed by AWS. C. Are expressly prohibited under all circumstances. D. Can be performed by the customer, provided they work with the list of services mentioned by AWS. E. May be performed by the customer on their own instances, only if performed from EC2 instances.

Answer - D. You do not need prior authorization from AWS before performing penetration tests against the permitted services. Please refer to the below URL for more details: https://aws.amazon.com/security/penetration-testing/ A, B and C are incorrect. AWS states the following: "Permitted Services - You're welcome to conduct security assessments against AWS resources that you own if they make use of the services listed below. We're constantly updating this list: Amazon EC2 instances, NAT Gateways, and Elastic Load Balancers; Amazon RDS; Amazon CloudFront; Amazon Aurora; Amazon API Gateways; AWS Lambda and Lambda@Edge functions; Amazon Lightsail resources; Amazon Elastic Beanstalk environments."

While making changes to AWS resources e.g. adding a new Security Group Ingress rule, I need to capture & record all these changes that will be helpful during an audit. Which of the following AWS service helps me do that? A. AWS Trusted Advisor B. AWS CloudWatch C. AWS Config D. AWS CloudFormation

Answer: C. Option A is incorrect because AWS Trusted Advisor cannot record the details of configuration changes in the AWS account. Option B is incorrect because CloudWatch is a monitoring tool that captures different metrics like CPU utilization, memory utilization etc. Once the data is captured, it can be used for creating dashboards that display usage patterns, or alarms that automate resource creation, e.g. creating a new EC2 instance when the average CPU utilization of an Auto Scaling group goes above 70%. Option C is CORRECT. AWS Config records and captures all configuration changes done to AWS resources using the Configuration Recorder. Configuration Items created by AWS Config can be sent to S3 to be stored as log files. These log files can be retained depending on the S3 lifecycle policies defined and can be referred to during any audit. Using an automated configuration management tool helps an organization track compliance of its resources elegantly. Option D is incorrect because AWS CloudFormation is used for automating the creation of AWS resources in organizations with large, complex infrastructure that may be difficult to create manually. https://aws.amazon.com/config/ https://youtu.be/kcwy_DWU8ao
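Conceptually, each AWS Config record captures the difference between a resource's previous and current configuration. Here is a toy model of that diff for the Security Group ingress-rule change in the question; the security-group data is hypothetical.

```python
# Toy model of what AWS Config records: a diff between the previous and
# current configuration of a resource. The security-group data is hypothetical.
def config_diff(before: dict, after: dict) -> dict:
    """Return keys whose values changed, mapped to (old, new)."""
    return {k: (before.get(k), after.get(k))
            for k in before.keys() | after.keys()
            if before.get(k) != after.get(k)}

before = {"GroupId": "sg-123", "IngressPorts": [443]}
after = {"GroupId": "sg-123", "IngressPorts": [443, 22]}  # new ingress rule
print(config_diff(before, after))  # {'IngressPorts': ([443], [443, 22])}
```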

AWS Organizations help manage multiple accounts effectively in a large enterprise. Which of the following statements related to AWS Organizations are correct? (Select TWO.) A. An Organizational Unit(OU) can have only one parent. B. An account can be a member of multiple Organizational Units (OU). C. An SCP policy only impacts a particular AWS account even if it is applied at the root account. D. Organizational level policies are known as Service Control Policies. E. Service Control Policies (SCPs) can only allow actions instead of deny actions.

Answers: A, D. Option A is CORRECT. An Organizational Unit (OU) can have only a single parent: it can be a child of either the root or another OU, but not both. Option B is incorrect since an account can belong to only one OU. Option C is incorrect. A policy applied at the root is applied throughout the Organization, i.e. to all its OUs and their accounts. A policy applied at the OU level applies to all OUs and accounts under that OU. A policy applied at the account level applies to only that account. Option D is CORRECT. AWS Organizations automates the creation of AWS accounts, OUs and their hierarchy, and applies Service Control Policies (SCPs) at the OU level. SCPs differ from IAM in that they can be applied at the Organization level: they override any IAM policies defined at the account level and may restrict what an IAM policy grants. AWS Organizations does not remove the need for IAM; it complements IAM by consolidating and centrally managing permissions. AWS Organizations is not an authority for granting permissions, but it is an authority to approve or disapprove permissions granted by IAM. Option E is incorrect. SCPs can be configured to allow or deny services and actions. https://docs.aws.amazon.com/organizations/latest/userguide/orgs_introduction.html https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html
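The SCP/IAM interaction can be sketched with sets: an action is effectively allowed only if both the SCP and the account's IAM policy allow it, so the SCP acts as an outer permission boundary. The action lists below are hypothetical.

```python
# Sketch of SCP/IAM interaction: an action is effectively allowed only if
# BOTH the SCP and the IAM policy allow it. Action lists are hypothetical.
scp_allowed = {"ec2:RunInstances", "s3:GetObject"}
iam_allowed = {"s3:GetObject", "s3:PutObject"}

effective = scp_allowed & iam_allowed  # intersection of the two
print(effective)  # {'s3:GetObject'}
```

Note how `s3:PutObject` is granted by IAM but blocked by the SCP, while `ec2:RunInstances` is permitted by the SCP but never granted by IAM; neither is effectively allowed.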

Which of the following is WRONG for NoSQL databases? (Select TWO.) A. They are not relational. B. They need to have a well defined schema. C. DynamoDB Transactions is NOT atomicity, consistency, isolation, and durability (ACID) compliant. D. NoSQL databases are horizontally scalable. E. A patient's record in a hospital system with changing data for every visit is a good candidate to be modelled using a NoSQL database.

Answers: B, C Option A is incorrect since NoSQL databases are not relational. They support data that is semi-structured or unstructured, as compared to the structured nature of relational databases like Oracle and MySQL. Option B is CORRECT. NoSQL databases do not require a predefined schema like a relational database does (e.g. a record of type Book in a relational database has a fixed set of attributes defining a schema, such as ID, Name, Description, Author). Not defining a rigid schema gives NoSQL databases the flexibility to support semi-structured and unstructured data. Option C is CORRECT. DynamoDB transactions do provide developers atomicity, consistency, isolation, and durability (ACID) across one or more tables within a single AWS account and region. The details can be found in https://aws.amazon.com/cn/blogs/aws/new-amazon-dynamodb-transactions/. Option D is incorrect. NoSQL databases usually run in compute node clusters with data partitioned across the nodes. Partitioning happens automatically as the database grows, resulting in horizontal scaling. Option E is incorrect. A patient's medical record may be updated by multiple people during hospital visits, e.g. billing information, medicines, blood pressure, height, weight, etc. Defining a person's medical history in a rigid structured format would be impractical and inefficient. Another way to look at a patient's medical record is as a set of documents, with a new document added during every visit. https://youtu.be/eKyS9rvbj40 https://aws.amazon.com/products/databases/
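The document-style flexibility described in Option E can be sketched in a few lines of Python (the patient attributes here are invented for illustration):

```python
# Sketch: two visit records for the same patient stored as documents.
# Unlike rows in a relational table, each record can carry a different
# set of attributes -- no predefined schema is required.
visit_1 = {"patient_id": "P-1001", "date": "2023-01-10",
           "bp": "120/80", "weight_kg": 72}
visit_2 = {"patient_id": "P-1001", "date": "2023-06-02",
           "medicines": ["ibuprofen"], "billing": {"amount": 150.0}}

# The two documents share only the identifying attributes.
common = set(visit_1) & set(visit_2)
print(sorted(common))  # ['date', 'patient_id']
```

A relational table would force both visits into the same column set; a document store simply accepts each record as written.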

During an organization's information systems audit, the administrator is requested to provide a dossier of security and compliance reports as well as online service agreements that exist between the organization and AWS. Which service can they utilize to acquire this information? A. AWS Artifact B. AWS Resource Center C. AWS Service Catalog D. AWS Directory Service

Correct Answer - A AWS Artifact is a comprehensive resource center providing access to AWS' auditor-issued reports as well as security and compliance documentation from several renowned independent standards organizations. https://aws.amazon.com/artifact/ Option B. is INCORRECT because the AWS Resource Center is a repository of tutorials, whitepapers, digital training and project use cases that aid in learning the core concepts of Amazon Web Services. https://aws.amazon.com/getting-started/ Option C. is INCORRECT because AWS Service Catalog allows organizations to create and manage catalogs of IT services that are approved for use on AWS. IT service catalogs can include multi-tiered application architectures. https://docs.aws.amazon.com/servicecatalog/latest/adminguide/introduction.html Option D. is INCORRECT because AWS Directory Service is an AWS tool that provides multiple ways to use Amazon Cloud Directory and Microsoft Active Directory with other AWS services. https://docs.aws.amazon.com/directoryservice/latest/admin-guide/what_is.html

An administrator would like to prepare a report that will be presented to the auditing team. The report is meant to depict that the organizations' cloud infrastructure has followed the widely accepted industry standards of deployment, maintenance and monitoring. Which tool can they use to assist them? A. AWS CloudTrail B. AWS Trusted Advisor C. AWS Organizations D. AWS Total Cost of Ownership

Correct Answer - A AWS CloudTrail is a service that primarily supports governance, compliance, operational auditing, and risk auditing of your AWS account. CloudTrail logs, continuously monitors, and retains account activity related to actions across your AWS infrastructure. CloudTrail provides event history of AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting. https://aws.amazon.com/cloudtrail/ Option B. is INCORRECT because AWS Trusted Advisor offers best-practice checks and recommendations across five categories: cost optimization, security, fault tolerance, performance, and service limits. It advises on optimization rather than recording the account activity history needed to evidence compliance to auditors. https://aws.amazon.com/premiumsupport/technology/trusted-advisor/ Option C. is INCORRECT because AWS Organizations helps you centrally govern your environment as you grow and scale your workloads on AWS. Whether you are a growing startup or a large enterprise, Organizations helps you centrally manage billing; control access, compliance, and security; and share resources across your AWS accounts. Using AWS Organizations, you can automate account creation, create groups of accounts to reflect your business needs, and apply policies to these groups for governance. You can also simplify billing by setting up a single payment method for all of your AWS accounts. https://aws.amazon.com/organizations/ Option D. is INCORRECT because AWS Total Cost of Ownership (TCO) helps in assessing the cost of on-premises infrastructure versus using cloud services. It reduces the need to invest in large capital expenditures by providing a pay-as-you-go model that lets you invest in the capacity you need and use it only when the business requires it. TCO calculators allow for the estimation of the cost savings when using AWS and provide a detailed set of reports that can be used in executive presentations. https://aws.amazon.com/tco-calculator/
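A CloudTrail event record is JSON with documented field names such as eventTime, eventSource and eventName. A small Python sketch of summarizing one such record for an audit report (the values here are made up):

```python
# A made-up event record using the documented CloudTrail field names.
event = {
    "eventTime": "2023-05-01T12:00:00Z",
    "eventSource": "ec2.amazonaws.com",
    "eventName": "TerminateInstances",
    "awsRegion": "us-east-1",
    "userIdentity": {"type": "IAMUser", "userName": "alice"},
}

# Build a one-line audit summary: who did what, where, and when.
summary = (f"{event['eventTime']} {event['userIdentity']['userName']} "
           f"called {event['eventName']} on {event['eventSource']}")
print(summary)
```

In practice such records are delivered to an S3 bucket or queried through the CloudTrail event history; the field names above follow the documented record format.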

A group of developers for a startup company store their source code and binary files on a shared open source repository platform which is publicly accessible over the internet. They have embarked on a new project in which their client requires high confidentiality and security on all development assets. What AWS service can the developers use to meet the requirement? A. AWS CodeCommit B. AWS CodeDeploy C. AWS Lambda D. AWS CodeStar

Correct Answer - A AWS CodeCommit is a managed source control service that can be used as a data store to store source code, binaries, scripts, HTML pages and images which are accessible over the internet. CodeCommit encrypts files in transit and at rest which fulfills the additional client requirement (high confidentiality & security) mentioned in the question. Also, CodeCommit works well with Git tools, and other existing CI/CD tools. https://aws.amazon.com/codecommit/ Option B. is INCORRECT because AWS CodeDeploy is a deployment service that automates application deployments to Amazon EC2 instances, on-premises instances, serverless Lambda functions, or Amazon ECS services. https://docs.aws.amazon.com/codedeploy/latest/userguide/welcome.html Option C. is INCORRECT because AWS Lambda will allow the developers in the scenario to run code without provisioning or managing servers. The company would pay only for the compute time consumed - there would be no charge when your code is not running. https://aws.amazon.com/lambda/ Option D. is INCORRECT because AWS CodeStar provides a unified user interface, enabling you to easily manage your software development activities in one place. With AWS CodeStar, you can set up your entire continuous delivery toolchain in minutes, allowing you to start releasing code faster. AWS CodeStar makes it easy for your whole team to work together securely, allowing you to easily manage access and add owners, contributors, and viewers to your projects. https://aws.amazon.com/codestar/

An administrator would like to efficiently automate the replication and deployment of a specific software configuration existent on one EC2 instance onto four hundred others. Which AWS service is BEST suited for this implementation? A. AWS OpsWorks B. AWS Beanstalk C. AWS Launch Configuration D. AWS Auto-scaling

Correct Answer - A AWS OpsWorks provides a fully managed configuration automation and management service for Chef and Puppet. These platforms allow for the use of code to automate the configuration of EC2 instances, including replication as stated in the scenario. With Chef and Puppet, OpsWorks allows for the automation of how servers are configured, deployed, and managed across Amazon EC2 instances or on-premises compute environments. Option B. is INCORRECT because AWS Elastic Beanstalk is a service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS. It will not be able to autonomously replicate a specific software service configuration onto a multitude of EC2 instances. https://aws.amazon.com/elasticbeanstalk/ Option C. is INCORRECT because a Launch Configuration is primarily an instance configuration template that an Auto Scaling group uses to launch EC2 instances. It is the blueprint of the Auto Scaling group and determines the configuration output of each instance. https://docs.aws.amazon.com/autoscaling/ec2/userguide/LaunchConfiguration.html Option D. is INCORRECT because Auto-scaling is responsive to preset threshold levels in a deployment environment. It does not offer a fully managed functionality that allows for the mass replication of a specific configuration as the scenario outlines. Based on CloudWatch parameters, it monitors applications and automatically adjusts capacity to maintain steady, predictable performance at the lowest possible cost. https://aws.amazon.com/autoscaling/

What can be termed as a user-defined label with a key-value pair of variable character length, assigned to AWS resources as metadata for administration and management purposes? A. Resource Tag B. Resource Group C. Resource Flag D. Tag key

Correct Answer - A AWS Resource tags are a critical component when architecting in the cloud; they create an identifying mechanism for the user to group, classify and order all their provisioned resources appropriately. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html Option B. is INCORRECT because AWS Resource Groups enable the ordering of AWS resources into logical groupings. Resources can be ordered by application, environment or software component. Option C. is INCORRECT because there is no AWS feature called a resource flag; the option is inaccurate. Option D. is INCORRECT because a tag key is only part of what makes up a resource tag; each resource tag has a key and a value string.
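A short Python sketch of the idea: tags are key-value string pairs, and grouping resources is just filtering on a tag value (the resource IDs and tag names here are invented):

```python
# Sketch: grouping provisioned resources by a user-defined "Environment" tag.
# Tags are simple key-value string pairs attached to resources as metadata.
resources = [
    {"id": "i-0aaa", "tags": {"Environment": "prod", "Team": "web"}},
    {"id": "i-0bbb", "tags": {"Environment": "dev"}},
    {"id": "vol-0ccc", "tags": {"Environment": "prod"}},
]

# Select every resource tagged Environment=prod.
prod = [r["id"] for r in resources if r["tags"].get("Environment") == "prod"]
print(prod)  # ['i-0aaa', 'vol-0ccc']
```

This is exactly the mechanism Resource Groups and cost-allocation reports build on: a consistent tagging scheme makes provisioned resources filterable.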

Which AWS service gives the user the ability to group AWS resources across different AWS Regions by application and then collectively view their operational data for monitoring purposes? A. Systems Manager B. Management Console C. Resource Groups D. Resource Access Manager (AWS RAM)

Correct Answer - A AWS Systems Manager allows users to gain control of their AWS resources by unifying services into a single user interface, in which they can view, automate and monitor operational tasks. https://aws.amazon.com/systems-manager/ https://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html Option B. is incorrect because the Management Console is a web-based graphical user interface that users interact with when administering AWS services and resources. https://docs.aws.amazon.com/awsconsolehelpdocs/latest/gsg/getting-started.html?id=docs_gateway#learn-whats-new Option C. is incorrect because Resource Groups are a collection of AWS resources within a single AWS Region. In the scenario, the AWS resources are in different AWS Regions. https://docs.aws.amazon.com/ARG/latest/userguide/welcome.html Option D. is incorrect because Resource Access Manager (AWS RAM) allows users to share resources with other AWS accounts or via AWS Organizations. AWS RAM can be used to collate a set of AWS resources across multiple AWS accounts in order to share capacity. https://docs.aws.amazon.com/ram/latest/userguide/what-is.html

What is the best-suited file storage option for an administrator looking to deploy shared file access, Linux-based workloads that will require petabyte-scale data stores? A. Amazon EFS B. Amazon S3 C. AWS Snowball D. Amazon EBS

Correct Answer - A Amazon Elastic File System (EFS) is the best-suited file storage option for the described scenario as it is designed for shared file access as well as scaling to petabyte-scale data stores. https://aws.amazon.com/efs/when-to-choose-efs/ Option B. is incorrect because Amazon S3 is an object data store which is not suitable for deploying Linux-based workloads as the scenario outlines. Option C. is incorrect because AWS Snowball is a data transport and migration solution which is not suitable for deploying shared file access builds. https://aws.amazon.com/snowball/ Option D. is incorrect because Amazon Elastic Block Store is a block storage service accessed by an EC2 instance but without the capability of shared file access. Applications that utilize persistent or dedicated block storage for a single instance can use Amazon EBS storage. https://aws.amazon.com/efs/when-to-choose-efs/

A business analyst would like to move away from creating complex database queries and static spreadsheets when generating regular reports for high-level management. They would like to dynamically publish insightful, graphically appealing reports with interactive dashboards. Which service can they use to accomplish this? A. Amazon QuickSight B. Business intelligence on Amazon Redshift C. Amazon CloudWatch dashboards D. Amazon Athena integrated with Amazon Glue

Correct Answer - A Amazon QuickSight is the most appropriate service to utilize in the scenario. It is a fully managed service that allows for insightful business intelligence reporting, with creative methods of data delivery including graphical and interactive dashboards. QuickSight includes machine learning which allows users to discover inconspicuous trends and patterns in their datasets. https://aws.amazon.com/quicksight/ Option B. is INCORRECT because Amazon Redshift is a data warehouse service and will not meet the requirements of interactive dashboards and dynamic means of delivering reports. Option C. is INCORRECT because Amazon CloudWatch dashboards will not accomplish the requirements of the scenario; they are used to monitor AWS system resources and infrastructure services, though they are customizable and present information in a graphical manner. Option D. is INCORRECT because Amazon Athena is a query service that allows for easy data analysis in Amazon S3 using standard SQL. This service will not meet the requirements of the scenario.

In Amazon S3, what is the difference between lifecycle policies and intelligent tiering? A. Lifecycle policies are not dependent on access patterns as is the case with intelligent tiering, instead they are preconfigured with a transition rule. B. Intelligent tiering is an object storage class which is not dependent on access patterns, it uses a pre-configured transition rule. C. When transitioning objects into different storage classes, intelligent tiering is automatic whilst lifecycle policies have to be manually triggered. D. Lifecycle policies cannot be configured to permanently delete objects from an S3 bucket whilst intelligent tiering can do so if versioning is turned on.

Correct Answer - A Within Amazon S3, lifecycle policies are used to automatically transition objects through different storage classes in accordance with a preconfigured rule. This rule typically moves the object regardless of how frequently it is accessed. https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html Option B. is incorrect because intelligent tiering uses access patterns when determining the transitioning action. https://aws.amazon.com/s3/storage-classes/ Option C. is incorrect because lifecycle policies are triggered automatically when the 'days after creation' period lapses. Option D. is incorrect because lifecycle policies can indeed be configured to permanently delete objects. Intelligent tiering cannot be configured to permanently delete objects even if versioning is turned on for the objects.
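A minimal lifecycle configuration sketch, in the shape accepted by S3's put_bucket_lifecycle_configuration API (the prefix and day counts are arbitrary examples):

```python
# A lifecycle rule sketch: transition objects under logs/ to GLACIER
# 90 days after creation and delete them after 365 days, regardless of
# how often they are accessed -- the key contrast with intelligent tiering.
lifecycle = {
    "Rules": [
        {
            "ID": "archive-then-expire",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 365},
        }
    ]
}

rule = lifecycle["Rules"][0]
print(rule["Transitions"][0]["StorageClass"], rule["Expiration"]["Days"])
```

Note the rule is driven purely by object age ("Days" since creation), with no reference to access patterns.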

Which statements accurately distinguish AWS Cloud9 from AWS Lambda? (Select TWO.) A. With AWS Cloud9, developers can share a development environment in real time with just a few clicks and pair program together. This is not possible with AWS Lambda. B. AWS Lambda can be used to create functions that run in the AWS Cloud9 IDE. C. AWS Lambda functions are dependent on the Amazon API Gateway whilst the AWS Cloud9 IDE can write, run, and debug any code. D. AWS Cloud9 provides an online platform to write, run, and debug code from the browser, whilst AWS Lambda functions can be installed locally. E. Without locally installing an integrated development environment, AWS Cloud9 will not run.

Correct Answer - A, B The AWS Cloud9 IDE has a real-time collaboration function that allows developers to share environments among teams and live code together. It is accurate that functions written for AWS Lambda can be run and debugged in the AWS Cloud9 IDE. https://docs.aws.amazon.com/cloud9/latest/user-guide/lambda-functions.html Option C. is INCORRECT because functions written in AWS Lambda can still be invoked or triggered by a service without going through an API gateway. Option D. is INCORRECT because AWS Lambda is a serverless service that does not require any resources to be installed locally on a server or desktop. Option E. is INCORRECT because AWS Cloud9 is an integrated development environment that runs on Amazon infrastructure in the cloud and is fully functional without the need to run any resources locally on any server or desktop.

Which of the following features can be used to preview changes to be made to an AWS resource which will be deployed using the AWS CloudFormation template? A. AWS CloudFormation Drift Detection B. AWS CloudFormation Change Sets C. AWS CloudFormation Stack Sets D. AWS CloudFormation Intrinsic Functions

Correct Answer - B An AWS CloudFormation Change Set can be used to preview the changes that will be made to AWS resources before a stack update is executed. Option A is incorrect as AWS CloudFormation Drift Detection is used to detect changes made to resources outside of CloudFormation templates. It cannot preview changes that will be made by CloudFormation templates. Option C is incorrect as Stack Sets are groups of stacks that are managed together. Option D is incorrect as Intrinsic Functions are used for assigning values to properties in CloudFormation templates. https://aws.amazon.com/cloudformation/features/
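Conceptually, a change set previews the difference between the deployed stack's template and the submitted one. A toy Python sketch of that diffing idea (not the real service logic; the resource names are invented):

```python
# Toy sketch of what a change-set preview conceptually reports: the
# difference between the currently deployed template's resources and
# the resources in the updated template.
current = {"WebServer": "AWS::EC2::Instance", "Logs": "AWS::S3::Bucket"}
updated = {"WebServer": "AWS::EC2::Instance", "Queue": "AWS::SQS::Queue"}

adds = sorted(set(updated) - set(current))      # resources to be created
removes = sorted(set(current) - set(updated))   # resources to be deleted
print("Add:", adds, "Remove:", removes)
```

The real service also reports modifications and possible replacements per resource, but the value is the same: you see the planned actions before anything is executed.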

An administrator noticed a consistent spike in processor and memory activity on the organization's web servers that host a large web application after installing Secure Sockets Layer/Transport Layer Security (SSL/TLS) certificates for security. This increased activity degraded the web application's responsiveness. What is the best-practice solution to resolve the situation? A. Migrate the web application onto m4.4xlarge EC2 instances with robust compute, processing and networking capability. B. Offload the SSL/TLS from running locally to AWS CloudHSM. C. Create an auto-scaling group that scales out as traffic to the web application cluster increases. D. Create a custom AWS CloudWatch metric to monitor the instance resources, by writing a script in the AWS Command Line Interface (AWS CLI).

Correct Answer - B The AWS CloudHSM service can take over SSL/TLS processing for the web servers. Secure Sockets Layer (SSL) and Transport Layer Security (TLS) are used to confirm the identity of web servers and establish secure HTTPS connections over the Internet. Using CloudHSM for this processing reduces the burden on the organization's web servers and provides extra security by storing the web server's private key in CloudHSM. https://aws.amazon.com/cloudhsm/ Option A. is INCORRECT because opting for EC2 instances with larger resource capacities is not best practice. During periods of low activity, these resources will sit idle and therefore not be cost-effective. Cloud infrastructure and resources should scale in and out on demand, thereby aligning service costs to actual usage. https://aws.amazon.com/ec2/instance-types/ Option C. is INCORRECT because the degraded web application is not due to increased traffic, therefore the auto-scaling group will not solve the situation since it responds to the web application traffic metric on the web servers. https://aws.amazon.com/autoscaling/ Option D. is INCORRECT because creating a custom AWS CloudWatch metric to monitor the instance's resources will not reduce the high workload on the web servers in any way. It will, however, give visual output to the administrator to observe usage patterns. From the scenario, such monitoring is already in place. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/mon-scripts.html

An administrator would like to automate the creation of new AWS accounts for the research and development department of the organization where new workloads need to be spun-up promptly and categorized into groups. How can this be achieved efficiently? A. Use of AWS CloudFormation would be sufficient B. Use of AWS Organizations C. Using the AWS API to programmatically create each account via command line interface D. AWS Identity Access Management (IAM)

Correct Answer - B AWS Organizations allows the user to automate the creation of new AWS accounts when they need to quickly launch new workloads. The administrator can add these new accounts to user-defined groups in an organization for easy categorization. For example, you can create separate groups to categorize development and production accounts, and then apply a Service Control Policy (SCP) to the production group allowing access only to the AWS services required by production workloads. https://aws.amazon.com/organizations/ Option A. is INCORRECT because AWS CloudFormation does not aid in automated AWS account creation. AWS CloudFormation provides a common language for the administrator to describe and provision all the infrastructure resources in their cloud environment. CloudFormation allows the administrator to use a simple text file to model and provision, in an automated and secure manner, all the resources needed for applications across all regions and accounts. This file serves as a single source of truth for your cloud environment. https://aws.amazon.com/cloudformation/ Option C. is INCORRECT because using the AWS API to programmatically create each account via the command-line interface is feasible but not efficient. The AWS Command Line Interface (CLI) is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts. https://aws.amazon.com/cli/ Option D. is INCORRECT because using AWS Identity Access Management (IAM) to fulfill the task is inefficient and tedious. AWS Identity and Access Management (IAM) enables you to manage access to AWS services and resources securely. Using IAM, you can create and manage AWS users and groups, and use permissions to allow and deny their access to AWS resources. IAM is a feature of your AWS account offered at no additional charge. You will be charged only for use of other AWS services by your users. https://aws.amazon.com/iam/

A startup company for social media apps would like to grant freelance developers temporary access to its Lambda functions setup on AWS. These developers would be signing-in via Facebook authentication. Which service is most appropriate to use in securely granting access? A. Create user credentials using Identity Access Management, IAM B. Use Amazon Cognito for web-identity federation C. Create temporary access roles using IAM D. Use a third-party Web ID, federated access provider

Correct Answer - B The Amazon Cognito web identity federation service acts as a broker that allows successfully authenticated users access to AWS resources. After successful authentication on platforms such as Facebook, LinkedIn or Google, users are awarded temporary credentials from Amazon Cognito, thereby gaining temporary access. https://aws.amazon.com/cognito/ Option A. is INCORRECT because the access required is temporary and via a social media sign-in, not directly onto the AWS environment. An Identity Access Management (IAM) user would be granted access directly using AWS-specified credentials. Option C. is INCORRECT because IAM user credentials will not authenticate on Facebook; they are confined to logging onto the AWS environment in this instance. Option D. is INCORRECT because there is no need to take on a third-party Web ID federated access provider, since Amazon has the Cognito service to perform that function.

Which AWS service can be deployed to enhance read performance for applications while reading data from a NoSQL database? A. Amazon Route 53 B. Amazon DynamoDB Accelerator C. Amazon CloudFront D. AWS Greengrass

Correct Answer - B Amazon DynamoDB Accelerator (DAX) is a caching service for DynamoDB which can be deployed in a VPC in a region where DynamoDB is deployed. For read-heavy applications, DAX can be deployed to increase throughput by providing in-memory caching. Option A is incorrect because Amazon Route 53 is the AWS DNS service and cannot improve the performance of DynamoDB. Option C is incorrect because Amazon CloudFront is a global content delivery network that cannot be applied to a DynamoDB table. Option D is incorrect because AWS Greengrass is software that extends AWS to edge devices; it is not a database caching service. For more information on caching solutions with AWS, refer to the following URL: https://aws.amazon.com/caching/aws-caching/
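The benefit of an in-memory cache for read-heavy workloads can be sketched with a toy read-through cache in Python (this illustrates the caching pattern only, not DAX's actual implementation):

```python
# Toy in-memory read-through cache illustrating why DAX speeds up
# read-heavy workloads: repeated reads of the same key are served from
# memory instead of the (simulated) database table.
db = {"user#1": {"name": "Ann"}}   # stand-in for a DynamoDB table
cache = {}
db_reads = 0

def get_item(key):
    global db_reads
    if key in cache:               # cache hit: no database read
        return cache[key]
    db_reads += 1                  # cache miss: read through to the table
    cache[key] = db[key]
    return cache[key]

for _ in range(5):
    get_item("user#1")
print(db_reads)  # 1 -- only the first read hit the database
```

Five reads of the same item cost a single database read; the other four are served from memory, which is the throughput gain DAX provides transparently.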

A building access security system is generating inaccurate logs because users often share access tags when entering the building. How can this be solved effectively? A. Encourage users not to share tags by introducing violation penalties B. Use Amazon Rekognition C. Implement Amazon SageMaker D. Implement Amazon Transcribe

Correct Answer - B Amazon Rekognition enables the uptake of imagery and video for analysis in applications. By uploading imagery or video footage to the Rekognition API, the service engine can identify and distinguish facial features, text, objects and activities. This service will meet the requirements of the scenario as an access control solution. https://docs.aws.amazon.com/rekognition/latest/dg/what-is.html Option A. is INCORRECT because though the method could deter users from sharing tags, sharing could still happen. It is not an effective way of managing user access. Option C. is INCORRECT because Amazon SageMaker is a fully managed machine learning service. Developers build and train machine learning models, then deploy them into a live hosted environment. This service alone will not meet the requirements of the scenario. Option D. is INCORRECT because Amazon Transcribe uses machine learning technologies to convert audio files of spoken words into plain text. The service will not resolve the scenario.

In Cost Optimization, what is referred to as EC2 Right Sizing? A. It is a cost-effective solution to determine the appropriate Amazon EC2 resources such as memory, processor type and storage when provisioning an instance type. B. It is a cost-saving solution that analyses data over a period of time to determine and recommend the type of Amazon EC2 instances appropriate for your workload. C. It is the scaling down or scaling up of Amazon EC2 instances and instance types to meet workload demand by maintaining only the threshold resources. D. It is a cost-saving solution that outlines the recommendations of best practice in four aspects namely cost optimization, performance, fault-tolerance and service limits.

Correct Answer - B Cost Optimization: EC2 Right Sizing utilizes managed services to execute right-sizing analysis and provide detailed recommendations for more cost-effective builds and implementations of Amazon EC2 instances. https://aws.amazon.com/solutions/cost-optimization-ec2-right-sizing/ Option A. is INCORRECT because when provisioning a new Amazon EC2 instance, the user is presented with instance types to choose from; these have varying capacities depending on the use case. Option C. is INCORRECT because it describes the mechanism of Auto Scaling and not Amazon EC2 Right Sizing. Option D. is INCORRECT because it describes the function of AWS Trusted Advisor, which outlines best-practice recommendations in five aspects, not four: cost optimization, security, performance, fault tolerance and service limits.

One of a blogger's articles has gone viral, which has resulted in a lot of traffic to their blog. This excessive amount of traffic has in turn caused a poor browsing experience for some readers. How can normal service to the blog be restored? A. Set up a Web Application Firewall (WAF) that will allow legitimate traffic and deny maliciously generated traffic. B. Set up Read Replicas on the backend RDS instance where the article resides. C. Upgrade the backend RDS instance to a non-relational database. D. Configure Multi-AZ to enhance the performance of the backend RDS instances running the blog.

Correct Answer - B Read replicas enhance database performance and durability by allowing for automated distribution of load among several database instances holding an exact copy of the parent database. In this scenario the web server will read the target article from several RDS instances in the read replica cluster, thereby getting good response times and consequently improving the browsing experience. https://aws.amazon.com/rds/details/read-replicas/ Option A. is INCORRECT because the increase in traffic is not due to malicious synthetic traffic, therefore setting up a Web Application Firewall (WAF) would not meet the requirements of the scenario. Option C. is INCORRECT because upgrading the RDS instance to a non-relational database will still result in a high number of read requests from the web server, and therefore will not meet the needs of the scenario. Option D. is INCORRECT because Multi-AZ will not enhance the performance of the RDS instance in the scenario. It would ensure that the RDS instance remains reachable even when the AWS Availability Zone it was initially provisioned in is down or unreachable. https://aws.amazon.com/rds/details/multi-az/
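How read replicas spread load can be sketched with a toy round-robin rotation in Python (the endpoint names are invented; the actual distribution depends on how the application or driver routes its reads):

```python
import itertools

# Toy sketch of how read replicas spread load: read queries rotate
# across a list of replica endpoints while all writes still go to
# the primary instance.
primary = "db-primary"
replicas = ["db-replica-1", "db-replica-2", "db-replica-3"]
rotation = itertools.cycle(replicas)

# Six incoming read requests are fanned out across the three replicas.
reads = [next(rotation) for _ in range(6)]
print(reads)
```

Each replica serves a third of the read traffic, which is why adding replicas restores response times for a read-heavy burst like the viral article.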

Which of the following routing policies can be used to provide the best performance to global users accessing a static website deployed on Amazon S3 buckets at multiple regions? A. Use Route 53 weighted routing policy. B. Use Route 53 latency routing policy. C. Use Route 53 Geoproximity routing policy. D. Use Route 53 Geolocation routing policy.

Correct Answer - B Route 53 latency routing policy can be used to provide the least latency when resources are deployed in multiple regions. This policy routes users' requests to the nearest resource based upon latency. Option A is incorrect as Route 53 weighted routing policy is used to distribute requests between multiple resources based upon weight of each. Option C is incorrect as Route 53 Geoproximity routing policy can be used to route traffic based upon the location of the resource. Option D is incorrect as Route 53 Geolocation routing policy can be used to route traffic based upon user location. For more information on Amazon Route 53 routing policy, refer to the following URL: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html
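
In effect, the latency policy answers each DNS query with the endpoint that has the lowest measured round-trip time for that user. The sketch below mimics that selection logic; the region names are real AWS regions, but the latency figures are illustrative assumptions, not real measurements.

```python
# Hypothetical latency-based routing: return the regional endpoint with
# the lowest observed round-trip time, mirroring what Route 53's latency
# policy does using AWS's own latency data. Latencies below are made up.
measured_latency_ms = {
    "us-east-1": 180,
    "eu-west-1": 45,
    "ap-southeast-1": 240,
}

def pick_endpoint(latencies):
    """Pick the region with the lowest measured latency for this user."""
    return min(latencies, key=latencies.get)

best = pick_endpoint(measured_latency_ms)  # region served to this user
```

A weighted policy, by contrast, would ignore latency entirely and split queries by the configured weights, which is why it does not help performance here.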

"S3 Intelligent-Tiering" object storage class delivers automatic cost savings by moving data between which of the two access tiers? A. Standard access and Frequent access B. Frequent access and Infrequent access C. Standard access and Infrequent access D. Standard access and One Zone-Infrequent access

Correct Answer - B S3 Intelligent-Tiering stores objects in two access tiers: one tier that is optimized for frequent access and another lower-cost tier that is optimized for infrequent access. The transitioning is based on the access patterns of the objects. https://aws.amazon.com/about-aws/whats-new/2018/11/s3-intelligent-tiering/ Option A. is incorrect because the Amazon S3 Intelligent-Tiering storage class does not transition objects into the Standard storage class. Option C. is incorrect because the Standard storage class, though similar to Frequent access, is not the same. Option D. is incorrect because the Amazon S3 Intelligent-Tiering storage class does not transition objects into the Standard or One Zone-Infrequent Access classes.

In a multi-node deployment architecture of Amazon Redshift data warehouse, what is the role of a leader node? A. To access compressed data from the underlying columns B. To receive queries and manage client connections C. Primarily act as an in-memory buffer area to improve operational efficiency D. To store data and perform computations and queries

Correct Answer - B There are two conventional architectures in which to deploy the Amazon Redshift data warehouse: as a single node, which handles both client sessions and computations, or as a multi-node deployment. The latter consists of a leader node, which receives queries and manages client connections to the data warehouse, whilst one or more compute nodes store data and perform computations. https://docs.aws.amazon.com/redshift/latest/dg/c_high_level_system_architecture.html Option A is INCORRECT because the leader node does not fetch data from the underlying tables; that is the function of the compute nodes. Option C is INCORRECT because an in-memory buffer area is a caching mechanism, which is not the function of the leader node. Option D is INCORRECT because storing data and performing computations and queries is the function of the compute nodes.
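
The leader/compute split can be pictured as a scatter-gather: the leader fans a query out to the compute nodes, each of which scans only its own data slice, and the leader combines the partial results. The toy model below illustrates this flow; the slice contents are invented, and real Redshift of course compiles and distributes actual query plans, not Python callables.

```python
# Toy model of Redshift's multi-node query flow. Each inner list stands
# for the data slice held on one compute node; values are made up.
compute_node_slices = [
    [4, 8, 15],   # slice on compute node 1
    [16, 23],     # slice on compute node 2
    [42],         # slice on compute node 3
]

def leader_node_sum(slices):
    """Leader node: dispatch a SUM to each compute node, then aggregate."""
    partials = [sum(s) for s in slices]  # each compute node scans its slice
    return sum(partials)                 # leader combines partial results

total = leader_node_sum(compute_node_slices)
```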

During a live sports event in a remote location, local photographers are required to promptly upload images into an Amazon S3 bucket for processing by the editorial team. How can this process be optimized? A. Using Cross Region replication B. Using S3 Transfer Acceleration C. Using S3 Standard as the object storage class D. Using an FTP server with a web interface

Correct Answer - B Using S3 Transfer Acceleration will allow the photographers to upload the images to a distinct URL, sending them to the nearest Edge Location, from where the files are transferred quickly and securely over the Amazon backbone network to the S3 bucket. https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration.html Option A. is incorrect because Cross Region replication is not relevant in this scenario as there is no immediate need to copy (replicate) S3 bucket contents. Option C. is incorrect because using S3 Standard as the object storage class will not optimize the process of uploading images as outlined in the scenario. Option D. is incorrect because using an FTP server will not optimize the process of uploading the files to the S3 bucket.
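
The "distinct URL" is the bucket's accelerate endpoint, which has the form `bucketname.s3-accelerate.amazonaws.com`. A minimal sketch of building it, with a hypothetical bucket name:

```python
def accelerate_endpoint(bucket):
    """Build the distinct S3 Transfer Acceleration endpoint for a bucket.

    Uploads sent here enter AWS at the nearest edge location and ride the
    AWS backbone network to the bucket's home region.
    """
    return f"https://{bucket}.s3-accelerate.amazonaws.com"

# 'event-photos' is a hypothetical bucket name for illustration.
url = accelerate_endpoint("event-photos")
```

Transfer Acceleration must also be enabled on the bucket itself before this endpoint accepts requests.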

Which TWO statements best describe the AWS Personal Health Dashboard? A. A concise representation of the general status of AWS services B. A user-specific view on the availability and performance of the AWS services underlying their AWS resources. C. A service that prompts the user with alerts and notifications on AWS scheduled activities, pending issues, and planned changes. D. A minute-by-minute update of system outages and service errors on the AWS global infrastructure E. A rolling log of all service interruptions across the AWS network, with records of incidents persisted for a year

Correct Answer - B, C The Personal Health Dashboard is a tool that shows the status of the AWS services that are running user-specific resources. It is a graphical representation that sends alerts and notifications of any pending personal issues, planned changes and scheduled activities. https://aws.amazon.com/premiumsupport/technology/personal-health-dashboard/ Option A. is INCORRECT because it describes the general overview given by the Service Health Dashboard. Option D. is INCORRECT because it describes the Service Health Dashboard. Option E. is INCORRECT because it describes the Status History of the Service Health Dashboard.

A start-up organization would like to instantaneously deploy a complex web and mobile application development environment, complete with the necessary resources and peripheral assets. How can this be achieved efficiently? A. By putting together the necessary components from AWS services, starting with EC2 instances. B. Creating AWS Lambda functions that will be triggered by single-button click to call the appropriate API of the respective resources and peripheral assets needed. C. Using AWS Quick Starts to identify and provision the appropriate AWS CloudFormation templates D. Making use of the AWS Serverless Application Repository to identify and deploy the resources needed for a web and mobile application development environment.

Correct Answer - C AWS CloudFormation can be used in conjunction with AWS Quick Starts templates, which are a repository of AWS CloudFormation templates designed by expert architects. These can include third-party resources and peripheral assets tailor-made for a single-button-push deployment of specific environments. https://aws.amazon.com/quickstart/?quickstart-all.sort-by=item.additionalFields.updateDate&quickstart-all.sort-order=desc Option A. is INCORRECT because it is cumbersome and inefficient to put together by hand the AWS services and resources necessary to deploy a complex web and mobile application development environment. Option B. is INCORRECT because it is tedious and inefficient to create an AWS Lambda function for each of the required components. Option D. is INCORRECT because the AWS Serverless Application Repository is primarily used by developers and enterprises to search for, publish and deploy serverless applications on the cloud. https://docs.aws.amazon.com/serverlessrepo/latest/devguide/what-is-serverlessrepo.html

A developer would like to automate the installation and updating of a set of applications on a series of EC2 instances and on-premises servers. Which is the most appropriate service to use to achieve this requirement? A. AWS CodeBuild B. AWS CodeCommit C. AWS CodeDeploy D. AWS CloudFormation

Correct Answer - C AWS CodeDeploy is a deployment service that allows developers to automate the installation of applications to hosts such as Amazon EC2 instances, Amazon ECS instances, serverless Lambda functions, or even on-premises servers. AWS CodeDeploy can also automate the updating of those applications. https://docs.aws.amazon.com/codedeploy/latest/userguide/welcome.html Option A. is INCORRECT because AWS CodeBuild is a fully managed service that primarily compiles source code and runs unit tests, with the output being artifacts that are ready for deployment. https://docs.aws.amazon.com/codebuild/latest/userguide/welcome.html Option B. is INCORRECT because the AWS CodeCommit service primarily serves the function of controlling software build versions as well as providing private storage for software development assets such as binary files, source code and related documentation. https://docs.aws.amazon.com/codecommit/latest/userguide/welcome.html Option D. is INCORRECT because AWS CloudFormation cannot run deployments of applications onto on-premises infrastructure. Furthermore, AWS CloudFormation automates the deployment of AWS resources, not of applications and code onto hosts.

Which of the following services can be used to automate software deployments on a large number of Amazon EC2 instances and on-premises servers? A. AWS CodePipeline B. AWS CloudFormation C. AWS CodeDeploy D. AWS Config

Correct Answer - C AWS CodeDeploy is a managed service that automates software deployment on a large scale to EC2 instances and on-premise servers. Option A is incorrect as AWS CodePipeline is a managed service for automation of delivery pipeline for application updates. Option B is incorrect as AWS CloudFormation is used to automate infrastructure provisioning & updates. Option D is incorrect as AWS Config is used to audit configurations of AWS resources. https://aws.amazon.com/codedeploy/features/?nc=sn&loc=2

Which AWS service can be used to detect & analyze performance issues related to AWS Lambda applications? A. AWS CloudTrail B. Amazon CloudWatch C. AWS X-Ray D. AWS Config

Correct Answer - C AWS X-Ray can be used to detect performance issues for AWS Lambda applications. AWS Lambda sends traces to X-Ray, which are then analyzed to generate a performance report. Option A is incorrect because AWS CloudTrail will capture API calls made by AWS Lambda. Option B is incorrect because Amazon CloudWatch will track the number of requests, the execution time per request & the errors generated, but it won't help in analyzing end-to-end application performance. Option D is incorrect because AWS Config will not help to detect performance issues with AWS Lambda. AWS Config can audit the configuration of AWS resources. For more information on debugging AWS Lambda application performance, refer to the following URL: https://docs.aws.amazon.com/lambda/latest/dg/lambda-monitoring.html
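
Conceptually, an X-Ray trace breaks one Lambda invocation into timed subsegments, one per downstream call, so the slowest component stands out. The sketch below mimics that analysis on a simplified trace document; the function name, call names and durations are invented, and the field names only loosely follow real X-Ray segment documents.

```python
# Simplified X-Ray-style trace for one Lambda invocation. All names and
# durations here are hypothetical, for illustration only.
trace = {
    "name": "my-lambda-function",
    "subsegments": [
        {"name": "DynamoDB.GetItem", "duration_ms": 12},
        {"name": "S3.PutObject", "duration_ms": 340},
        {"name": "SNS.Publish", "duration_ms": 25},
    ],
}

def slowest_call(trace_doc):
    """Find the subsegment with the longest duration (the bottleneck)."""
    return max(trace_doc["subsegments"], key=lambda s: s["duration_ms"])["name"]

bottleneck = slowest_call(trace)
```

This is exactly the kind of per-call breakdown CloudWatch metrics alone cannot give, which is why X-Ray is the right answer for end-to-end performance analysis.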

An organization utilizes a software suite, that consists of a multitude of underlying microservices hosted on the cloud. The application is frequently giving runtime errors. Which service will help in the troubleshooting process? A. AWS CloudTrail B. AWS CloudWatch C. AWS X-Ray D. Amazon Elasticsearch Service

Correct Answer - C AWS X-Ray helps developers analyze and debug production, distributed applications, such as those built using a microservices architecture. With X-Ray, developers can understand how the application and its underlying services are performing to identify and troubleshoot the root cause of performance issues and errors. X-Ray provides an end-to-end view of requests as they travel through an application, and shows a map of an application's underlying components. https://aws.amazon.com/xray/ Option A. is INCORRECT because AWS CloudTrail primarily records user or API activity, 'who has done what'. It logs, continuously monitors, and retains account activity related to actions across AWS infrastructure. CloudTrail provides event history of AWS account activity but NOT of the interaction of software microservices within a suite. https://aws.amazon.com/cloudtrail/ Option B. is INCORRECT because AWS CloudWatch's primary function is monitoring, NOT debugging. It collates data and actionable insights to monitor applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. The service, however, neither debugs nor logs errors that occur amongst software microservices within a suite. https://aws.amazon.com/cloudwatch/ Option D. is INCORRECT because Amazon Elasticsearch Service is a fully managed service that enables the secure ingest of data from any source so that it can be searched, analyzed, and visualized in real time. It does not allow for debugging or error detection. https://aws.amazon.com/elasticsearch-service/

A radio station compiles a list of the most popular songs each year. The songs are frequently fetched within 180 days. After that, the users will have a default retrieval time of 12 hours for downloading the files. The files should be stored for over 10 years. Which is the most cost-effective object storage after 180 days? A. Amazon S3 Glacier B. Amazon S3 One Zone - Infrequently Accessed C. Amazon S3 Glacier Deep Archive D. Amazon S3 Standard - Infrequently Accessed

Correct Answer - C Amazon S3 Glacier Deep Archive is the most cost-effective object storage to implement because the information will be rarely accessed and when it is accessed, its retrieval period will not be instant. Option A is incorrect because the information might not be referred to again after it was created. Amazon S3 Glacier is appropriate to a certain degree but not the most cost-effective option. Option B is incorrect because Amazon S3 One Zone - Infrequently Accessed is suitable for information that warrants a short retrieval time. In this scenario, a short retrieval time is not critical. Option D is incorrect because Amazon S3 Standard - Infrequently Accessed is not a cost-effective option since the list of songs will only be relevant once and then rarely accessed thereafter. For more information: https://aws.amazon.com/s3/faqs/#Amazon_S3_Glacier_Deep_Archive
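
The cost argument can be made concrete with per-GB monthly storage prices. The figures below are illustrative us-east-1 ballpark numbers, not current pricing (check the S3 pricing page for real rates); the point is the roughly order-of-magnitude gap between Deep Archive and the other classes once a 12-hour retrieval time is acceptable.

```python
# Illustrative (NOT current) per-GB monthly storage prices, used only to
# show why Deep Archive wins when retrieval time no longer matters.
price_per_gb_month = {
    "S3 Standard-IA": 0.0125,
    "S3 One Zone-IA": 0.0100,
    "S3 Glacier": 0.0040,
    "S3 Glacier Deep Archive": 0.00099,
}

def cheapest_class(prices):
    """Storage class with the lowest per-GB monthly price."""
    return min(prices, key=prices.get)

def monthly_cost(prices, storage_class, gb):
    """Monthly storage cost in dollars for `gb` gigabytes, to the cent."""
    return round(prices[storage_class] * gb, 2)

winner = cheapest_class(price_per_gb_month)
archive_cost = monthly_cost(price_per_gb_month, winner, 1000)  # 1 TB archived
```

Note this ignores retrieval and request fees, which is fine here since the archive is rarely accessed.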

A radio station compiles a list of the most popular songs each year and will seldom refer to the information thereafter. Listeners can get access to this information up to 24 hours after request. Which is the most cost-effective object storage for this information? A. Amazon S3 Glacier B. Amazon S3 One Zone - Infrequently Accessed C. Amazon S3 Glacier Deep Archive D. Amazon S3 Standard - Infrequently Accessed

Correct Answer - C Amazon S3 Glacier Deep Archive is the most cost-effective object storage to implement because the information will be rarely accessed and when it is accessed, its retrieval period will not be instant. https://aws.amazon.com/s3/faqs/#Amazon_S3_Glacier_Deep_Archive Option A. is incorrect because the information is relevant once, when it is created and might not be referred to again, Amazon S3 Glacier is appropriate to a certain degree but not the most cost-effective option. Option B. is incorrect because Amazon S3 One Zone - Infrequently Accessed is suitable for information that warrants a short retrieval time. In the scenario, a short retrieval time is not critical. Option D. is incorrect because Amazon S3 Standard - Infrequently Accessed is not a cost-effective option since the list of songs will only be relevant once and then rarely accessed thereafter.

When designing a highly available architecture, what is the difference between vertical scaling (scaling up) and horizontal scaling (scaling out)? A. Scaling up provides for high availability whilst scaling out brings fault-tolerance B. Scaling out is not cost-effective compared to scaling up C. Scaling up adds more resources to an instance, scaling out adds more instances D. Autoscaling groups require scaling up whilst launch configurations use scaling out

Correct Answer - C In high availability architectures, Auto Scaling is used to give elasticity to the design, where horizontal scaling (scaling out) uses Auto Scaling groups to increase processing capacity in response to changes in preset threshold parameters. It could involve adding more EC2 instances of a web server. Vertical scaling (scaling up), which can create a single point of failure, involves adding more resources to a particular instance to meet demand. https://docs.aws.amazon.com/autoscaling/plans/userguide/what-is-aws-auto-scaling.html Option A. is INCORRECT because scaling up does not provide high availability; adding more resources to one instance is often not a best practice in architecture design. Option B. is INCORRECT because scaling out is actually cost-effective since it involves adding instances only in response to demand and removing them (scaling in) when demand is low. Option D. is INCORRECT because all Auto Scaling groups require a launch configuration as the basis of what resources would be provisioned or deprovisioned to meet predefined parameters.
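
The two scaling dimensions can be sketched as simple arithmetic: both reach the same total capacity, but only scaling out spreads it across multiple instances. Capacities below are arbitrary units for illustration.

```python
def scale_up(instance_capacity, factor):
    """Vertical scaling: one bigger instance (still a single point of failure)."""
    return instance_capacity * factor

def scale_out(instance_capacity, instance_count):
    """Horizontal scaling: more identical instances behind a load balancer."""
    return instance_capacity * instance_count

vertical = scale_up(100, 4)     # one instance with 4x the resources
horizontal = scale_out(100, 4)  # four instances of the original size
```

Either path yields the same aggregate capacity, but if one of the four scaled-out instances fails, three-quarters of the capacity survives, whereas the scaled-up instance fails all at once.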

A professional educational institution maintains a dedicated web server and database cluster that hosts an exam results portal for modules undertaken by its students. The resource is idle for most of the learning cycle and becomes excessively busy when exam results are released. How can this architecture be improved to be cost-efficient? A. Configure AWS elastic load-balancing between the webserver and database cluster B. Configure RDS multi-availability zone for performance optimization C. Configure serverless architecture leveraging AWS Lambda functions D. Migrate the web servers onto Amazon EC2 Spot Instances

Correct Answer - C Leveraging AWS Lambda functions will remove the need to run a dedicated web server for the organization. During periods of high requests to the database cluster, AWS Lambda's backend infrastructure will automatically scale out resources to adequately meet the demand. AWS Lambda provides a platform to run code without provisioning or managing any servers. The organization pays only for the compute time they consume - there is no charge when the code is not running. https://aws.amazon.com/lambda/ Option A. is INCORRECT because the premise of the scenario is about cost-efficiency more than load and server responsiveness. Load balancing would manage the traffic amongst the database cluster but would not relieve the organization of maintaining a dedicated web server which only works occasionally. https://aws.amazon.com/elasticloadbalancing/ Option B. is INCORRECT because RDS Multi-AZ does not optimize the setup; rather it allows for disaster recovery, enhanced availability and durability. The scenario requires a solution that reduces the cost of maintaining the organization's infrastructure and runs it efficiently. https://aws.amazon.com/rds/details/multi-az/ Option D. is INCORRECT because migrating to Amazon EC2 Spot Instances would negatively affect the operation of the portal during periods of high traffic. Instances could be terminated mid-transaction, which would have adverse effects on the overall user experience, so this would not be a cost-effective solution. Spot Instances let you take advantage of unused EC2 capacity in the AWS cloud at up to a 90% discount compared to On-Demand prices, but AWS can reclaim the capacity with only two minutes of notice. https://aws.amazon.com/ec2/spot/

A new department has recently joined the organization and the administrator needs to compose access permissions for the group of users. Given that they have varying roles and access needs, what is the best-practice approach when granting access? A. After gathering information on their access needs, the administrator should allow every user to access the most common resources and privileges on the system. B. The administrator should grant all users the same permissions and then grant more upon request. C. The administrator should grant all users the least privilege and add more privileges to only to those who need it. D. Users should have no access and be granted temporary access on the occasions that they need to execute a task.

Correct Answer - C The best practice for AWS Identity and Access Management (IAM) is to grant the least amount of permissions on the system, enough to only execute the required tasks of the user's role. Additional permissions can be granted per user in accordance with the tasks they need to perform on the system. https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#grant-least-privilege Option A. is incorrect because granting users access to the most common resources presents security vulnerabilities, especially from those who have access to resources they do not need. Option B. is incorrect because granting users the same privileges on the system means some users might get access to resources they do not need to carry out their job functions. This presents a security risk. Option D. is incorrect because the users are part of the organization, and it would be cumbersome for the administrator to constantly create temporary access credentials for internal staff.
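
In practice, least privilege starts with a narrowly scoped IAM policy document. The sketch below builds one granting read-only access to a single bucket and nothing else; the bucket name `dept-reports` is hypothetical, and the policy structure (Version, Statement, Effect, Action, Resource) follows the standard IAM policy grammar.

```python
import json

def least_privilege_policy(bucket):
    """Minimal read-only IAM policy for one bucket; nothing else is allowed.

    Extra actions would be added per user only when their role demands them.
    """
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{bucket}",      # the bucket itself (ListBucket)
                f"arn:aws:s3:::{bucket}/*",    # the objects within it (GetObject)
            ],
        }],
    }

policy = least_privilege_policy("dept-reports")  # hypothetical bucket name
policy_json = json.dumps(policy, indent=2)       # what you'd attach in IAM
```

Anything not explicitly allowed is implicitly denied, which is what makes starting small and adding per-role permissions the safe default.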

A department in an organization has a stipulated monthly expenditure limit on their AWS account and is anxious about exceeding it. How best can they allay this concern? A. Regularly review their Billing and Cost management dashboard during the course of the month in the management console. B. Under Billing Preferences > Cost Management Preferences, they should tick the Receive Free Tier Usage Alerts checkbox. C. In AWS CloudWatch they ought to create an alarm that triggers each time the services bill surpasses the limit. D. In AWS Budgets, creating an email alert based on the budget parameters would suffice.

Correct Answer - D AWS Budgets provides a useful feature of setting custom budgets that prompt the user when their costs or usage are forecasted to exceed the budgeted amount. The forecast aspect gives a buffer period in advance when alerting the user. Budgets can be tracked at the monthly, quarterly, or yearly level, and have customizable start and end dates. Alerts can be sent via email and/or an Amazon Simple Notification Service (SNS) topic. https://aws.amazon.com/aws-cost-management/aws-budgets/ Option A is INCORRECT because a regular review will neither stop nor alert the department if their service bill were to exceed their stipulated budget. https://docs.aws.amazon.com/account-billing/index.html Option B is INCORRECT because selecting the Receive Free Tier Usage Alerts checkbox would only notify the department each time their service bill goes out of the free-tier range, not when it approaches their limit. https://aws.amazon.com/about-aws/whats-new/2017/12/aws-free-tier-usage-alerts-automatically-notify-you-when-you-are-forecasted-to-exceed-your-aws-service-usage-limits/ Option C is INCORRECT because configuring an alarm in AWS CloudWatch that triggers after exceeding the bill will not meet the requirements of staying within the desired budget. The alarm triggers only when actual billing exceeds the specified threshold; it does not use projections based on the usage so far in the month. https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/monitor_estimated_charges_with_cloudwatch.html
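
The key difference between D and C is the forecast. A toy version of a forecast-based alert: project month-end spend linearly from month-to-date actuals and flag it when the projection crosses the budget, before the actual bill does. All dollar figures are invented; real AWS Budgets forecasting is more sophisticated than a straight-line projection.

```python
def forecast_month_end(spend_to_date, day_of_month, days_in_month):
    """Naive linear projection of month-end spend from month-to-date actuals."""
    return spend_to_date / day_of_month * days_in_month

def should_alert(spend_to_date, day_of_month, days_in_month, budget):
    """Alert when the *forecast* (not the actual bill) exceeds the budget."""
    return forecast_month_end(spend_to_date, day_of_month, days_in_month) > budget

# $620 spent by day 10 of a 31-day month against a $1,500 budget:
# projected month-end spend is $1,922, so the alert fires early.
alert = should_alert(620.0, 10, 31, 1500.0)
```

A threshold-only CloudWatch billing alarm would stay silent in this example until actual charges passed $1,500, by which point the budget is already blown.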

A cloud solutions architect needs to execute urgent mission-critical tasks on the AWS Management console, but has left their Windows-based machine at home. What secure option can be used to administer these tasks on the cloud infrastructure given that only non-graphical user interface (non-GUI), Linux-based machines are readily available? A. Share the AWS Management console credentials with the person at home over the phone, so they can execute on the cloud solutions architect's behalf B. Use third-party remote desktop software to access the Windows-based machine at home from the non-GUI workstations and administer the necessary tasks C. Use Secure Shell (SSH) to securely connect to the Windows-based machine from one of the non-GUI Linux-based machines then log onto the AWS Management console D. Install and run AWS CLI on one of the non-GUI Linux-based machines; in a shell environment such as bash, the cloud solutions architect can access ALL services just as they could from a Windows-based machine.

Correct Answer - D The AWS Command Line Interface (AWS CLI) is an open source tool that enables access to and interaction with AWS services using commands in a command-line shell. With minimal configuration the cloud solutions architect can use functionality equivalent to that provided by the browser-based AWS Management Console from the command prompt in a terminal program such as bash. https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html Option A. is INCORRECT because sharing AWS Management console credentials is bad practice and poses a high security risk. https://aws.amazon.com/iam/details/managing-user-credentials/ Option B. is INCORRECT because accessing the AWS Management console via third-party remote desktop software is insecure, since the remote machine can be compromised. Option C. is INCORRECT because, though secure, it is rather cumbersome in comparison and overlooks the direct access that the AWS CLI provides.

An organization has a persistently high amount of throughput and requires connectivity with no jitter and very low latency between its on-premises infrastructure and its AWS cloud build to support live streaming and real-time services. What is the MOST appropriate solution to meet this requirement? A. AWS Data Streams B. AWS Kinesis C. Kinesis Data Firehose D. AWS Direct Connect

Correct Answer - D AWS Direct Connect is a cloud service solution that makes it easy to establish a dedicated network connection from the organization's premises to AWS. The service provides a dedicated network connection to one of the AWS Direct Connect locations, making it possible to guarantee high-bandwidth and very low latency connectivity. https://aws.amazon.com/directconnect/ Option A. is INCORRECT because the scenario requires a connectivity option, whilst Amazon Kinesis Data Streams (KDS) is a massively scalable and durable real-time data streaming service. It does not, however, guarantee the quality of connectivity between the organization's on-premises infrastructure and the AWS cloud build. The data KDS collects is available in milliseconds to enable real-time analytics use cases such as real-time dashboards, real-time anomaly detection, dynamic pricing, and more. https://aws.amazon.com/kinesis/data-streams/ Option B. is INCORRECT because the organization requires a connectivity solution and not an application service. Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data in order to get timely insights and react quickly to new information. https://aws.amazon.com/kinesis/ Option C. is INCORRECT because Amazon Kinesis Data Firehose is used to load streaming data into various destinations like data lakes, data stores and analytics tools. The service, however, does not guarantee link quality between the organization's on-premises infrastructure and that in the AWS cloud. https://aws.amazon.com/kinesis/data-firehose/

Select the use case scenario that best suits the implementation of an Amazon RDS database instance over a NoSQL/non-relational database. A. Where datasets are constantly evolving and cannot be confined to a static data schema B. Where vertical scaling of the database's resources is not permissible and is seldom necessary. C. In an organization whose datasets are dynamic and document-based D. In an organization where only a finite number of processes query the database in predictable and well-structured schemas.

Correct Answer - D Amazon Relational Database Service (RDS) is best suited in scenarios where the datasets and forms are consistent, such that their data schema is persistently valid. It is best deployed in an environment where the load can be anticipated and is somewhat finite. Amazon RDS engines include Amazon Aurora, MariaDB and PostgreSQL. https://aws.amazon.com/rds/ Option A. is INCORRECT because Amazon RDS engines are inappropriate in a scenario where datasets are constantly evolving and the data schema is flexible; NoSQL/non-relational databases fit this use case. Option B. is INCORRECT because Amazon RDS engines will scale up with the increase in load, and this is often necessary as the traffic to the database increases. Option C. is INCORRECT because in a scenario where the datasets are dynamic and document-based, the use of JSON and not SQL is appropriate, therefore a non-relational/NoSQL database engine such as Amazon DynamoDB fits. https://aws.amazon.com/nosql/
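
The "persistently valid schema" criterion can be illustrated with a simple check: every record destined for a relational table must carry exactly the same columns, whereas document stores tolerate drift. The schema and records below are hypothetical.

```python
# A fixed relational schema: every row must have exactly these columns.
SCHEMA = {"id", "name", "enrolled"}

def fits_relational_schema(record):
    """A record suits an RDS table only if its fields match the schema exactly."""
    return set(record) == SCHEMA

structured = {"id": 1, "name": "Ada", "enrolled": True}
evolving = {"id": 2, "name": "Lin", "tags": ["new-field"]}  # schema drift

ok_for_rds = fits_relational_schema(structured)     # predictable shape
needs_nosql = not fits_relational_schema(evolving)  # flexible/document shape
```

A relational engine would force the second record through an ALTER TABLE migration; a document database like DynamoDB simply stores it as-is.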

Which use case would warrant the cost-effective implementation of Amazon EC2 Reserved Instances with Spot Instances in the same build? A. A build that has sudden unpredictable workload spikes but for a short time horizon B. One in which there is a predictable resource demand over a long time horizon C. One that has a predictable workload over a long time horizon with prolonged and unpredictable spikes. D. One that has a constantly predictable workload with brief unpredictable spikes

Correct Answer - D In use cases that are characterized by a constantly predictable workload with brief unpredictable spikes, Amazon EC2 Reserved Instances would be the most cost-effective way to meet the constantly predictable workload, whilst Spot Instances in an auto scaling group would suffice to meet the spike demands of the build. https://aws.amazon.com/solutions/case-studies/mercadolibre-ec2/ Option A. is INCORRECT because this use case would be cost-effectively serviced by Amazon EC2 On-Demand Instances in an auto scaling group to meet the resource demands of the spikes. Option B. is INCORRECT because this use case would be cost-effectively serviced by Amazon EC2 Reserved Instances alone. Option C. is INCORRECT because this use case would be cost-effectively serviced by Amazon EC2 Reserved Instances with On-Demand Instances in an auto scaling group to meet the resource demands of the spikes.
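
A back-of-envelope calculation shows why the mixed fleet is cheap: Reserved Instances cover the steady baseline at a discounted rate, and Spot covers the brief spikes at a deep discount. All hourly rates and fleet sizes below are illustrative assumptions, not real AWS prices.

```python
# Hypothetical hourly rates, with Spot at ~70% off On-Demand and RIs at
# ~40% off. Real prices vary by instance type and region.
ON_DEMAND = 0.10   # $/instance-hour
RESERVED = 0.06    # $/instance-hour (effective)
SPOT = 0.03        # $/instance-hour

def monthly_cost(baseline_instances, spike_instances, spike_hours):
    """Mixed fleet: RI baseline running all month + Spot for the spike hours."""
    hours = 730  # average hours in a month
    baseline = baseline_instances * RESERVED * hours
    spikes = spike_instances * SPOT * spike_hours
    return round(baseline + spikes, 2)

mixed = monthly_cost(10, 20, 15)  # 10-instance baseline, 20-instance spikes
all_on_demand = round(10 * ON_DEMAND * 730 + 20 * ON_DEMAND * 15, 2)
```

Under these assumed rates the mixed fleet runs the same workload at well under two-thirds of the all-On-Demand cost, while interruptible Spot capacity is confined to the brief spikes where a reclaimed instance hurts least.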

Which of the following is a prerequisite for using AWS OpsWorks to manage applications on servers at the customer datacentres (on-premises compute servers)? A. Servers should be running on either Linux or Windows OS with connectivity to AWS public endpoints. B. Servers should be running Linux OS with connectivity to AWS private endpoints. C. Servers should be running Windows OS with connectivity to AWS private endpoints. D. Servers should be running Linux OS with connectivity to AWS public endpoints.

Correct Answer - D The AWS documentation states that you can use AWS OpsWorks Stacks to configure and manage both Linux and Windows EC2 instances. The question, however, focuses on compute servers in the customer's own data centers, i.e. outside of AWS, so no EC2 instance is involved. A VPC endpoint enables private connections between your VPC and supported AWS services and VPC endpoint services powered by AWS PrivateLink; VPC endpoints are virtual devices. To use AWS OpsWorks for servers in customer data centers, the servers must run a Linux operating system with the OpsWorks Stacks agent installed and have connectivity to AWS public endpoints. In addition to creating Amazon EC2 instances with AWS OpsWorks Stacks, you can also register instances with a Linux stack that were created outside of AWS OpsWorks Stacks; however, they must be running one of the supported Linux distributions. You cannot register Amazon EC2 or on-premises Windows instances. Option A is incorrect because servers deployed at customer data centers support only Linux OS, not both. Option B is incorrect because servers deployed at customer data centers should have connectivity to AWS public endpoints instead of private endpoints. Option C is incorrect because on-premises servers should have Linux OS instead of Windows OS. For more information on AWS OpsWorks, refer to the following URLs: https://aws.amazon.com/opsworks/stacks/features/ https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-os.html https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints.html

Which of the following is an optional security layer that can be attached to a subnet within a VPC to control traffic in and out of that subnet? A. VPC Flow Logs B. Web Application Firewall C. Security Group D. Network ACL

Correct Answer - D A network ACL can additionally be configured at the subnet level to control traffic in and out of the subnets in a VPC. Option A is incorrect. VPC Flow Logs capture information about the IP traffic going in and out of a VPC; they are used for visibility, not for controlling traffic. Option B is incorrect. A Web Application Firewall (WAF) protects web applications from common security threats. It is deployed with services such as Amazon CloudFront, Application Load Balancer, and Amazon API Gateway, not attached to subnets. Option C is incorrect. Security Groups are attached at the instance level, not at the subnet level. For more information on security within a VPC, refer to the following URL: https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Security.html#VPC_Security_Comparison
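A key behavioral difference is that network ACL rules are evaluated in ascending rule-number order, the first match applies, and anything unmatched hits an implicit final deny. The sketch below models that evaluation locally; it is an illustration, not an AWS API, and the rule numbers and port ranges are hypothetical.

```python
# Illustrative model of how a network ACL evaluates inbound traffic:
# rules are checked in ascending rule-number order, the first matching
# rule applies, and an implicit final rule denies anything unmatched.

def evaluate_nacl(rules, port):
    """Return 'ALLOW' or 'DENY' for inbound traffic on the given port."""
    for rule in sorted(rules, key=lambda r: r["number"]):
        lo, hi = rule["port_range"]
        if lo <= port <= hi:
            return rule["action"]
    return "DENY"  # implicit deny when no rule matches

# Hypothetical subnet NACL: allow HTTPS, explicitly deny other low ports
inbound_rules = [
    {"number": 100, "port_range": (443, 443), "action": "ALLOW"},
    {"number": 200, "port_range": (0, 32767), "action": "DENY"},
]

print(evaluate_nacl(inbound_rules, 443))  # HTTPS matches rule 100 first
print(evaluate_nacl(inbound_rules, 22))   # SSH falls through to rule 200
```

Unlike a security group, which is stateful and allow-only, this model also needs explicit deny rules, which is why rule ordering matters.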

A file-sharing service uses Amazon S3 to store files uploaded by users. Files are accessed with varying frequency: popular files are downloaded every day, while others are accessed occasionally and some rarely. What is the most cost-effective Amazon S3 object storage class to implement? A. Amazon S3 Standard B. Amazon S3 Glacier C. Amazon S3 One Zone-Infrequent Access D. Amazon S3 Intelligent-Tiering

Correct Answer - D S3 Intelligent-Tiering is an Amazon S3 storage class designed for customers who want to optimize storage costs automatically when data access patterns change, without performance impact or operational overhead. It delivers automatic cost savings by moving data between two access tiers (a frequent access tier and a lower-cost infrequent access tier) as access patterns change, and is ideal for data with unknown or changing access patterns. For a small monthly monitoring and automation fee per object, S3 Intelligent-Tiering monitors access patterns and moves objects that have not been accessed for 30 consecutive days to the infrequent access tier. There are no retrieval fees in S3 Intelligent-Tiering. If an object in the infrequent access tier is accessed later, it is automatically moved back to the frequent access tier, and no additional tiering fees apply when objects move between access tiers within the storage class. S3 Intelligent-Tiering is designed for 99.9% availability and 99.999999999% durability, and offers the same low latency and high throughput performance as S3 Standard. https://aws.amazon.com/about-aws/whats-new/2018/11/s3-intelligent-tiering/ Option A is incorrect because Amazon S3 Standard would be an inefficient class for storing objects that are accessed rarely. Option B is incorrect because storing frequently accessed objects in Amazon S3 Glacier would create operational bottlenecks, since those objects would not be available instantly. https://aws.amazon.com/s3/storage-classes/ Option C is incorrect because Amazon S3 One Zone-Infrequent Access would be inefficient for a mix of rarely accessed and frequently accessed objects.
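To see why Intelligent-Tiering wins for mixed access, the cost model above can be sketched as simple arithmetic. All unit prices below are assumed placeholders for illustration only; check the S3 pricing page for your region before relying on any numbers.

```python
# Back-of-envelope comparison of S3 Standard vs S3 Intelligent-Tiering for a
# mixed access pattern. ALL prices are ASSUMED placeholders, not real quotes.

STANDARD_GB_MONTH = 0.023           # assumed S3 Standard price per GB-month
IA_TIER_GB_MONTH = 0.0125           # assumed infrequent-access tier price
MONITORING_PER_1K_OBJECTS = 0.0025  # assumed Intelligent-Tiering monitoring fee

def standard_cost(total_gb):
    """Everything billed at the Standard rate."""
    return total_gb * STANDARD_GB_MONTH

def intelligent_tiering_cost(hot_gb, cold_gb, objects):
    """Hot data billed like Standard, data idle 30+ days billed at the IA
    tier, plus a small per-object monitoring charge."""
    storage = hot_gb * STANDARD_GB_MONTH + cold_gb * IA_TIER_GB_MONTH
    monitoring = objects / 1000 * MONITORING_PER_1K_OBJECTS
    return storage + monitoring

# 1 TB total: 200 GB frequently downloaded, 824 GB rarely touched, 50k objects
print(standard_cost(1024))
print(intelligent_tiering_cost(200, 824, 50_000))
```

With most of the data idle, the automatic move to the infrequent access tier outweighs the monitoring fee, which is the cost-effectiveness argument behind answer D.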

A web administrator maintains several public and private web-based resources for an organization. Which service can they use to keep track of the expiry dates of SSL/TLS certificates and handle their update and renewal? A. AWS Data Lifecycle Manager B. AWS License Manager C. AWS Firewall Manager D. AWS Certificate Manager

Correct Answer - D AWS Certificate Manager allows the web administrator to maintain one or several SSL/TLS certificates, both private and public, including their update and renewal, so that the administrator does not have to worry about imminent certificate expiry. https://aws.amazon.com/certificate-manager/ Option A is INCORRECT because AWS Data Lifecycle Manager serves the purpose of creating lifecycle policies for specified resources in order to automate operations. https://docs.aws.amazon.com/dlm/?id=docs_gateway Option B is INCORRECT because AWS License Manager is used to track and maintain third-party software vendor licenses, decreasing the risk of license expirations and the associated penalties. https://docs.aws.amazon.com/license-manager/?id=docs_gateway Option C is INCORRECT because AWS Firewall Manager aids in the administration of the Web Application Firewall (WAF) by presenting a centralized point for setting firewall rules across different web resources. https://docs.aws.amazon.com/firewall-manager/?id=docs_gateway

Which of the following is a customer responsibility under AWS Shared Responsibility Model? A. Patching of host OS deployed on Amazon S3. B. Logical Access controls for underlying infrastructure. C. Physical security of the facilities. D. Patching of guest OS deployed on Amazon EC2 instance.

Correct Answer - D Under the AWS shared responsibility model, AWS takes care of infrastructure configuration and management, while customers must take care of the resources they launch within AWS. Option A is incorrect. Amazon S3 is part of the infrastructure layer, and patching and configuration of the host OS for Amazon S3 is the responsibility of AWS. Option B is incorrect. AWS is responsible for the logical access controls for the underlying infrastructure. Option C is incorrect. Physical security of the facilities is an AWS responsibility. For more information on the shared responsibility model, refer to the following URL: https://aws.amazon.com/compliance/shared-responsibility-model

Which of the following solutions can improve the performance of a web server that receives high levels of traffic, running an online banking portal? (Select TWO). A. Web server to be configured with a private IP address and hosted behind a NAT gateway B. Web server to be configured with a public IP address and hosted behind a WAF C. For an added layer of security, host the web server on a Content Distribution Network (CDN) D. Use of SSL acceleration E. Relieve computational overhead on the web server by offloading HTTPS session processes to hardware security modules in an AWS CloudHSM cluster.

Correct Answer - D, E During SSL/TLS session transactions between browsers and the web server, there is a heightened demand for computational capacity, so offloading these processes to an AWS CloudHSM cluster greatly improves the performance of the web server. This is referred to as SSL acceleration. https://docs.aws.amazon.com/cloudhsm/latest/userguide/ssl-offload.html Option A is INCORRECT because configuring the server with a private IP address behind a NAT gateway will not improve the performance of the web server, though it will add a security layer. Option B is INCORRECT because configuring the server with a public IP address will not add any performance enhancement to the web server. Option C is INCORRECT because hosting the web server on a CDN will not add a layer of security. https://aws.amazon.com/cloudfront/

Which of the following are accurate statements regarding AWS resource tags? (Select TWO.) A. All AWS resource tags have a semantic interpretation B. Within a resource tag, every defined key must have a value string C. By default, resource tags are assigned as null, null D. Resource tags can be edited or removed at any time E. Placement groups do not support tags

Correct Answer - D, E Resource tags are critical when architecting in the cloud: they label assets so they can be easily administered and managed, and they are useful when running queries for billing, auditing, and asset lookups. A tag is a user-defined key-value pair of variable characters. Placement groups do not support tags. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html Option A is INCORRECT because resource tags do not have a semantic meaning; they are plain-text characters whose utility is defined by the user who creates them. Option B is INCORRECT because a user can define a key and leave its value string empty, but cannot set the value to null. Option C is INCORRECT because resource tags are not assigned by default; the user has to define them explicitly.
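The key-required, empty-value-allowed behavior described above can be captured in a small validation helper. This is a local sketch reflecting documented EC2 tag restrictions (length limits and the reserved "aws:" prefix), not an AWS API call.

```python
# Sketch of tag constraints: a key is mandatory, the value may be an empty
# string (but not null), keys are capped at 128 characters and values at
# 256, and the "aws:" key prefix is reserved for AWS use.

def validate_tag(key, value):
    if key is None or key == "":
        return "invalid: a tag key is required"
    if value is None:
        return "invalid: value must be a string (an empty string is allowed)"
    if len(key) > 128 or len(value) > 256:
        return "invalid: key <= 128 chars, value <= 256 chars"
    if key.lower().startswith("aws:"):
        return "invalid: the aws: prefix is reserved for AWS"
    return "ok"

print(validate_tag("environment", "production"))  # ok
print(validate_tag("owner", ""))                  # ok - empty value allowed
print(validate_tag("team", None))                 # invalid - null value
```

This mirrors why option B is wrong (empty value strings are fine) and option C is wrong (nothing is tagged null, null by default).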

A financial company with many resources running on AWS would like a machine learning-driven and proactive security solution that would promptly identify security vulnerabilities, particularly flagging suspicious or abnormal data patterns or activity between AWS services. Which AWS service would best meet this requirement? A. Amazon Detective B. Amazon Macie C. AWS Shield D. Amazon CloudWatch Anomaly Detection

Correct Answer: A Amazon Detective is a machine learning-driven service that automatically collates log data from your AWS resources. Detective applies machine learning, graph theory, and statistical analysis to this log data to derive data patterns between AWS services and resources. This information allows the user to proactively visualize their AWS environment from a security standpoint, and to quickly and efficiently conduct security investigations when incidents occur. Option B is INCORRECT because Amazon Macie primarily discovers and matches sensitive data such as personally identifiable information (PII) but does not have the capability to track data behaviors between AWS services to detect anomalies, so it does not meet the requirement. Option C is INCORRECT because AWS Shield is a Distributed Denial of Service (DDoS) protection service for applications running in the AWS environment. The service does not have machine learning capability to track data behaviors between AWS services. Option D is INCORRECT because Amazon CloudWatch Anomaly Detection is a machine learning feature limited to Amazon CloudWatch metrics. It does not extend to all AWS services, so it does not meet the requirement. For more information: https://docs.aws.amazon.com/detective/latest/adminguide/what-is-detective.html

After moving their workload to the AWS eu-central-1 region, an administrator would like to configure their email server on an Amazon EC2 instance in a private subnet of the VPC, which will use Amazon SES. What is the most effective setup to implement? A. Configure a VPC endpoint powered by AWS PrivateLink. B. Ensure that the private subnet has a route to a NAT gateway in a public subnet. C. Configure the email server with the appropriate Amazon SES endpoint for the eu-central-1 region, email-smtp.eu-central-1.amazonaws.com. D. Configure the email server to use a service port other than port 25 to avoid Amazon EC2 throttling.

Correct Answer: A As of April 29, 2020, AWS announced the addition of Amazon SES as a service available over a VPC endpoint powered by AWS PrivateLink. This makes it possible to configure a VPC endpoint that the email server can reach within the VPC without the need for internet access. This is the most effective setup to implement. Option B is INCORRECT because routing the private subnet through a NAT gateway in a public subnet is feasible, but the traffic would traverse the internet to reach the Amazon SES endpoints; this is not the most effective setup. Option C is INCORRECT because the Amazon EC2 instance is in a private subnet and has no internet access to reach the public Amazon SES endpoint. Option D is INCORRECT because using a service port other than port 25 is recommended, but it does not address how the email server will reach the Amazon SES endpoint. For more information: https://aws.amazon.com/about-aws/whats-new/2020/04/amazon-ses-now-offers-vpc-endpoint-support-for-smtp-endpoints/
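The regional SES SMTP endpoint in option C follows the pattern email-smtp.<region>.amazonaws.com. A minimal helper that builds the hostname for a given region is sketched below; one appeal of the VPC endpoint in option A is that the mail client keeps using this same hostname while the traffic is resolved privately.

```python
# Build the regional Amazon SES SMTP endpoint hostname, following the
# email-smtp.<region>.amazonaws.com pattern named in the question.

def ses_smtp_endpoint(region):
    return f"email-smtp.{region}.amazonaws.com"

print(ses_smtp_endpoint("eu-central-1"))
# email-smtp.eu-central-1.amazonaws.com
```

An SMTP client (e.g. Python's smtplib) would connect to this hostname on a submission port such as 587; no connection is attempted here since the instance in the scenario has no internet access.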

What is a valid difference between AWS Global Accelerator and Amazon CloudFront? Choose TWO responses. A. AWS Global Accelerator uses Anycast techniques to accelerate latency-sensitive applications; Amazon CloudFront uses Unicast. B. Amazon CloudFront makes use of Edge Locations and edge infrastructure, whilst AWS Global Accelerator does not. C. AWS Global Accelerator does not include the content caching capability that Amazon CloudFront does. D. AWS Global Accelerator is suitable for non-HTTP/S applications such as VoIP, MQTT and gaming, whereas Amazon CloudFront enhances the performance of HTTP-based content such as dynamic web applications, images and videos. E. For the resource endpoint, Amazon CloudFront offers static public IP addresses whilst AWS Global Accelerator does not.

Correct Answer: C, D AWS Global Accelerator uses the highly available, high-speed AWS global network and anycast routing techniques to greatly improve the availability and network performance of the customer application. By leveraging Edge Locations and edge infrastructure, traffic to and from customer application endpoints ingresses and egresses the AWS global network at locations geographically closer to clients. Amazon CloudFront is a content delivery network (CDN) that improves the performance of cacheable web content, like videos and images, using content caches at Edge Locations. Option A is INCORRECT because Amazon CloudFront does not use Unicast techniques; it uses a content caching mechanism to deliver enhanced web application performance. Option B is INCORRECT because both AWS Global Accelerator and Amazon CloudFront make use of Edge Locations and edge infrastructure on the AWS global network. Option E is INCORRECT because Global Accelerator provides static public IP addresses for the customer resource endpoints, whilst the fully-qualified domain name of an Amazon CloudFront distribution can resolve to dynamic public IP addresses. https://aws.amazon.com/global-accelerator/faqs/ https://youtu.be/GAxrPQ3ycsQ https://youtu.be/AT-nHW3_SVI

I have a client who is moving their on-premises workloads to AWS. Since they are very cost conscious, they would like to get first-hand information on the expenses they will incur while using AWS services. Which of the following will help them do that? A. AWS Cost Explorer B. AWS Organizations C. AWS Budgets D. AWS Pricing Calculator

Correct Answer: D Option A is incorrect since Cost Explorer helps users view graphical displays of their billing data, analyze them, and get a forecast of likely spend for the next 12 months. The scenario, however, is about the client getting a cost estimate of different AWS services before moving to the AWS cloud. Option B is incorrect since AWS Organizations allows clients to consolidate multiple AWS accounts into an organization in which they can centrally control parameters like account billing and IAM permissions. AWS Organizations provides a feature called consolidated billing that produces a single bill for multiple accounts. Option C is incorrect since AWS Budgets helps clients plan their service usage and service costs and get alerts when the costs reach a certain threshold. Option D is CORRECT. Through the AWS Pricing Calculator, a client can estimate the costs they will incur for the various AWS services they wish to use. The calculator guides the user through a set of well-defined service parameters, e.g. if S3 is planned for a static website, then parameters like "Standard storage per month" and "PUT, COPY, LIST, POST" requests to S3 Standard are relevant for determining the monthly cost of using S3. For more information: https://docs.aws.amazon.com/pricing-calculator/latest/userguide/what-is-pricing-calculator.html https://youtu.be/JWz4eCczCkQ
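The kind of arithmetic the Pricing Calculator performs for the S3 static-website case can be sketched as follows. All unit prices here are assumed placeholders purely for illustration; real prices vary by region and tier.

```python
# Rough sketch of a monthly S3 estimate from storage and request volumes,
# the inputs the Pricing Calculator walks the user through.
# ALL unit prices below are ASSUMED placeholders, not real AWS prices.

STORAGE_PER_GB_MONTH = 0.023  # assumed S3 Standard storage price
PUT_PER_1K_REQUESTS = 0.005   # assumed PUT/COPY/POST/LIST request price
GET_PER_1K_REQUESTS = 0.0004  # assumed GET request price

def monthly_s3_estimate(storage_gb, put_requests, get_requests):
    """Storage cost plus tiered request costs, per month."""
    return (storage_gb * STORAGE_PER_GB_MONTH
            + put_requests / 1000 * PUT_PER_1K_REQUESTS
            + get_requests / 1000 * GET_PER_1K_REQUESTS)

# 50 GB of site assets, 10k uploads, 2 million asset downloads per month
print(monthly_s3_estimate(50, 10_000, 2_000_000))
```

The calculator performs this kind of computation across every service in the plan and sums the results into a pre-migration estimate, which is why it fits the scenario better than the after-the-fact tools.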

Which of the following statements regarding billing, cost optimization and cost management in AWS is accurate? A. When considering migrating to the cloud, the AWS Total Cost of Ownership (TCO) calculator is guaranteed to save up to 80% of the cost of running on-premises infrastructure. B. In AWS Budgets, utilizing Cost and Usage budgets will optimize and reduce the overall spend by 79%. C. The AWS Pricing Calculator will work out a revised bill that can reduce the overall spend by 60% if you commit to a long-term usage plan. D. When using Savings Plans, up to 72% savings can be made on Amazon EC2, AWS Fargate, and AWS Lambda usage.

Correct Answer: D Savings Plans are a flexible discount pricing model that offers reduced rates if the customer commits to consistent usage for a one-year or three-year term. They apply to Amazon EC2, AWS Fargate, and AWS Lambda usage. https://docs.aws.amazon.com/savingsplans/latest/userguide/what-is-savings-plans.html Option A is INCORRECT because the AWS Total Cost of Ownership (TCO) calculator is an estimation tool. It does not guarantee saving up to 80% of the cost of running on-premises infrastructure; it allows the customer to estimate and anticipate their total AWS spend according to their use case. Option B is INCORRECT because in AWS Budgets, Cost and Usage budgets give the customer foresight into how much they would like to use and spend on their AWS services. Utilizing this service will not reduce the overall spend by an exact percentage, so the statement is inaccurate. Option C is INCORRECT because the AWS Pricing Calculator does not revise the customer's bill. It allows the customer to derive an estimate of the cost of their AWS resources before the costs are incurred, so the statement is inaccurate.
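The "up to 72%" figure is a discount relative to On-Demand pricing, which is a one-line calculation. The hourly rates below are made-up illustrative numbers, not real AWS prices.

```python
# Effective Savings Plan discount versus the On-Demand rate.

def discount_percent(on_demand_rate, savings_plan_rate):
    """Percentage saved by paying the committed rate instead of On-Demand."""
    return (1 - savings_plan_rate / on_demand_rate) * 100

# Hypothetical instance: $0.100/hr On-Demand vs $0.028/hr on a Savings Plan
print(round(discount_percent(0.100, 0.028)))  # roughly the advertised maximum
```

The deepest discounts generally come from the longest commitment terms; the advertised 72% is a ceiling, not a guaranteed rate for every workload.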

Which of the following are NOT areas of shared controls (shared between AWS and the customer in different contexts) within the AWS Shared Responsibility Model? (Select TWO.) A. Configuration Management B. Service & Communication Protection C. Patch Management D. IAM User Management E. Training & Awareness

Correct Answers: B and D Shared controls apply to both the infrastructure and customer layers, but in completely separate contexts. Under shared controls, AWS provides the requirements for the infrastructure while customers must provide their own control implementation for the AWS services they use. Option A is incorrect since configuration management has shared controls: AWS is responsible for configuring infrastructure devices while the customer is responsible for configuring their guest OS and applications. Option B is CORRECT since service and communication protection may be subject to data zoning and protection within specific security environments. This is primarily the responsibility of the customer; AWS does not play a role in it. It may take the form of configuring NACLs, Security Groups, data encryption, etc. Option C is incorrect since AWS is responsible for detecting and patching flaws within the infrastructure while the customer is responsible for patching their guest OS and applications. Option D is CORRECT since IAM and user management refer to security "in" the cloud and are best managed by the customer. Option E is incorrect since AWS trains its own employees while customers need to train their own employees. https://aws.amazon.com/compliance/shared-responsibility-model/

Which of the following services can be used to optimize performance for global users to transfer large-sized data objects to a centralized Amazon S3 bucket in us-west-1 region? A. Enable S3 Transfer Acceleration on Amazon S3 bucket. B. Use Amazon CloudFront Put/Post commands C. Use Multipart upload D. Use Amazon ElastiCache

Correct Answer - A S3 Transfer Acceleration can optimize performance for data transfers between users and objects in an Amazon S3 bucket. Transfer Acceleration uses CloudFront edge locations to provide accelerated data transfer to users. Option B is incorrect as Amazon CloudFront Put/Post commands can be used for small-sized objects, but for large-sized data objects S3 Transfer Acceleration provides better performance. Option C is incorrect as users should use multipart uploads for all data objects exceeding 100 megabytes, but for better performance S3 Transfer Acceleration should also be enabled. Option D is incorrect as, for global users accessing an S3 bucket, S3 Transfer Acceleration is a better choice. For more information on Amazon S3 Transfer Acceleration, refer to the following URLs: https://aws.amazon.com/s3/faqs/#s3ta https://docs.aws.amazon.com/AmazonS3/latest/dev/optimizing-performance.html
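Multipart upload, mentioned for option C, splits a large object into parts that are uploaded independently (and in parallel). A quick sketch of the part math, using the 100 MB threshold from the explanation and an assumed 16 MB part size:

```python
import math

# Assumed part size for illustration; S3 accepts parts from 5 MB to 5 GB
# (the final part may be smaller), with at most 10,000 parts per upload.
PART_SIZE_MB = 16

def multipart_parts(object_size_mb):
    """Number of parts needed to upload an object of the given size."""
    parts = math.ceil(object_size_mb / PART_SIZE_MB)
    if parts > 10_000:
        raise ValueError("increase the part size: max 10,000 parts per upload")
    return parts

print(multipart_parts(100))   # a 100 MB object -> 7 parts of up to 16 MB
print(multipart_parts(2048))  # a 2 GB object -> 128 parts
```

Multipart upload improves throughput and retry behavior for a single large object, while Transfer Acceleration shortens the network path for globally distributed users; the two are complementary, which is why the explanation recommends enabling both.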

Which of the following is a benefit of running an application across two Availability Zones? A. Performance is improved over running in a single Availability Zone. B. It is more secure than running in a single Availability Zone. C. It significantly reduces the total cost of ownership versus running in a single Availability Zone. D. It increases the availability of an application compared to running in a single Availability Zone.

The correct answer is Option D. Each AZ is a set of one or more data centers. By deploying your AWS resources to multiple Availability Zones, you are designing with failure in mind. If one AZ were to go down, the other AZs would still be up and running, and hence your application would be more fault tolerant. For more information on AWS Regions and AZs, please refer to the below URL: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.RegionsAndAvailabilityZones.html Options A, B and C are incorrect. Spreading an application across Availability Zones increases redundancy. It does not have a positive impact on performance, security or cost.

