AWS Certified Cloud Practitioner - Practice Test #5
An engineering team would like to cost-effectively run hundreds of thousands of batch computing workloads on AWS. As a Cloud Practitioner, which AWS service would you use for this task? Amazon Lightsail AWS Lambda AWS Batch AWS Fargate
Explanation Correct option: AWS Batch AWS Batch enables developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing jobs on AWS. You can use AWS Batch to plan, schedule, and execute your batch computing workloads across the full range of AWS compute services. AWS Batch dynamically provisions the optimal quantity and type of compute resources (e.g., CPU or memory optimized instances) based on the volume and specific resource requirements of the submitted batch jobs, and optimizes job distribution accordingly. Please review the common use-cases for AWS Batch: via - https://aws.amazon.com/batch/ Incorrect options: AWS Lambda - AWS Lambda lets you run code without provisioning or managing servers. It can be used to run batch jobs, but it has an execution time limit and a limited set of runtimes, so it is usually used for smaller batch jobs. Amazon Lightsail - Amazon Lightsail is designed to be the easiest way to launch and manage a virtual private server with AWS. Lightsail plans include everything you need to jumpstart your project - a virtual machine, SSD-based storage, data transfer, DNS management, and a static IP address - for a low, predictable price. It is not used to run batch jobs. AWS Fargate - AWS Fargate is a compute engine for Amazon ECS that allows you to run containers without having to manage servers or clusters. You can run batch jobs on Fargate, but it is more expensive than AWS Batch. Reference: https://aws.amazon.com/batch/
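Work reaches AWS Batch through its SubmitJob API, where each job states its own CPU and memory needs so Batch can size the compute environment. The sketch below builds such a request as a plain dict; the job, queue, and job-definition names are hypothetical placeholders, and with boto3 the dict would be passed as `client.submit_job(**request)`.

```python
def build_batch_job_request(job_name, job_queue, job_definition, vcpus=1, memory_mib=2048):
    """Build the parameter dict for an AWS Batch SubmitJob call."""
    return {
        "jobName": job_name,
        "jobQueue": job_queue,
        "jobDefinition": job_definition,
        # Container overrides let each job declare its own CPU/memory,
        # which Batch uses to provision and pack compute resources.
        "containerOverrides": {
            "resourceRequirements": [
                {"type": "VCPU", "value": str(vcpus)},
                {"type": "MEMORY", "value": str(memory_mib)},
            ]
        },
    }

# hypothetical nightly ETL job needing 2 vCPUs
request = build_batch_job_request("nightly-etl", "spot-queue", "etl-job:3", vcpus=2)
```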
A company needs to use a secure online data transfer tool/service that can automate the ongoing transfers from on-premises systems into AWS while providing support for incremental data backups. Which AWS tool/service is an optimal fit for this requirement? AWS Snowmobile AWS DataSync AWS Snowcone AWS Storage Gateway
Explanation Correct option: AWS DataSync AWS DataSync is a secure online data transfer service that simplifies, automates, and accelerates copying terabytes of data to and from AWS storage services. Easily migrate or replicate large data sets without having to build custom solutions or oversee repetitive tasks. DataSync can copy data between Network File System (NFS) shares, or Server Message Block (SMB) shares, self-managed object storage, AWS Snowcone, Amazon Simple Storage Service (Amazon S3) buckets, Amazon Elastic File System (Amazon EFS) file systems, and Amazon FSx for Windows File Server file systems. You can use AWS DataSync for ongoing transfers from on-premises systems into or out of AWS for processing. DataSync can help speed up your critical hybrid cloud storage workflows in industries that need to move active files into AWS quickly. This includes machine learning in life sciences, video production in media and entertainment, and big data analytics in financial services. DataSync provides timely delivery to ensure dependent processes are not delayed. You can specify exclude filters, include filters, or both, to determine which files, folders or objects get transferred each time your task runs. AWS DataSync employs an AWS-designed transfer protocol—decoupled from the storage protocol—to accelerate data movement. The protocol performs optimizations on how, when, and what data is sent over the network. Network optimizations performed by DataSync include incremental transfers, in-line compression, and sparse file detection, as well as in-line data validation and encryption. Data Transfer between on-premises and AWS using DataSync: via - https://aws.amazon.com/datasync/ Incorrect options: AWS Storage Gateway - AWS Storage Gateway is a set of hybrid cloud services that give you on-premises access to virtually unlimited cloud storage. 
Customers use Storage Gateway to integrate AWS Cloud storage with existing on-site workloads so they can simplify storage management and reduce costs for key hybrid cloud storage use cases. These include moving backups to the cloud, using on-premises file shares backed by cloud storage, and providing low latency access to data in AWS for on-premises applications. AWS Snowmobile - AWS Snowmobile is an exabyte-scale data transfer service used to move extremely large amounts of data to AWS. You can transfer up to 100 PB per Snowmobile, a 45-foot-long ruggedized shipping container, pulled by a semi-trailer truck. Snowmobile makes it easy to move massive volumes of data to the cloud, including video libraries, image repositories, or even a complete data center migration. AWS Snowcone - AWS Snowcone is the smallest member of the AWS Snow Family of edge computing, edge storage, and data transfer devices. Weighing in at 4.5 pounds (2.1 kg), AWS Snowcone is equipped with 8 terabytes of usable storage, while AWS Snowcone Solid State Drive (SSD) supports 14 terabytes of usable storage. Both options are referred to as Snowcone; the device is ruggedized, secure, and purpose-built for use outside of a traditional data center. Its small form factor makes it a perfect fit for tight spaces or where portability is a necessity and network connectivity is unreliable. You can use Snowcone in backpacks on first responders, or for IoT, vehicular, and drone use cases. You can execute compute applications at the edge, and you can ship the device with data to AWS for offline data transfer, or you can transfer data online with AWS DataSync from edge locations. References: https://aws.amazon.com/datasync/ https://aws.amazon.com/datasync/features/
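The incremental-transfer behavior that makes DataSync a fit for ongoing backups can be illustrated with a small sketch: only files that are new, or whose size or modification time changed since the last run, are selected for the next copy. This is a simplified model for intuition, not DataSync's actual transfer protocol.

```python
def files_to_transfer(source, destination):
    """Return the files an incremental sync pass would copy.

    Both arguments map a file path to a (size, mtime) tuple; a file is
    copied when it is absent at the destination or its metadata differs.
    """
    changed = []
    for path, meta in source.items():
        if destination.get(path) != meta:
            changed.append(path)
    return sorted(changed)

# hypothetical snapshot of an on-premises share vs. the last-synced state in AWS
src = {"a.csv": (100, 1), "b.csv": (200, 5), "c.csv": (50, 2)}
dst = {"a.csv": (100, 1), "b.csv": (200, 4)}
```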
A data science team would like to build Machine Learning models for its projects. Which AWS service can it use? Amazon Comprehend Amazon SageMaker Amazon Connect Amazon Polly
Explanation Correct option: Amazon SageMaker - Amazon SageMaker is a fully-managed platform that enables developers and data scientists to quickly and easily build, train, and deploy machine learning models at any scale. Amazon SageMaker removes all the barriers that typically slow down developers who want to use machine learning. Incorrect options: Amazon Polly - You can use Amazon Polly to turn text into lifelike speech thereby allowing you to create applications that talk. Polly's Text-to-Speech (TTS) service uses advanced deep learning technologies to synthesize natural sounding human speech. Amazon Comprehend - Amazon Comprehend is a natural language processing (NLP) service that uses machine learning to find meaning and insights in text. Natural Language Processing (NLP) is a way for computers to analyze, understand, and derive meaning from textual information in a smart and useful way. By utilizing NLP, you can extract important phrases, sentiment, syntax, key entities such as brand, date, location, person, etc., and the language of the text. Amazon Connect - Amazon Connect is an omnichannel cloud contact center service that helps companies provide customer service over voice and chat at lower cost. It is not used to build machine learning models. Reference: https://aws.amazon.com/sagemaker/
Which of the following statements is an AWS best practice when architecting for the Cloud? Servers, not services Close coupling Security comes last Automation
Explanation Correct option: Automation Automation should be implemented to improve both your system's stability and the efficiency of your organization. There are many services to automate application architecture (AWS Elastic Beanstalk, Auto Scaling, AWS Lambda, etc.) to ensure more resiliency, scalability, and performance. Incorrect options: Servers, not services - The correct best practice is: "Services, not servers". AWS recommends developing, managing, and operating applications, especially at scale, using the broad set of compute, storage, database, analytics, applications, and deployment services offered by AWS to move faster and lower IT costs. Close coupling - The correct best practice is: "Loose coupling". AWS recommends that, as application complexity increases, IT systems should be designed in a way that reduces interdependencies. Therefore, a change or a failure in one component should not cascade to other components. Security comes last - AWS provides many simple ways to improve your security. Therefore, you should take advantage of them and build in a high level of security from the start. Reference: https://aws.amazon.com/architecture/well-architected/
According to the Shared Responsibility Model, which of the following is a responsibility of the customer? Protecting hardware infrastructure Firewall & networking configuration in EC2 Edge locations security Managing DynamoDB
Explanation Correct option: Firewall & networking configuration in EC2 The customers are responsible for "Security IN the cloud". It includes the configuration of the operating system, network & firewall of applications. Exam Alert: Please review the Shared Responsibility Model in detail as you can expect multiple questions on the shared responsibility model in the exam: via - https://aws.amazon.com/compliance/shared-responsibility-model/ Incorrect options: Managing DynamoDB - DynamoDB is a fully managed service. AWS operates the infrastructure layer, the operating system, and platforms, and customers access the endpoints to store and retrieve data. Protecting hardware infrastructure Edge locations security AWS is responsible for "Security OF the cloud". It includes the infrastructure, which is composed of the hardware, software, networking, and facilities that run AWS Cloud services. Reference: https://aws.amazon.com/compliance/shared-responsibility-model/
Which of the following billing timeframes is applied when running a Windows EC2 on-demand instance? Pay per hour Pay per day Pay per second Pay per minute
Explanation Correct option: Pay per second With On-Demand instances you only pay for EC2 instances you use. The use of On-Demand instances frees you from the costs and complexities of planning, purchasing, and maintaining hardware and transforms what are commonly large fixed costs into much smaller variable costs. When running a Windows EC2 on-demand instance, pay per second pricing is applied. Incorrect options: Pay per hour - When running a Windows EC2 on-demand instance, pay per second pricing is applied. Windows-based EC2 instances used to follow pay-per-hour pricing in the past. Pay per minute - Pay per minute pricing is not available for Windows EC2 on-demand instances, or any other type of on-demand EC2 instance. Pay per day - Pay per day pricing is not available for Windows EC2 on-demand instances, or any other type of on-demand EC2 instance. Reference: https://aws.amazon.com/ec2/pricing/
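Per-second billing is easy to verify with a little arithmetic: the hourly rate is divided by 3,600 and multiplied by the billable seconds, with a one-minute minimum applied per instance run. The rate below is hypothetical.

```python
def on_demand_cost(hourly_rate, seconds, minimum_seconds=60):
    """Cost of an on-demand run under per-second billing.

    AWS bills per second with a 60-second minimum per instance run,
    so very short runs are charged as a full minute.
    """
    billable = max(seconds, minimum_seconds)
    return round(hourly_rate / 3600 * billable, 6)

# hypothetical $0.36/hour instance run for 10 minutes
ten_minute_cost = on_demand_cost(0.36, 600)
```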
Which service/tool will you use to create and provide trusted users with temporary security credentials that can control access to your AWS resources? AWS Web Application Firewall (AWS WAF) Amazon Cognito AWS Single Sign-On (SSO) AWS Security Token Service (AWS STS)
Explanation Correct option: AWS Security Token Service (AWS STS) - AWS Security Token Service (AWS STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users). You can use the AWS Security Token Service (AWS STS) to create and provide trusted users with temporary security credentials that can control access to your AWS resources. Temporary security credentials work almost identically to the long-term access key credentials that your IAM users can use, with the following differences: Temporary security credentials are short-term, as the name implies. They can be configured to last for anywhere from a few minutes to several hours. After the credentials expire, AWS no longer recognizes them or allows any kind of access from API requests made with them. Temporary security credentials are not stored with the user but are generated dynamically and provided to the user when requested. When (or even before) the temporary security credentials expire, the user can request new credentials, as long as the user requesting them still has permissions to do so. Temporary security credentials are generated by AWS STS. By default, AWS STS is a global service with a single endpoint at https://sts.amazonaws.com. However, you can also choose to make AWS STS API calls to endpoints in any other supported Region. Incorrect options: Amazon Cognito - Amazon Cognito is a higher level of abstraction than STS. Amazon Cognito supports the same identity providers as AWS STS, and also supports unauthenticated (guest) access, and lets you migrate user data when a user signs in. Amazon Cognito also provides API operations for synchronizing user data so that it is preserved as users move between devices. Cognito helps create the user database, which is not possible with STS. 
AWS Single Sign-On (SSO) - AWS Single Sign-On (SSO) makes it easy to centrally manage access to multiple AWS accounts and business applications and provide users with single sign-on access to all their assigned accounts and applications from one place. With AWS SSO, you can easily manage access and user permissions to all of your accounts in AWS Organizations centrally. AWS SSO configures and maintains all the necessary permissions for your accounts automatically, without requiring any additional setup in the individual accounts. AWS Web Application Firewall (AWS WAF) - AWS WAF is a web application firewall that helps protect your web applications or APIs against common web exploits and bots that may affect availability, compromise security, or consume excessive resources. AWS WAF gives you control over how traffic reaches your applications by enabling you to create security rules that control bot traffic and block common attack patterns, such as SQL injection or cross-site scripting. Reference: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html
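Because STS credentials are short-lived, a consuming application typically checks the Expiration timestamp STS returns alongside the access key, secret key, and session token, and requests fresh credentials before it passes. A minimal sketch of that check, with hypothetical timestamps:

```python
from datetime import datetime, timedelta, timezone

def credentials_valid(expiration, now=None):
    """True while a set of STS temporary credentials is still usable.

    `expiration` models the Expiration field returned with temporary
    credentials; callers should re-request credentials before it passes.
    """
    now = now or datetime.now(timezone.utc)
    return now < expiration

# hypothetical one-hour session issued at noon UTC
issued = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
expiry = issued + timedelta(hours=1)
```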
A company would like to create a private, high bandwidth network connection between its on-premises data centers and AWS Cloud. As a Cloud Practitioner, which of the following options would you recommend? VPC Endpoints Site-to-Site VPN VPC Peering Direct Connect
Explanation Correct option: Direct Connect AWS Direct Connect is a cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS. Using AWS Direct Connect, you can establish private connectivity between AWS and your datacenter, office, or colocation environment, which in many cases can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections. How AWS Direct Connect works: via - https://aws.amazon.com/directconnect/ Incorrect options: Site-to-Site VPN - By default, instances that you launch into an Amazon VPC can't communicate with your own (remote) network. You can enable access to your remote network from your VPC by creating an AWS Site-to-Site VPN (Site-to-Site VPN) connection, and configuring routing to pass traffic through the connection. It uses the public internet and is therefore not suited for this use case. VPC Endpoints - A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by AWS PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. It does not connect your on-premises data centers and AWS Cloud. VPC Peering - A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IPv4 addresses or IPv6 addresses. It is used to connect VPCs together, and not on-premises data centers and AWS Cloud. Reference: https://aws.amazon.com/directconnect/
Which of the following statements is INCORRECT regarding EBS Volumes? EBS Volumes can persist data after their termination EBS Volumes are bound to a specific Availability Zone (AZ) EBS Volumes can be mounted to one instance at a time EBS Volumes can be bound to several Availability Zones (AZs)
Explanation Correct option: EBS Volumes can be bound to several Availability Zones (AZs) An Amazon EBS volume is a durable, block-level storage device that you can attach to your instances. After you attach a volume to an instance, you can use it as you would use a physical hard drive. When using EBS Volumes, the volume and the instance must be in the same Availability Zone. Incorrect options: EBS Volumes can be mounted to one instance at a time - At the Certified Cloud Practitioner level, you can assume that an EBS Volume is mounted to at most one instance at a time. It is also possible that an EBS Volume is not mounted to any instance. EBS Volumes are bound to a specific Availability Zone (AZ) - As mentioned, when using EBS Volumes, the volume and the instance must be in the same Availability Zone. EBS Volumes can persist data after their termination - Unlike EC2 instance store, an EBS volume is off-instance storage that can persist independently from the life of an instance. Reference: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volumes.html
A developer would like to automate operations on his on-premises environment using Chef and Puppet. Which AWS service can help with this task? AWS Batch AWS CodeDeploy AWS CloudFormation AWS OpsWorks
Explanation Correct option: AWS OpsWorks AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers. OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed, and managed across your Amazon EC2 instances or on-premises compute environments. Incorrect options: AWS CloudFormation - AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion. It does not use Chef and Puppet and is focused on provisioning and updating AWS resources. AWS CodeDeploy - AWS CodeDeploy is a service that automates code deployments to any instance, including EC2 instances and instances running on premises. It does not use Chef and Puppet, and does not deal with infrastructure configuration and orchestration. AWS Batch - AWS Batch enables developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing jobs on AWS. It is not used to automate operations on an on-premises environment using Chef and Puppet. Reference: https://aws.amazon.com/opsworks/
The IT infrastructure at a university is deployed on AWS Cloud and it's experiencing a read-intensive workload. As a Cloud Practitioner, which AWS service would you use to take the load off databases? Amazon EMR Amazon Relational Database Service (RDS) AWS Glue Amazon ElastiCache
Explanation Correct option: Amazon ElastiCache Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory caches, instead of relying entirely on slower disk-based databases. If EC2 instances are intensively reading data from a database, ElastiCache can cache some values to take the load off the database. How Amazon ElastiCache works: via - https://aws.amazon.com/elasticache/ Incorrect options: Amazon Relational Database Service (RDS) - Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while automating time-consuming administration tasks such as hardware provisioning, database setup, patching and backups. It frees you to focus on your applications so you can give them the fast performance, high availability, security and compatibility they need. It cannot be used to take the load off databases. However, ElastiCache is often used with RDS to take the load off RDS. AWS Glue - AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics. AWS Glue job is meant to be used for batch ETL data processing. It cannot be used to take the load off the databases. Amazon EMR - Amazon EMR provides a managed Hadoop framework that makes it easy, fast, and cost-effective to process vast amounts of data across dynamically scalable Amazon EC2 instances. It cannot be used to take the load off the databases. Reference: https://aws.amazon.com/elasticache/
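The way a cache takes load off a database is usually the cache-aside pattern: the application checks the cache first, and only on a miss queries the database and stores the result for later reads. A minimal sketch with a plain dict standing in for an ElastiCache node:

```python
def cache_aside_get(key, cache, db_lookup):
    """Cache-aside read: serve from cache if present, otherwise hit the
    database and populate the cache so later reads skip the database."""
    if key in cache:
        return cache[key], "cache"
    value = db_lookup(key)  # only reached on a cache miss
    cache[key] = value
    return value, "database"

# hypothetical data: the first read hits the database, the second is served from cache
cache = {}
db = {"user:1": "Alice"}
first = cache_aside_get("user:1", cache, db.__getitem__)
second = cache_aside_get("user:1", cache, db.__getitem__)
```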
Which of the following AWS Support plans is the MOST cost-effective when getting enhanced technical support by Cloud Support Engineers? Basic Developer Enterprise Business
Explanation Correct option: Business AWS recommends Business Support if you have production workloads on AWS and want 24x7 phone, email and chat access to technical support and architectural guidance in the context of your specific use-cases. You get full access to AWS Trusted Advisor Best Practice Checks. It is also the cheapest support plan that provides enhanced technical support by Cloud Support Engineers. AWS Business Support Plan Offerings: Exam Alert: Please review the differences between the Developer, Business, and Enterprise support plans as you can expect at least a couple of questions on the exam: via - https://aws.amazon.com/premiumsupport/plans/ Incorrect options: Developer - AWS recommends Developer Support if you are testing or doing early development on AWS and want the ability to get technical support during business hours as well as general architectural guidance as you build and test. It provides enhanced technical support, but by Cloud Support Associates. Basic - A basic support plan is included for all AWS customers. It does not provide enhanced technical support. Enterprise - AWS Enterprise Support provides customers with concierge-like service where the main focus is helping the customer achieve their outcomes and find success in the cloud. With Enterprise Support, you get 24x7 technical support from high-quality engineers, tools and technology to automatically manage the health of your environment, consultative architectural guidance delivered in the context of your applications and use-cases, and a designated Technical Account Manager (TAM) to coordinate access to proactive/preventative programs and AWS subject matter experts. It provides enhanced technical support by Cloud Support Engineers, but is more expensive than the Business support plan. References: https://aws.amazon.com/premiumsupport/plans/ https://aws.amazon.com/premiumsupport/plans/business/
A production company with predictable usage would like to reduce the cost of its Amazon EC2 instances by using reserved instances. Which of the following length terms are available for Amazon EC2 reserved instances? (Select TWO) 1 year 2 years 5 years 6 months 3 years
Explanation Correct option: 1 year 3 years Reserved Instances provide you with a significant discount (up to 75%) compared to On-Demand instance pricing. Besides, when Reserved Instances are assigned to a specific Availability Zone, they provide a capacity reservation, giving you additional confidence in your ability to launch instances when you need them. Standard and Convertible reserved instances can be purchased for a 1-year or 3-year term. EC2 Pricing Options Overview: via - https://aws.amazon.com/ec2/pricing/ Incorrect options: 6 months - It is not possible to reserve instances for 6 months. 5 years - It is not possible to reserve instances for 5 years. 2 years - It is not possible to reserve instances for 2 years. Reference: https://aws.amazon.com/ec2/pricing/
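The value of a Reserved Instance term comes straight from the hourly-rate difference over the term. A small sketch of that arithmetic, with hypothetical rates (actual discounts vary by instance type, term, and payment option):

```python
def reserved_savings(on_demand_hourly, reserved_hourly, years):
    """Total saved over a Reserved Instance term vs. running On-Demand
    continuously for the same period (ignoring leap days)."""
    hours = years * 365 * 24
    return round((on_demand_hourly - reserved_hourly) * hours, 2)

# hypothetical rates: $0.10/hr on-demand vs $0.06/hr reserved, over a 3-year term
savings = reserved_savings(0.10, 0.06, 3)
```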
A company needs to keep sensitive data in its own data center due to compliance but would still like to deploy resources using AWS. Which Cloud deployment model does this refer to? Public Cloud On-premises Hybrid Cloud Private Cloud
Explanation Correct option: Hybrid Cloud A hybrid deployment is a way to connect infrastructure and applications between cloud-based resources and existing resources that are not located in the cloud. The most common method of hybrid deployment is between the cloud and existing on-premises infrastructure to extend, and grow, an organization's infrastructure into the cloud while connecting cloud resources to the internal system. Overview of Cloud Computing Deployment Models: via - https://aws.amazon.com/types-of-cloud-computing/ Incorrect options: Public Cloud - A public cloud-based application is fully deployed in the cloud and all parts of the application run in the cloud. Applications in the cloud have either been created in the cloud or have been migrated from an existing infrastructure to take advantage of the benefits of cloud computing. Private Cloud - Unlike a Public cloud, a Private cloud enables businesses to access IT services that are provisioned and customized according to their precise needs. The business can access these IT services securely and reliably over a private IT infrastructure. On-premises - This is not a cloud deployment model. When an enterprise opts for on-premises, it needs to create, upgrade, and scale the on-premise IT infrastructure by investing in sophisticated hardware, compatible software, and robust services. Also, the business needs to deploy dedicated IT staff to upkeep, scale, and manage the on-premise infrastructure continuously. Reference: https://aws.amazon.com/what-is-cloud-computing/
A brand new startup would like to remove its need to manage the underlying infrastructure and focus on the deployment and management of its applications. Which type of Cloud Computing does this refer to? Software as a Service (SaaS) On-premises Infrastructure as a Service (IaaS) Platform as a Service (PaaS)
Explanation Correct option: Platform as a Service (PaaS) Cloud Computing can be broadly divided into three types - Infrastructure as a Service (IaaS), Platform as a Service (PaaS), Software as a Service (SaaS). PaaS removes the need to manage underlying infrastructure (usually hardware and operating systems) and allows you to focus on the deployment and management of your applications. You don't need to worry about resource procurement, capacity planning, software maintenance, patching, or any of the other undifferentiated heavy lifting involved in running your application. Please review this overview of the types of Cloud Computing: via - https://aws.amazon.com/types-of-cloud-computing/ Incorrect options: Infrastructure as a Service (IaaS) - IaaS contains the basic building blocks for cloud IT. It typically provides access to networking features, computers (virtual or on dedicated hardware), and data storage space. IaaS gives the highest level of flexibility and management control over IT resources. Software as a Service (SaaS) - SaaS provides you with a complete product that is run and managed by the service provider. With a SaaS offering, you don't have to think about how the service is maintained or how the underlying infrastructure is managed. You only need to think about how you will use that particular software. Amazon Rekognition is an example of a SaaS service. On-premises - When an enterprise opts for on-premises, it needs to create, upgrade, and scale the on-premise IT infrastructure by investing in sophisticated hardware, compatible software, and robust services. Also, the business needs to deploy dedicated IT staff to upkeep, scale, and manage the on-premise infrastructure continuously. Reference: https://aws.amazon.com/types-of-cloud-computing/
Which Amazon EC2 Auto Scaling feature can help with fault tolerance? Replacing unhealthy EC2 instances Having the right amount of computing capacity Lower cost by adjusting the number of EC2 instances Distributing load to EC2 instances
Explanation Correct option: Replacing unhealthy EC2 instances Amazon EC2 Auto Scaling helps you maintain application availability and allows you to automatically add or remove EC2 instances according to conditions you define. You can use the fleet management features of EC2 Auto Scaling to maintain the health and availability of your fleet. You can also use the dynamic and predictive scaling features of EC2 Auto Scaling to add or remove EC2 instances. Amazon EC2 Auto Scaling can detect when an instance is unhealthy, terminate it, and replace it with a new one. Incorrect options: Lower cost by adjusting the number of EC2 instances - Amazon EC2 Auto Scaling adds instances only when needed, and can scale across purchase options to optimize performance and cost. However, this will not help with fault tolerance. Distributing load to EC2 instances - Even though this helps with fault tolerance and is often used with Amazon EC2 Auto Scaling, it is a feature of Elastic Load Balancing (ELB) and not Amazon EC2 Auto Scaling. Elastic Load Balancing automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, and Lambda functions. It can handle the varying load of your application traffic in a single Availability Zone or across multiple Availability Zones. Having the right amount of computing capacity - Amazon EC2 Auto Scaling ensures that your application always has the right amount of compute, so your application can handle the workload. Reference: https://aws.amazon.com/ec2/autoscaling/
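The fault-tolerance behavior described above is a reconciliation loop: compare the healthy fleet against the desired capacity, terminate what is unhealthy, and launch replacements. A simplified sketch of one such pass (instance ids and states are hypothetical, and real Auto Scaling works asynchronously against health checks):

```python
def reconcile_fleet(instances, desired_count, next_id):
    """One simplified fleet-management pass: drop unhealthy instances,
    then launch replacements until the desired capacity is restored.

    `instances` maps an instance id to 'healthy' or 'unhealthy'.
    """
    fleet = {i: s for i, s in instances.items() if s == "healthy"}
    while len(fleet) < desired_count:
        fleet[f"i-{next_id}"] = "healthy"  # stand-in for launching a new instance
        next_id += 1
    return fleet

fleet = reconcile_fleet({"i-1": "healthy", "i-2": "unhealthy"}, desired_count=2, next_id=3)
```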
According to the Well-Architected Framework, which of the following statements are recommendations in the Operational Excellence pillar? (Select two) Enable traceability Use serverless architectures Anticipate failure Automatically recover from failure Make frequent, small, reversible changes
Explanation Correct option: Anticipate failure Make frequent, small, reversible changes The Operational Excellence pillar includes the ability to run and monitor systems to deliver business value and to continually improve supporting processes and procedures. Perform "pre-mortem" exercises to identify potential sources of failure so that they can be removed or mitigated. Test your failure scenarios and validate your understanding of their impact. Test your response procedures to ensure that they are effective, and that teams are familiar with their execution. Set up regular game days to test workloads and team responses to simulated events. Design workloads to allow components to be updated regularly. Make changes in small increments that can be reversed if they fail (without affecting customers when possible). The AWS Well-Architected Framework helps you understand the pros and cons of decisions you make while building systems on AWS. By using the Framework you will learn architectural best practices for designing and operating reliable, secure, efficient, and cost-effective systems in the cloud. It provides a way for you to consistently measure your architectures against best practices and identify areas for improvement. The AWS Well-Architected Framework is based on six pillars — Operational Excellence, Security, Reliability, Performance Efficiency, Cost Optimization and Sustainability. Overview of the six pillars of the Well-Architected Framework: via - https://aws.amazon.com/architecture/well-architected/ Incorrect options: Enable traceability - Monitor, alert, and audit actions and changes to your environment in real-time. Integrate logs and metrics with systems to automatically respond and take action. It is a design principle of the Security pillar. Automatically recover from failure - By monitoring a system for key performance indicators (KPIs), you can trigger automation when a threshold is breached. 
This allows for automatic notification and tracking of failures, and for automated recovery processes that work around or repair the failure. With more sophisticated automation, it's possible to anticipate and remediate failures before they occur. It is a design principle of the Reliability pillar. Use serverless architectures - In the cloud, serverless architectures remove the need for you to run and maintain servers to carry out traditional compute activities. For example, storage services can act as static websites, removing the need for web servers, and event services can host your code for you. This not only removes the operational burden of managing these servers, but also can lower transactional costs because these managed services operate at cloud scale. It is a design principle of the Performance Efficiency pillar. Reference: https://wa.aws.amazon.com/index.en.html
Adding more CPU/RAM to an Amazon EC2 instance represents which of the following? Horizontal scaling Managing increasing volumes of data Loose coupling Vertical scaling
Explanation

Correct option: Vertical scaling

A "vertically scalable" system is constrained to running its processes on only one computer; in such systems, the only way to increase performance is to add more resources to that one computer in the form of faster (or more) CPUs, memory, or storage.

Incorrect options:

Horizontal scaling - A "horizontally scalable" system is one that can increase capacity by adding more computers to the system.

Managing increasing volumes of data - Traditional data storage and analytics tools can no longer provide the agility and flexibility required to deliver relevant business insights. That's why many organizations are shifting to a data lake architecture. A data lake is an architectural approach that allows you to store massive amounts of data in a central location so that it's readily available to be categorized, processed, analyzed, and consumed by diverse groups within your organization.

Loose coupling - As application complexity increases, a desirable attribute of an IT system is that it can be broken into smaller, loosely coupled components. This means that IT systems should be designed in a way that reduces interdependencies: a change or a failure in one component should not cascade to other components.

Reference: https://wa.aws.amazon.com/wat.concept.horizontal-scaling.en.html
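In practice, vertical scaling of an EC2 instance means stopping it and changing its instance type. A minimal sketch of the request parameters involved (the instance ID and types are made-up placeholders; with boto3 you would pass this dict to `ec2.modify_instance_attribute` after stopping the instance):

```python
# Sketch: vertical scaling = changing the instance type of a stopped instance.
# The instance ID and instance types below are placeholders; in practice you
# would call boto3.client("ec2").modify_instance_attribute(**params) after
# stopping the instance, then start it again.
def build_resize_request(instance_id: str, new_type: str) -> dict:
    """Build ModifyInstanceAttribute parameters for a vertical-scaling resize."""
    return {
        "InstanceId": instance_id,
        "InstanceType": {"Value": new_type},  # e.g. m5.large -> m5.2xlarge
    }

params = build_resize_request("i-0abc1234567890def", "m5.2xlarge")
```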
A Cloud Practitioner would like to get operational insights into their resources to quickly identify any issues that might impact applications using those resources. Which AWS service can help with this task? AWS Personal Health Dashboard AWS Trusted Advisor AWS Systems Manager Amazon Inspector
Explanation

Correct option: AWS Systems Manager

AWS Systems Manager allows you to centralize operational data from multiple AWS services and automate tasks across your AWS resources. You can create logical groups of resources such as applications, different layers of an application stack, or production versus development environments. With Systems Manager, you can select a resource group and view its recent API activity, resource configuration changes, related notifications, operational alerts, software inventory, and patch compliance status. You can also take action on each resource group depending on your operational needs. Systems Manager provides a central place to view and manage your AWS resources, so you can have complete visibility and control over your operations.

How AWS Systems Manager works: via - https://aws.amazon.com/systems-manager/

Incorrect options:

Amazon Inspector - Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. It is not used to get operational insights into AWS resources.

AWS Personal Health Dashboard - AWS Personal Health Dashboard provides alerts and remediation guidance when AWS is experiencing events that might affect you. It is not used to get operational insights into AWS resources.

AWS Trusted Advisor - AWS Trusted Advisor is an online resource to help you reduce cost, increase performance, and improve security by optimizing your AWS environment. Trusted Advisor provides real-time guidance to help you provision your resources following AWS best practices. It is not used to get operational insights into AWS resources.

Reference: https://aws.amazon.com/systems-manager/
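The logical resource groups described above are typically defined by a tag query. The sketch below assembles that kind of tag-based group definition (the group name and tag values are invented for illustration; with boto3, a dict like this would be passed to the Resource Groups `CreateGroup` API):

```python
import json

# Sketch: a tag-based resource-group definition of the kind Systems Manager
# operates on. The group name and tag values are illustrative placeholders;
# in practice this dict would go to
# boto3.client("resource-groups").create_group(**req).
def build_group_request(name: str, tag_key: str, tag_values: list) -> dict:
    query = {
        "ResourceTypeFilters": ["AWS::AllSupported"],
        "TagFilters": [{"Key": tag_key, "Values": tag_values}],
    }
    return {
        "Name": name,
        # The resource query itself is embedded as a JSON string.
        "ResourceQuery": {"Type": "TAG_FILTERS_1_0", "Query": json.dumps(query)},
    }

req = build_group_request("prod-web-tier", "Environment", ["production"])
```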
A research lab needs to be notified in case of a configuration change for security and compliance reasons. Which AWS service can assist with this task? AWS Config AWS Trusted Advisor AWS Secrets Manager Amazon Inspector
Explanation

Correct option: AWS Config

AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. Config continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations. With Config, you can review changes in configurations and relationships between AWS resources, dive into detailed resource configuration histories, and determine your overall compliance against the configurations specified in your internal guidelines. This enables you to simplify compliance auditing, security analysis, change management, and operational troubleshooting.

How AWS Config works: via - https://aws.amazon.com/config/

Incorrect options:

Amazon Inspector - Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on your Amazon EC2 instances. Amazon Inspector automatically assesses applications for exposure, vulnerabilities, and deviations from best practices. It cannot notify you of configuration changes.

AWS Trusted Advisor - AWS Trusted Advisor is an online resource to help you reduce cost, increase performance, and improve security by optimizing your AWS environment. Trusted Advisor provides real-time guidance to help you provision your resources following AWS best practices. It cannot notify you of configuration changes.

AWS Secrets Manager - AWS Secrets Manager helps you protect secrets needed to access your applications, services, and IT resources. The service enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. It cannot notify you of configuration changes.

Reference: https://aws.amazon.com/config/
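AWS Config sends change notifications through a delivery channel that references an S3 bucket (for configuration snapshots) and an SNS topic (for notifications). A minimal sketch of such a delivery-channel definition (the bucket name, account ID, and topic name are placeholders; with boto3 this dict would be the `DeliveryChannel` argument to `put_delivery_channel`):

```python
# Sketch: an AWS Config delivery channel that sends configuration-change
# notifications to an SNS topic. The bucket name, account ID, and topic name
# are placeholders; in practice you would call
# boto3.client("config").put_delivery_channel(DeliveryChannel=channel).
def build_delivery_channel(bucket: str, topic_arn: str) -> dict:
    return {
        "name": "default",
        "s3BucketName": bucket,    # where configuration snapshots are delivered
        "snsTopicARN": topic_arn,  # where change notifications are published
    }

channel = build_delivery_channel(
    "my-config-bucket",
    "arn:aws:sns:us-east-1:123456789012:config-changes",
)
```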
Which of the following statements is CORRECT regarding the scope of an Amazon Virtual Private Cloud (VPC)? A VPC spans all Availability Zones (AZs) in all regions A VPC spans all Availability Zones (AZs) within a region A VPC spans all subnets in all regions A VPC spans all regions within an Availability Zone (AZ)
Explanation

Correct option: A VPC spans all Availability Zones (AZs) within a region

Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways. A VPC spans all Availability Zones (AZs) within a region.

Incorrect options:

A VPC spans all subnets in all regions - A VPC is located within a region.

A VPC spans all Availability Zones (AZs) in all regions - A VPC is located within a region.

A VPC spans all regions within an Availability Zone (AZ) - AWS has the concept of a Region, which is a physical location around the world where AWS clusters data centers. Each AWS Region consists of multiple (two or more) isolated and physically separate AZs within a geographic area. An Availability Zone (AZ) is one or more discrete data centers with redundant power, networking, and connectivity in an AWS Region. Therefore, regions cannot be within an Availability Zone. Moreover, a VPC is located within a region.

AWS Regions and Availability Zones Overview: via - https://aws.amazon.com/about-aws/global-infrastructure/regions_az/

Reference: https://aws.amazon.com/vpc/
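Because the VPC spans every AZ in its region, you carve its address range into subnets, and each subnet lives in exactly one AZ. A sketch of that layout (the CIDR blocks and AZ names are illustrative; with boto3, each subnet dict would be passed to `create_subnet` for the VPC):

```python
import ipaddress

# Sketch: one VPC per region, one subnet per AZ inside it.
# CIDR blocks and AZ names are illustrative placeholders.
VPC_CIDR = "10.0.0.0/16"  # the VPC's range spans all AZs in the region
AZS = ["us-east-1a", "us-east-1b", "us-east-1c"]

# Give each AZ its own /24 slice of the VPC's address space.
subnets = [
    {"AvailabilityZone": az, "CidrBlock": f"10.0.{i}.0/24"}
    for i, az in enumerate(AZS)
]

# Sanity check: every subnet CIDR must fall inside the VPC CIDR.
vpc_net = ipaddress.ip_network(VPC_CIDR)
assert all(
    ipaddress.ip_network(s["CidrBlock"]).subnet_of(vpc_net) for s in subnets
)
```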
A production company would like to establish an AWS managed VPN service between its on-premises network and AWS. Which item needs to be set up on the company's side? A VPC endpoint interface A security group A customer gateway A virtual private gateway
Explanation

Correct option: A customer gateway

A customer gateway device is a physical or software appliance on your side of a Site-to-Site VPN connection. You or your network administrator must configure the device to work with the Site-to-Site VPN connection. You can enable access to your remote network from your VPC by creating an AWS Site-to-Site VPN connection and configuring routing to pass traffic through the connection.

Schema: via - https://docs.aws.amazon.com/vpn/latest/s2svpn/your-cgw.html

Incorrect options:

A security group - A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. It is not a component of a connection between an on-premises network and AWS.

A VPC endpoint interface - An interface VPC endpoint (interface endpoint) enables you to connect to services powered by AWS PrivateLink. It is not a component of a connection between an on-premises network and AWS.

A virtual private gateway - A virtual private gateway is the VPN concentrator on the Amazon side of a Site-to-Site VPN connection; it is managed by AWS, not set up on the company's side.

References: https://docs.aws.amazon.com/vpn/latest/s2svpn/your-cgw.html https://docs.aws.amazon.com/vpn/latest/s2svpn/VPC_VPN.html
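Registering the on-premises device with AWS requires its static public IP and, for dynamic routing, its BGP ASN. A sketch of those parameters (the IP and ASN are placeholders; with boto3 they would be passed to `ec2.create_customer_gateway`):

```python
# Sketch: parameters for registering an on-premises VPN device as a
# customer gateway. The public IP and ASN below are placeholders; in
# practice you would call
# boto3.client("ec2").create_customer_gateway(**cgw_params).
def build_customer_gateway(public_ip: str, bgp_asn: int) -> dict:
    return {
        "Type": "ipsec.1",      # the supported Site-to-Site VPN type
        "PublicIp": public_ip,  # static, internet-routable IP of the device
        "BgpAsn": bgp_asn,      # ASN used for dynamic (BGP) routing
    }

cgw_params = build_customer_gateway("203.0.113.12", 65000)
```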
Which AWS tool/service will help you define your cloud infrastructure using popular programming languages such as Python and JavaScript? AWS CodeBuild AWS CloudFormation AWS Cloud Development Kit (CDK) AWS Elastic Beanstalk
Explanation

Correct option: AWS Cloud Development Kit (CDK)

The AWS Cloud Development Kit (AWS CDK) is an open-source software development framework to define your cloud application resources using familiar programming languages. AWS CDK uses the familiarity and expressive power of programming languages for modeling your applications. It provides you with high-level components called constructs that preconfigure cloud resources with proven defaults, so you can build cloud applications without needing to be an expert. AWS CDK provisions your resources in a safe, repeatable manner through AWS CloudFormation. It also enables you to compose and share your own custom constructs that incorporate your organization's requirements, helping you start new projects faster. In short, you use the AWS CDK framework to author AWS CDK projects, which are executed to generate CloudFormation templates.

How CDK works: via - https://aws.amazon.com/cdk/

Incorrect options:

AWS Elastic Beanstalk - AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, etc. You simply upload your code in a programming language of your choice, and Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, and auto-scaling to application health monitoring.

AWS CloudFormation - AWS CloudFormation is a service that gives developers and businesses an easy way to create a collection of related AWS and third-party resources, and provision and manage them in an orderly and predictable fashion. AWS CloudFormation is designed to allow resource lifecycles to be managed repeatably, predictably, and safely, while allowing for automatic rollbacks, automated state management, and management of resources across accounts and regions. AWS CDK lets you write the same definitions in higher-level languages and converts them into CloudFormation templates.

AWS CodeBuild - AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages that are ready to deploy. With CodeBuild, you don't need to provision, manage, and scale your own build servers. CodeBuild scales continuously and processes multiple builds concurrently, so your builds are not left waiting in a queue.

Reference: https://aws.amazon.com/cdk/
A Cloud Practitioner would like to deploy identical resources across all regions and accounts using templates while estimating costs. Which AWS service can assist with this task? AWS CloudFormation AWS CodeDeploy Amazon Lightsail AWS Directory Service
Explanation

Correct option: AWS CloudFormation

AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion. You can use the AWS CloudFormation sample templates or create your own templates to describe your AWS resources, and any associated dependencies or runtime parameters, required to run your application. This provides a single source of truth for all your resources and helps you to standardize infrastructure components used across your organization, enabling configuration compliance and faster troubleshooting. CloudFormation templates also allow you to estimate the cost of your resources.

How AWS CloudFormation works: via - https://aws.amazon.com/cloudformation/

Incorrect options:

AWS Directory Service - AWS Directory Service for Microsoft Active Directory, also known as AWS Managed Microsoft AD, enables your directory-aware workloads and AWS resources to use managed Active Directory in the AWS Cloud. It is not used to deploy resources.

Amazon Lightsail - Amazon Lightsail is designed to be the easiest way to launch and manage a virtual private server with AWS. It is not well suited to deploying more complex resources, whereas CloudFormation is.

AWS CodeDeploy - AWS CodeDeploy is a service that automates code deployments to any instance, including EC2 instances and instances running on-premises. Unlike CloudFormation, it does not deal with infrastructure configuration and orchestration.

Reference: https://aws.amazon.com/cloudformation/
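A CloudFormation template is just a declarative JSON (or YAML) document describing resources. A minimal sketch of one, built as a Python dict ("MyBucket" is an invented logical ID; deploying it would mean serializing this dict and passing it as the template body to CloudFormation's `CreateStack`):

```python
import json

# Sketch: a minimal CloudFormation template as a Python dict.
# "MyBucket" is an invented logical ID; deploying would pass
# json.dumps(template) as the TemplateBody to CloudFormation's CreateStack.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Minimal example: one versioned S3 bucket",
    "Resources": {
        "MyBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"VersioningConfiguration": {"Status": "Enabled"}},
        }
    },
}

template_body = json.dumps(template, indent=2)
```

Because the template is declarative, the same body can be deployed repeatedly across regions and accounts, which is what makes CloudFormation fit the question's requirement.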
Which of the following AWS services can be used to generate, use, and manage encryption keys on the AWS Cloud? Amazon Inspector Amazon GuardDuty AWS Secrets Manager AWS CloudHSM
Explanation

Correct option: AWS CloudHSM

The AWS CloudHSM service helps you meet corporate, contractual, and regulatory compliance requirements for data security by using dedicated Hardware Security Module (HSM) instances within the AWS Cloud. CloudHSM allows you to securely generate, store, and manage cryptographic keys used for data encryption in such a way that the keys are accessible only by you.

How AWS CloudHSM works: via - https://aws.amazon.com/cloudhsm/

Incorrect options:

Amazon Inspector - Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. Amazon Inspector automatically assesses applications for exposure, vulnerabilities, and deviations from best practices. It cannot be used to generate, use, and manage encryption keys.

Amazon GuardDuty - Amazon GuardDuty is a threat detection service that continuously monitors for malicious or unauthorized behavior to help you protect your AWS accounts and workloads. It cannot be used to generate, use, and manage encryption keys.

AWS Secrets Manager - AWS Secrets Manager helps you protect secrets needed to access your applications, services, and IT resources. The service enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. It encrypts secrets using keys it obtains from AWS KMS; it is not itself used to generate and manage encryption keys.

Reference: https://aws.amazon.com/cloudhsm/
Which AWS service can be used to view the most comprehensive billing details for the past month? AWS Pricing Calculator AWS Budgets AWS Cost Explorer AWS Cost and Usage Reports
Explanation

Correct option: AWS Cost and Usage Reports

The AWS Cost and Usage Reports (AWS CUR) contains the most comprehensive set of cost and usage data available. You can use Cost and Usage Reports to publish your AWS billing reports to an Amazon Simple Storage Service (Amazon S3) bucket that you own. You can receive reports that break down your costs by the hour or month, by product or product resource, or by tags that you define yourself.

AWS Cost and Usage Reports Overview: via - https://docs.aws.amazon.com/cur/latest/userguide/what-is-cur.html

Incorrect options:

AWS Budgets - AWS Budgets gives you the ability to set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount. You can also use AWS Budgets to set reservation utilization or coverage targets and receive alerts when your utilization drops below the threshold you define. Budgets can be created at the monthly, quarterly, or yearly level, and you can customize the start and end dates. You can further refine your budget to track costs associated with multiple dimensions, such as AWS service, linked account, tag, and others. AWS Budgets cannot provide billing details for the past month.

AWS Cost Explorer - AWS Cost Explorer has an easy-to-use interface that lets you visualize, understand, and manage your AWS costs and usage over time. AWS Cost Explorer includes a default report that helps you visualize the costs and usage associated with your top five cost-accruing AWS services, and gives you a detailed breakdown of all services in the table view. The reports let you adjust the time range to view historical data going back up to twelve months to gain an understanding of your cost trends. AWS Cost Explorer cannot provide billing details as granular as Cost and Usage Reports can.

AWS Pricing Calculator - AWS Pricing Calculator lets you explore AWS services and create an estimate for the cost of your use cases on AWS. You can model your solutions before building them, explore the price points and calculations behind your estimate, and find the available instance types and contract terms that meet your needs. This enables you to make informed decisions about using AWS. You can plan your AWS costs and usage, or price out setting up a new set of instances and services. AWS Pricing Calculator cannot provide billing details for the past month.

Exam Alert:

Please review the differences between "AWS Cost and Usage Reports" and "AWS Cost Explorer". Think of "AWS Cost and Usage Reports" as a cost management tool providing the most detailed cost and usage data for your AWS account. It can deliver reports that break down your costs by the hour into your S3 bucket. On the other hand, "AWS Cost Explorer" is more of a high-level cost management tool that helps you visualize the costs and usage associated with your AWS account.

"AWS Cost Explorer" vs "AWS Cost and Usage Reports": via - https://aws.amazon.com/aws-cost-management/aws-cost-explorer/ via - https://aws.amazon.com/aws-cost-management/aws-cost-and-usage-reporting/

References: https://docs.aws.amazon.com/cur/latest/userguide/what-is-cur.html https://aws.amazon.com/aws-cost-management/aws-cost-explorer/ https://aws.amazon.com/aws-cost-management/aws-cost-and-usage-reporting/
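The Cost Explorer API illustrates the "high-level" side of the comparison above. A sketch of the request for one month's costs grouped by service (the dates are placeholders; with boto3 this dict would be passed to the Cost Explorer `GetCostAndUsage` call):

```python
# Sketch: a Cost Explorer GetCostAndUsage request for one month of costs,
# grouped by service. The dates are placeholders; in practice you would call
# boto3.client("ce").get_cost_and_usage(**request).
def build_monthly_cost_request(start: str, end: str) -> dict:
    return {
        "TimePeriod": {"Start": start, "End": end},  # End date is exclusive
        "Granularity": "MONTHLY",
        "Metrics": ["UnblendedCost"],
        "GroupBy": [{"Type": "DIMENSION", "Key": "SERVICE"}],
    }

request = build_monthly_cost_request("2024-01-01", "2024-02-01")
```

For the most comprehensive, line-item-level data, you would instead configure a Cost and Usage Report delivered to S3, as the explanation notes.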
A start-up would like to monitor its cost on the AWS Cloud and would like to choose an optimal Savings Plan. As a Cloud Practitioner, which AWS service would you use? AWS Budgets AWS Cost Explorer AWS Pricing Calculator AWS Cost and Usage Reports
Explanation

Correct option: AWS Cost Explorer

AWS Cost Explorer lets you explore your AWS costs and usage at both a high level and a detailed level of analysis, empowering you to dive deeper using several filtering dimensions (e.g., AWS service, Region, linked account). AWS Cost Explorer also gives you access to a set of default reports to help you get started, while also allowing you to create custom reports from scratch. Customers can receive Savings Plans recommendations at the member (linked) account level, in addition to the existing AWS organization-level recommendations in AWS Cost Explorer.

Incorrect options:

AWS Cost and Usage Reports - The AWS Cost and Usage Report is a single location for accessing comprehensive information about your AWS costs and usage. It does not provide Savings Plans recommendations.

AWS Pricing Calculator - AWS Pricing Calculator lets you explore AWS services and create an estimate for the cost of your use cases on AWS. It does not provide Savings Plans recommendations.

AWS Budgets - AWS Budgets gives you the ability to set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount. You can also use AWS Budgets to set RI utilization or coverage targets and receive alerts when your utilization drops below the threshold you define. It does not provide Savings Plans recommendations.

Exam Alert:

Please review the differences between "AWS Cost and Usage Reports" and "AWS Cost Explorer". Think of "AWS Cost and Usage Reports" as a cost management tool providing the most detailed cost and usage data for your AWS account. It can deliver reports that break down your costs by the hour into your S3 bucket. On the other hand, "AWS Cost Explorer" is more of a high-level cost management tool that helps you visualize the costs and usage associated with your AWS account.

"AWS Cost Explorer" vs "AWS Cost and Usage Reports": via - https://aws.amazon.com/aws-cost-management/aws-cost-explorer/ 
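Savings Plans recommendations are exposed through the same Cost Explorer API family. A sketch of the request parameters (the option values shown are common choices, not the only valid ones; with boto3 they would be passed to the Cost Explorer `GetSavingsPlansPurchaseRecommendation` call):

```python
# Sketch: parameters for a Savings Plans purchase recommendation via the
# Cost Explorer API. The values shown are common choices, not the only ones;
# in practice you would call
# boto3.client("ce").get_savings_plans_purchase_recommendation(**params).
params = {
    "SavingsPlansType": "COMPUTE_SP",       # the most flexible plan type
    "TermInYears": "ONE_YEAR",              # commitment length
    "PaymentOption": "NO_UPFRONT",          # pay nothing up front
    "LookbackPeriodInDays": "THIRTY_DAYS",  # usage window to analyze
}
```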