AWS Certified Cloud Practitioner Exam Study Questions


AWS offers a free-tier option for a duration of ____ months.

12

By default, at what interval does CloudWatch provide free metric analysis?

5 minutes - By default, CloudWatch analyzes AWS resources for metrics every 5 minutes for free.

How many VPCs can an EC2 instance be attached to at a time?

1 - An EC2 instance can only be attached to 1 VPC at a time.

How many VPCs are created by default in a region?

1 - By default, when an AWS account is created, each region will get 1 VPC.

Which of the following is not a method of getting or using MFA codes?

Single sign-on

Which of the following is not a rule that can be managed within the password policy management of IAM?

Username length

Which of the following are benefits of CloudFront?

Integrates with Route 53, high transfer speeds, global content caching, and DDoS protection

An AWS customer has used one Amazon Linux instance for 2 hours, 5 minutes and 9 seconds, and one Windows instance for 4 hours, 23 minutes and 7 seconds. How much time will the customer be billed for?

2 hours, 5 minutes and 9 seconds for the Linux instance and 5 hours for the Windows instance - With per-second billing in EC2, you pay for only what you use. It removes the cost of unused minutes and seconds in an hour from the bill, so you can focus on improving your applications instead of maximizing usage to the hour. Per-second billing is available for instances launched from Amazon Linux or Ubuntu AMIs. For other instances, including Windows, each partial instance-hour consumed is billed as a full hour. In this case, the customer will be charged for 2 hours, 5 minutes and 9 seconds for the Linux instance, and 5 hours for the Windows instance.
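
As a quick arithmetic check (a sketch only; the hourly rates below are placeholders, not real AWS prices), the two billing models can be reproduced like this:

```python
import math

# Hypothetical hourly rates (placeholders, not actual AWS pricing).
linux_rate_per_hour = 0.10
windows_rate_per_hour = 0.20

# Linux/Ubuntu: per-second billing, so the exact runtime is charged.
linux_seconds = 2 * 3600 + 5 * 60 + 9                     # 2h 5m 9s
linux_cost = linux_rate_per_hour * linux_seconds / 3600

# Windows: each partial instance-hour is billed as a full hour.
windows_seconds = 4 * 3600 + 23 * 60 + 7                  # 4h 23m 7s
windows_hours_billed = math.ceil(windows_seconds / 3600)  # rounds up to 5 hours
windows_cost = windows_rate_per_hour * windows_hours_billed

print(f"Linux billed for {linux_seconds} seconds: ${linux_cost:.4f}")
print(f"Windows billed for {windows_hours_billed} full hours: ${windows_cost:.2f}")
```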

What is a NACL?

A firewall on the subnet level - A NACL is a firewall on the subnet level.

What is a VPC?

A logically isolated section of the AWS Cloud - A VPC is a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define.

Which of the following are true statements about public subnets and private subnets?

A public subnet has a route table pointing to an Internet Gateway; a private subnet does not have a route table pointing to an Internet Gateway.

Which of the following is descriptive of an Internet Gateway?

A route to and from the internet - An Internet Gateway is a route to the internet for instances within a VPC.

Who from the following will get the largest discount?

A user who chooses to buy Reserved, Standard, All Upfront instances - Reserved Instance types include Standard RIs, which provide the most significant discount (up to 75% off On-Demand) and are best suited for steady-state usage, and Convertible RIs, which provide a smaller discount (up to 54% off On-Demand) plus the capability to change the attributes of the RI as long as the exchange results in the creation of Reserved Instances of equal or greater value. Standard RIs therefore provide a larger discount than Convertible RIs. Remember that when you buy Reserved Instances, the larger the upfront payment, the greater the discount: the All Upfront option provides the largest discount, the Partial Upfront option provides a smaller discount than All Upfront, and the No Upfront option provides the smallest discount.

In your on-premises environment, you can create as many virtual servers as you need from a single template. What can you use to perform the same in AWS?

AMI - An Amazon Machine Image (AMI) is a template that contains a software configuration (for example, an operating system, an application server, and applications). This pre-configured template saves time and avoids errors when configuring settings to create new instances. You specify an AMI when you launch an instance, and you can launch as many instances from the AMI as you need. You can also launch instances from as many different AMIs as you need.

How can AWS services be accessed?

API access and AWS Management Console - An API (Application Programming Interface) allows access to AWS services programmatically via the AWS SDKs (Software Development Kits). The AWS Management Console allows access to AWS services through a GUI (Graphical User Interface); this is accomplished by logging into an AWS account with a username and password.
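
For example, a minimal sketch of programmatic (API) access using the AWS SDK for Python (boto3), assuming credentials are already configured on the machine:

```python
import boto3

# Programmatic access: call the S3 API through the SDK and list bucket names.
s3 = boto3.client("s3")
response = s3.list_buckets()

for bucket in response["Buckets"]:
    print(bucket["Name"])
```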

Who is responsible for scaling the DynamoDB databases?

AWS - DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB enables customers to offload the administrative burdens of operating and scaling distributed databases to AWS so that they don't have to worry about hardware provisioning, setup and configuration, throughput capacity planning, replication, software patching, or cluster scaling.

A developer needs to set up an SSL security certificate for a client's eCommerce website in order to use the HTTPS protocol. Which of the following AWS services can be used to deploy the required SSL server certificates? (Choose TWO)

AWS ACM & AWS Identity & Access Management - To enable HTTPS connections to your website or application in AWS, you need an SSL/TLS server certificate. You can use a server certificate provided by AWS Certificate Manager (ACM) or one that you obtained from an external provider. You can use ACM or IAM to store and deploy server certificates. Use IAM as a certificate manager only when you must support HTTPS connections in a region that is not supported by ACM. IAM supports deploying server certificates in all regions, but you must obtain your certificate from an external provider for use with AWS. Amazon Route 53 is used to register domain names or use your own domain name to route your end users to Internet applications. Route 53 is not responsible for creating SSL certificates.

Which AWS service enables you to quickly purchase and deploy SSL/TLS certificates?

AWS ACM - AWS Certificate Manager (AWS ACM) is a service that lets you easily provision, manage, and deploy public and private Secure Sockets Layer/Transport Layer Security (SSL/TLS) certificates for use with AWS services and your internal connected resources. SSL/TLS certificates are used to secure network communications and establish the identity of websites over the Internet as well as resources on private networks. AWS Certificate Manager removes many of the time-consuming and error-prone steps to acquire an SSL/TLS certificate for your website or application. With a few clicks in the AWS Management Console, you can request a trusted SSL/TLS certificate from AWS. Once the certificate is created, AWS Certificate Manager takes care of deploying certificates to help you enable SSL/TLS for your website or application.
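
As an illustration (a sketch using boto3; the domain name is a placeholder), requesting a public certificate through ACM might look like this:

```python
import boto3

acm = boto3.client("acm", region_name="us-east-1")

# Request a public SSL/TLS certificate for a hypothetical domain.
response = acm.request_certificate(
    DomainName="www.example.com",
    ValidationMethod="DNS",  # ACM validates domain ownership via a DNS record
    SubjectAlternativeNames=["example.com"],
)
print("Certificate ARN:", response["CertificateArn"])
```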

Which of the following can an AWS customer use to know more about prohibited uses of the web services offered by AWS?

AWS Acceptable Use Policy - The AWS Acceptable Use Policy describes prohibited uses of the web services offered by Amazon Web Services, Inc. and its affiliates (the "Services") and the website located at http://aws.amazon.com (the "AWS Site"). The examples described in this Policy are not exhaustive. AWS may modify this Policy at any time by posting a revised version on the AWS Site. By using the Services or accessing the AWS Site, you agree to the latest version of this Policy. If you violate the Policy or authorize or help others to do so, AWS may suspend or terminate your use of the Services.

Which of the following AWS Services helps with planning application migration to the AWS Cloud?

AWS Application Discovery Service - AWS Application Discovery Service helps systems integrators quickly and reliably plan application migration projects by automatically identifying applications running in on-premises data centers, their associated dependencies, and their performance profiles. Planning data center migrations can involve thousands of workloads that are often deeply interdependent. Application discovery and dependency mapping are important early first steps in the migration process, but these tasks are difficult to perform at scale due to the lack of automated tools. AWS Application Discovery Service automatically collects configuration and usage data from servers, storage, and networking equipment to develop a list of applications, how they perform, and how they are interdependent. This information helps reduce the complexity and time in planning your cloud migration.

A company has a web application that is hosted on a single EC2 instance and is approaching 100 percent CPU Utilization during peak loads. Rather than scaling the server vertically, the company has decided to deploy three Amazon EC2 instances in parallel and to distribute traffic across the three servers. What AWS Service should the company use to distribute the traffic evenly?

AWS Application Load Balancer (ALB) - AWS Application Load Balancer (ALB) is part of the AWS Elastic Load Balancing family that is specifically designed to handle HTTP and HTTPS traffic. An ALB automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, and IP addresses. Once you register the Amazon EC2 instances with the ALB, it automatically distributes the incoming traffic across those instances. The Load Balancer also performs health checks on the instances and routes traffic only to the healthy ones.

Which of the following services gives you access to all AWS auditor-issued reports and certifications?

AWS Artifact - AWS Artifact is your go-to, central resource for compliance-related information that matters to you. It provides on-demand access to AWS' security and compliance reports and select online agreements. Reports available in AWS Artifact include AWS Service Organization Control (SOC) reports, Payment Card Industry (PCI) reports, and certifications from accreditation bodies across geographies and compliance verticals that validate the implementation and operating effectiveness of AWS security controls. Agreements available in AWS Artifact include the Business Associate Addendum (BAA) and the Nondisclosure Agreement (NDA). In short, AWS Artifact gives you access to all of AWS' auditor-issued reports, certifications, accreditations, and other third-party attestations.

Which AWS Service allows customers to download AWS SOC & PCI reports?

AWS Artifact - AWS Artifact provides on-demand downloads of AWS security and compliance documents, such as AWS ISO certifications, Payment Card Industry (PCI), and Service Organization Control (SOC) reports. You can submit the security and compliance documents (also known as audit artifacts) to your auditors or regulators to demonstrate the security and compliance of the AWS infrastructure and services that you use. You can also use these documents as guidelines to evaluate your own cloud architecture and assess the effectiveness of your company's internal controls.

Engineers are wasting a lot of time and effort when installing and managing batch computing software in traditional data centers. Which of the following AWS services allows them to easily run hundreds of thousands of batch computing jobs?

AWS Batch - AWS Batch enables developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing jobs on AWS. AWS Batch dynamically provisions the optimal quantity and type of compute resources (e.g., CPU or memory-optimized instances) based on the volume and specific resource requirements of the batch jobs submitted. With AWS Batch, there is no need to install and manage batch computing software or server clusters that you use to run your jobs, allowing you to focus on analyzing results and solving problems. AWS Batch plans, schedules, and executes your batch computing workloads across the full range of AWS compute services and features, such as Amazon EC2 and Spot Instances.

What is the framework created by AWS Professional Services that helps organizations design a road map to successful cloud adoption?

AWS CAF - AWS Professional Services created the AWS Cloud Adoption Framework (AWS CAF) to help organizations design and travel an accelerated path to successful cloud adoption. The guidance and best practices provided by the framework help you build a comprehensive approach to cloud computing across your organization, and throughout your IT lifecycle. Using the AWS CAF helps you realize measurable business benefits from cloud adoption faster and with less risk.

Which of the following can be used to enable the Virtual Multi-Factor Authentication? (Choose TWO)

AWS CLI & AWS Identity and Access Management (IAM) - You can use either the AWS IAM console or the AWS CLI to enable a virtual MFA device for an IAM user in your account.

What is the AWS tool that enables you to use scripts to manage all AWS services and resources?

AWS CLI - The AWS Command Line Interface (CLI) is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts.

Which AWS Service allows customers to create a template that programmatically defines the policies and configurations of all AWS resources as code, so that the same template can be reused among multiple projects?

AWS CloudFormation - AWS CloudFormation is a service that helps customers model and set up their Amazon Web Services resources so that they can spend less time managing those resources and more time focusing on their applications that run in AWS. Customers create a template that describes all the AWS resources that they want (like Amazon EC2 instances or Amazon RDS DB instances), and AWS CloudFormation takes care of provisioning those resources for them. Also, Customers can create an AWS CloudFormation script that captures their security policies, networking policies, and other aspects of configuration and reliably deploys it. Security best practices can then be reused among multiple projects and become part of a continuous integration pipeline.
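
A minimal sketch of defining resources as code and deploying them with boto3 (the stack and bucket names are hypothetical):

```python
import json
import boto3

# A tiny CloudFormation template, defined as code; it creates one S3 bucket.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "ExampleBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": "example-reusable-template-bucket"},
        }
    },
}

cfn = boto3.client("cloudformation")
cfn.create_stack(
    StackName="example-stack",
    TemplateBody=json.dumps(template),
)
```

Because the template is plain text, the same definition can be checked into version control and reused across projects or pipelines.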

Which of the following allows you to create new RDS instances? (Choose two)

AWS CloudFormation AND AWS Management Console. - The AWS Management Console lets you create new RDS instances through a web-based user interface. You can also use AWS CloudFormation to create new RDS instances using the CloudFormation template language.

Which of the following services enables you to easily generate and use your own encryption keys in the AWS Cloud?

AWS CloudHSM - AWS CloudHSM is a cloud-based hardware security module (HSM) that enables you to easily generate and use your own encryption keys on the AWS Cloud.

A company needs to track resource changes using the API call history. Which AWS service can help the company achieve this goal?

AWS CloudTrail - AWS CloudTrail is a web service that records AWS API calls for your account and delivers log files to you. The recorded information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by the AWS service. With CloudTrail, you can get a history of AWS API calls for your account, including API calls made using the AWS Management Console, AWS SDKs, command line tools, and higher-level AWS services (such as AWS CloudFormation). The AWS API call history produced by CloudTrail enables security analysis, resource change tracking, and compliance auditing.
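
A short sketch of querying that API call history with boto3 (the event name filter is just an example):

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Look up recent management events recorded by CloudTrail,
# filtered here to RunInstances API calls as an example.
events = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventName", "AttributeValue": "RunInstances"}
    ],
    MaxResults=10,
)

for event in events["Events"]:
    print(event["EventTime"], event["EventName"], event.get("Username"))
```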

An external auditor is requesting a log of all accesses to the AWS resources in the company's account. Which of the following services will provide the auditor with the requested information?

AWS CloudTrail - CloudTrail provides visibility into user activity by recording actions taken on your account. CloudTrail records important information about each action, including who made the request, the services used, the actions performed, parameters for the actions, and the response elements returned by the AWS service. This information helps you to enable governance, compliance, operational auditing, and risk auditing of your AWS account. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting.

What is the AWS repository management system that allows for storing, versioning, and managing your application code?

AWS CodeCommit - AWS CodeCommit is designed for software developers who need a secure, reliable, and scalable source control system to store and version their code. In addition, AWS CodeCommit can be used by anyone looking for an easy to use, fully managed data store that is version controlled. For example, IT administrators can use AWS CodeCommit to store their scripts and configurations. Web designers can use AWS CodeCommit to store HTML pages and images. AWS CodeCommit makes it easy for companies to host secure and highly available private Git repositories. Customers can use AWS CodeCommit to securely store anything from source code to binaries, and it works seamlessly with their existing Git tools.

An organization uses a hybrid cloud architecture to run their business, Which AWS service enables them to deploy their applications to any AWS or on-premises server?

AWS CodeDeploy - AWS CodeDeploy is a service that automates application deployments to any instance, including Amazon EC2 instances and instances running on-premises. AWS CodeDeploy makes it easier for you to rapidly release new features, helps you avoid downtime during deployment, and handles the complexity of updating your applications. You can use AWS CodeDeploy to automate deployments, eliminating the need for error-prone manual operations, and the service scales with your infrastructure so you can easily deploy to one instance or thousands. You can also use AWS OpsWorks to automate application deployments to any instance, including Amazon EC2 instances and instances running on-premises. OpsWorks is a service that helps you automate operational tasks like code deployment, software configurations, package installations, database setups, and server scaling using Chef and Puppet.

What access privileges does a new IAM user have by default?

AWS Console login access - A new IAM user is only given access to log in to the AWS Management Console by default. This is because AWS follows the principle of least privilege when assigning default permissions.

A company is trying to analyze the costs applied to their AWS account recently. Which of the following provides them the most granular data about their AWS costs and usage?

AWS Cost & Usage Report - The AWS Cost & Usage Report contains the most comprehensive set of AWS cost and usage data available, including additional metadata about AWS services, pricing, and reservations (e.g., Amazon EC2 Reserved Instances (RIs)). The AWS Cost and Usage Report tracks your AWS usage and provides information about your use of AWS resources and estimated costs for that usage. You can configure this report to present the data hourly or daily. It is updated at least once a day until it is finalized at the end of the billing period. The AWS Cost and Usage Report gives you the most granular insight possible into your costs and usage, and it is the source of truth for the billing pipeline. It can be used to develop advanced custom metrics using business intelligence, data analytics, and third-party cost optimization tools.

What is the AWS tool that can help a company visualize their AWS spending in the last few months?

AWS Cost Explorer - The AWS Billing and Cost Management console includes the Cost Explorer tool for viewing AWS cost data as a graph. The user can filter the graphs using the resource tags. If the company is using Consolidated Billing, it generates a report based on the linked accounts which can help to identify areas that require further inquiry. Using the Cost Explorer, the company can view trends and use them to understand their spending and to predict future costs.

Which of the below options is true of Amazon VPC?

AWS Customers have complete control over their Amazon VPC virtual networking environment - Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways.

Which of the following AWS services helps migrate your current on-premise databases to AWS?

AWS DMS - AWS Database Migration Service helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. The AWS Database Migration Service can migrate your data to and from most widely used commercial and open-source databases.

What are the connectivity options that can be used to build hybrid cloud architectures? (Choose two)

AWS Direct Connect & AWS VPN - In cloud computing, hybrid cloud refers to the use of both on-premises resources in addition to public cloud resources. A hybrid cloud enables an organization to migrate applications and data to the cloud, extend their datacenter capacity, utilize new cloud-native capabilities, move applications closer to customers, and create a backup and disaster recovery solution with cost-effective high availability. By working closely with enterprises, AWS has developed the industry's broadest set of hybrid capabilities across storage, networking, security, application deployment, and management tools to make it easy for you to integrate the cloud as a seamless and secure extension of your existing investments.

Which AWS Service can be used to establish a dedicated, private network connection between AWS and your datacenter?

AWS Direct Connect - AWS Direct Connect is used to establish a dedicated network connection from your premises to AWS. Using AWS Direct Connect, you can establish private connectivity between AWS and your data center, office, or co-location environment, which in many cases can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections.

A company is seeking to deploy an existing .NET application onto AWS as quickly as possible. Which AWS Service should the customer use to achieve this goal?

AWS Elastic Beanstalk - AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS. Developers simply upload their application, and Elastic Beanstalk automatically handles the deployment details of capacity provisioning, load balancing, auto-scaling, and application health monitoring.

A developer wants to quickly deploy and manage his application in the AWS Cloud, but he doesn't have any experience with cloud computing. Which of the following AWS services would help him achieve his goal?

AWS Elastic Beanstalk - AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS. You can simply upload your code and Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, auto-scaling to application health monitoring. At the same time, you retain full control over the AWS resources powering your application and can access the underlying resources at any time.

You have just finished writing your application code. Which service can be used to automate the deployment and scaling of your application?

AWS Elastic Beanstalk - AWS Elastic Beanstalk is considered a Platform as a Service (PaaS). it is an easy-to-use service for deploying, scaling and updating web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS. You can simply upload your code and Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, auto-scaling to application health monitoring. At the same time, you retain full control over the AWS resources powering your application and can access the underlying resources at any time.

App development companies move their business to AWS to reduce time-to-market and improve customer satisfaction, what are the AWS automation tools that help them deploy their applications faster? (Choose two)

AWS Elastic Beanstalk AND AWS Cloud​Formation - AWS Elastic Beanstalk makes it easier for developers to quickly deploy and manage applications in the AWS Cloud. Developers simply upload their application, and Elastic Beanstalk automatically handles the deployment details of capacity provisioning, load balancing, auto-scaling, and application health monitoring. AWS CloudFormation automates and simplifies the task of repeatedly and predictably creating groups of related resources that power your applications. Creating and interconnecting all resources your application needs to run is now as simple as creating a single EC2 or RDS instance.

Your company requires a response time of less than 15 minutes from support interactions about their business-critical systems that are hosted on AWS if those systems go down. Which AWS Support Plan should this company use?

AWS Enterprise Support - AWS support plans provide different response times based on the case's severity. For example, the Enterprise plan provides General Guidance within 24 hours. However, if the case involves a business-critical system being down, the company will get a response within 15 minutes.

Which of the following compute resources are serverless? (Choose two)

AWS Fargate AND AWS Lambda - AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume, and there is no charge when your code is not running. With Lambda, you can run code for virtually any type of application or backend service - all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability. AWS Fargate is a compute engine for deploying and managing containers, which frees you from having to manage any of the underlying infrastructure. With AWS Fargate, you no longer have to provision, configure, and scale clusters of virtual machines to run containers. AWS Fargate seamlessly integrates with Amazon ECS, so you can deploy and manage containers without having to provision or manage servers.

Which service can you use to route traffic to the endpoint that provides the best application performance for your users worldwide?

AWS Global Accelerator - AWS Global Accelerator is a networking service that improves the availability and performance of the applications that you offer to your global users. Today, if you deliver applications to your global users over the public internet, your users might face inconsistent availability and performance as they traverse through multiple public networks to reach your application. These public networks can be congested and each hop can introduce availability and performance risk. AWS Global Accelerator uses the highly available and congestion-free AWS global network to direct internet traffic from your users to your applications on AWS, making your users' experience more consistent. To improve the availability of your application, you must monitor the health of your application endpoints and route traffic only to healthy endpoints. AWS Global Accelerator improves application availability by continuously monitoring the health of your application endpoints and routing traffic to the closest healthy endpoints.

Which AWS Service is used to manage user permissions?

AWS IAM - AWS Identity and Access Management (IAM) enables you to manage access to AWS services and resources securely. Using IAM, you can create and manage AWS users and groups, and use permissions to allow or deny their access to AWS resources.
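
For example, a minimal boto3 sketch of managing permissions with IAM (the user name is hypothetical; the policy ARN is an AWS managed policy):

```python
import boto3

iam = boto3.client("iam")

# Create an IAM user and grant read-only access to S3 via an AWS managed policy.
iam.create_user(UserName="example-developer")
iam.attach_user_policy(
    UserName="example-developer",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)
```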

A company has infrastructure hosted in an on-premises data center. They currently have an operations team that takes care of ID management. If they decide to migrate to the AWS cloud, which of the following services would help them perform the same role in AWS?

AWS IAM - AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources.

An organization runs many systems and uses many AWS products. Which of the following services enables them to control how each developer interacts with these products?

AWS Identity and Access Management - AWS Identity and Access Management (IAM) is a web service for securely controlling access to AWS services. With IAM, you can centrally manage users, security credentials such as access keys, and permissions that control which AWS resources users and applications can access.

Which of the following would you use to manage your encryption keys in the AWS Cloud? (Choose two)

AWS KMS & CloudHSM - AWS Key Management Service (KMS) is a managed service that makes it easy for you to create and control the encryption keys used to encrypt your data, and uses FIPS 140-2 validated hardware security modules to protect the security of your keys. AWS Key Management Service is integrated with most other AWS services to help you protect the data you store with these services. AWS Key Management Service is also integrated with AWS CloudTrail to provide you with logs of all key usage to help meet your regulatory and compliance needs. AWS CloudHSM is a cloud-based hardware security module (HSM) that enables you to easily generate and use your own encryption keys on the AWS Cloud. With CloudHSM, you can manage your own encryption keys using FIPS 140-2 Level 3 validated HSMs. CloudHSM offers you the flexibility to integrate with your applications using industry-standard APIs, such as PKCS#11, Java Cryptography Extensions (JCE), and Microsoft CryptoNG (CNG) libraries.

Which AWS Service is used to manage the keys used to encrypt customer data?

AWS KMS - AWS Key Management Service (AWS KMS) is a managed service that enables customers to easily create and control the keys used for cryptographic operations. The service provides a highly available key generation, storage, management, and auditing solution for customers to encrypt or digitally sign data within their applications or to control the encryption of data across AWS services.
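
A short sketch (assuming boto3 and permission to create keys) of creating a customer managed key and using it for encryption:

```python
import boto3

kms = boto3.client("kms")

# Create a customer managed key, then encrypt and decrypt a small payload with it.
key_id = kms.create_key(Description="example key")["KeyMetadata"]["KeyId"]

ciphertext = kms.encrypt(KeyId=key_id, Plaintext=b"sensitive data")["CiphertextBlob"]
plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]

print(plaintext)  # b'sensitive data'
```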

Which of the following services is used during the process of encrypting EBS volumes?

AWS KMS - Amazon EBS encryption offers a straight-forward encryption solution for your EBS resources that doesn't require you to build, maintain, and secure your own key management infrastructure. You can use the AWS Key Management Service (AWS KMS) to create and control the encryption keys used to encrypt your data. AWS Key Management Service is also integrated with other AWS services including Amazon S3, and Amazon Redshift, to make it simple to encrypt your data with encryption keys that you manage.
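
As an illustration (a sketch; the Availability Zone and KMS key ARN are placeholders), creating an encrypted EBS volume that uses a specific KMS key:

```python
import boto3

ec2 = boto3.client("ec2")

# Create a 100 GiB EBS volume encrypted with the given KMS key (placeholder ARN).
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,
    Encrypted=True,
    KmsKeyId="arn:aws:kms:us-east-1:123456789012:key/example-key-id",
)
print(volume["VolumeId"])
```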

What is the AWS Compute service that executes code only when triggered by events?

AWS Lambda - AWS Lambda is a serverless compute service that runs code in response to events. For example, you can create a Lambda function that creates thumbnail images when users upload images to Amazon S3. The Lambda event, in this case, is the user's upload. Once a user uploads an image to Amazon S3, AWS Lambda automatically runs the function and creates a thumbnail for that image.
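
A minimal sketch of such an event-triggered Lambda handler (the thumbnail generation itself is left as a placeholder; the event shape follows the standard S3 notification format):

```python
# Minimal Lambda handler for an S3 "ObjectCreated" event trigger.
def lambda_handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Placeholder: real code would download the image and generate a thumbnail.
        print(f"New upload: s3://{bucket}/{key} - generating thumbnail")
    return {"status": "ok"}
```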

Which of the following AWS services can be used as a compute resource? (Choose two)

AWS Lambda AND Amazon EC2 - AWS Lambda is a Serverless computing service. Serverless computing allows you to build and run applications and services without thinking about servers. With serverless computing, your application still runs on servers, but all the server management is done by AWS. Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, and resizable compute capacity in the cloud. Unlike AWS Lambda, Amazon EC2 is a server-based computing service, the Customer is responsible for performing all server configurations and management tasks.

Which of the following is NOT a benefit of using AWS Lambda?

AWS Lambda provides resizable compute capacity in the cloud - The option "AWS Lambda provides resizable compute capacity in the cloud" is not a benefit of AWS Lambda, and is therefore the correct choice. AWS Lambda automatically runs your code without requiring you to adjust capacity or manage servers. AWS Lambda automatically scales your application by running code in response to each trigger. Your code runs in parallel and processes each trigger individually, scaling precisely with the size of the workload. The other options describe benefits of AWS Lambda, and thus are not correct. AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume; there is no charge when your code is not running. With Lambda, you can run code for virtually any type of application or backend service, all with zero administration. Just upload your code, and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically trigger from other AWS services, or you can call it directly from any web or mobile app.

Where to go to search for and buy third-party software solutions and services that run on AWS?

AWS Marketplace - AWS Marketplace is a curated digital catalog that makes it easy for customers to find, buy, deploy, and manage third-party software and services that customers need to build solutions and run their businesses. AWS Marketplace includes thousands of software listings from popular categories such as security, networking, storage, machine learning, business intelligence, database, and DevOps. AWS Marketplace also simplifies software licensing and procurement with flexible pricing options and multiple deployment methods. Customers can quickly launch pre-configured software with just a few clicks, and choose software solutions in AMI and SaaS formats, as well as other formats. Flexible pricing options include free trial, hourly, monthly, annual, multi-year, and BYOL, and customers are billed from one source, AWS.

Which AWS Service provides integration with Chef to automate the configuration of EC2 instances?

AWS OpsWorks - AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers. OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed, and managed across your Amazon EC2 instances or on-premises compute environments.

Which of the following AWS services uses Puppet to automate how EC2 instances are configured?

AWS OpsWorks - AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers. OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed, and managed across your Amazon EC2 instances or on-premises compute environments.

In this scenario, we want to consolidate billing between multiple AWS accounts. Which of these services would help us accomplish this goal?

AWS Organizations - AWS Organizations can be used to consolidate billing between multiple AWS accounts.

A global company with a large number of AWS accounts is seeking a way in which they can centrally manage billing and security policies across all accounts. Which AWS Service will assist them in meeting these goals?

AWS Organizations - AWS Organizations helps customers centrally govern their environments as they grow and scale their workloads on AWS. Whether customers are a growing startup or a large enterprise, Organizations helps them to centrally manage billing; control access, compliance, and security; and share resources across their AWS accounts. AWS Organizations has five main benefits: 1) Centrally manage access policies across multiple AWS accounts. 2) Automate AWS account creation and management. 3) Control access to AWS services. 4) Consolidate billing across multiple AWS accounts. 5) Configure AWS services across multiple accounts.

What is the AWS service that enables you to manage all of your AWS accounts from a single master account?

AWS Organizations - AWS Organizations is an account management service that enables you to consolidate multiple AWS accounts into an organization that you create and centrally manage. AWS Organizations enables the following capabilities: 1- Automate AWS account creation and management 2- Consolidate billing across multiple AWS accounts 3- Govern access to AWS services, resources, and regions 4- Centrally manage access policies across multiple AWS accounts 5- Configure AWS services across multiple accounts

In regards to AWS Organizations, which of the following is true?

AWS Organizations provides policy-based management for multiple AWS accounts. - AWS Organizations does indeed provide policy-based management for multiple AWS accounts.

Which methods can be used by customers to interact with AWS Identity and Access Management (IAM)? (Choose TWO)

AWS SDKs & AWS CLI - Customers can work with AWS Identity and Access Management in any of the following ways: 1- AWS Management Console: The console is a browser-based interface that can be used to manage IAM and AWS resources. 2- AWS Command Line Tools: Customers can use the AWS command line tools to issue commands at your system's command line to perform IAM and AWS tasks. Using the command line can be faster and more convenient than the console. The command line tools are also useful if you want to build scripts that perform AWS tasks. AWS provides two sets of command line tools: the AWS Command Line Interface (AWS CLI) and the AWS Tools for Windows PowerShell. 3- AWS SDKs: AWS provides SDKs (software development kits) that consist of libraries and sample code for various programming languages and platforms (Java, Python, Ruby, .NET, iOS, Android, etc.). The SDKs provide a convenient way to create programmatic access to IAM and AWS. For example, the SDKs take care of tasks such as cryptographically signing requests, managing errors, and retrying requests automatically.

As part of the AWS Migration Acceleration Program (MAP), what does AWS provide to accelerate Enterprise adoption of AWS? (Choose TWO)

AWS Partners AND AWS Professional Services - AWS has helped thousands of organizations, including enterprises such as GE, the Coca-Cola Company, BP, Enel, Samsung, NewsCorp, and Twenty-First Century Fox, migrate to the cloud and free up resources by lowering IT costs while improving productivity, operational resiliency, and business agility. The AWS Migration Acceleration Program (MAP) is designed to help enterprises that are committed to a migration journey achieve a range of these business benefits by migrating existing workloads to Amazon Web Services. MAP has been created to provide consulting support, training, and services credits to reduce the risk of migrating to the cloud, build a strong operational foundation, and help offset the initial cost of migrations. It includes a migration methodology for executing legacy migrations in a methodical way as well as a robust set of tools to automate and accelerate common migration scenarios. By migrating to AWS, enterprises will be able to focus on business innovation instead of dedicating time and attention to maintaining their existing systems and technical debt. Sacrifices and painful trade-offs no longer have to be made to get something to market quickly. Instead, enterprises can focus on differentiating their business in the marketplace and taking advantage of new capabilities. By building the foundation to operate mission-critical workloads on AWS, you will build capabilities that can be leveraged across a variety of projects. AWS has a number of resources to support and sustain your migration efforts, including an experienced partner ecosystem to execute migrations, the AWS Professional Services team to provide best practices and prescriptive advice, and a training program to help IT professionals understand and carry out migrations successfully.

Which of the following AWS Tools will be replacing the AWS Simple Calculator?

AWS Pricing Calculator - The AWS Pricing Calculator will be replacing the AWS Simple Calculator.

Which AWS Group assists customers in achieving their desired business outcomes?

AWS Professional Services - Moving to AWS provides customers with sustainable business advantages. Choosing to supplement teams with specialized skills and experience can help customers achieve those results. The AWS Professional Services organization is a global team of experts that helps customers realize their desired business outcomes when using AWS.

VPCs span all of these except for _________________

AWS Regions - VPCs cannot span AWS Regions.

You manage a blog on AWS that has different stages such as development, testing, and production. How can you create a custom console in each stage to view and manage your resources easily?

AWS Resource Groups - If you work with multiple resources in multiple stages, you might find it useful to manage all the resources in each stage as a group rather than move from one AWS service to another for each task. Resource Groups help you do just that. By default, the AWS Management Console is organized by AWS service. But with the Resource Groups tool, you can create a custom console that organizes and consolidates information based on your project and the resources that you use.

Which of the following security resources are available for free? (Choose two)

AWS Security Blog AND AWS Bulletins - The AWS free security resources include AWS Security Blog, Whitepapers, Developer Documents, Articles and Tutorials, Training, Security Bulletins, Compliance Resources and Testimonials.

You need to migrate a large number of on-premises workloads to AWS. Which of the following is the fastest way to achieve your goal?

AWS Server Migration Service (SMS) - AWS Server Migration Service (SMS) is an agentless service which makes it easier and faster for you to migrate thousands of on-premises workloads to AWS. AWS SMS allows you to automate, schedule, and track incremental replications of live server volumes, making it easier for you to coordinate large-scale server migrations.

Which AWS Service provides the current status of all AWS Services in all AWS Regions?

AWS Service Health Dashboard - AWS uses the Service Health Dashboard to publish most up-to-the-minute information on AWS service availability. You can get information about the current status and availability of any AWS service any time using the AWS Service Health Dashboard that is available at this link: https://status.aws.amazon.com/

What can you access by visiting the URL: http://status.aws.amazon.com?

AWS Service Health Dashboard - The AWS Service Health Dashboard publishes AWS' most up-to-the-minute information on service availability. The dashboard provides access to current status and historical data about each and every Amazon Web Service. Just copy the URL to your browser and see the result.

In this scenario, we want to calculate our anticipated bill for resources we are about to create, which of the following tools can accomplish this task?

AWS Simple Calculator - The AWS Simple Calculator is used to calculate anticipated billing.

A company is planning to use Amazon S3 and Amazon CloudFront to distribute its video courses globally. What tool can the company use to estimate the costs of these services?

AWS Simple Monthly Calculator - The AWS Simple Monthly Calculator helps you estimate your monthly AWS bill more efficiently. The calculator can be used to determine your best- and worst-case scenarios and identify areas of development to reduce your monthly costs. The AWS Simple Monthly Calculator is continuously updated with the latest pricing for all AWS services in all Regions.

You want to transfer 200 Terabytes of data from on-premises locations to the AWS Cloud, which of the following can do the job in a cost effective way?

AWS Snowball - Snowball is a petabyte-scale data transport solution that uses secure appliances to transfer large amounts of data into and out of the AWS cloud. Using Snowball addresses common challenges with large-scale data transfers including high network costs, long transfer times, and security concerns. Transferring data with Snowball is simple, fast, secure, and can cost as little as one-fifth the cost of using high-speed Internet. In the US regions, Snowball appliances come in two sizes: 50 TB and 80 TB. All other regions have 80 TB Snowballs only. In either case, it is more cost-effective to use 3 or 4 Snowball devices to transfer 200 TB: 3 Snowballs x 80 TB = 240 TB, or 4 Snowballs x 50 TB = 200 TB. There are many options for transferring your data into AWS. Snowball is intended for transferring large amounts of data. If you want to transfer less than 10 terabytes of data between your on-premises data centers and Amazon S3, Snowball might not be your most economical choice.

What AWS tools can be used to call AWS Services from different programming languages?

AWS Software Development Kit - The AWS Software Development Kit (AWS SDK) can simplify using AWS services in your applications with an API tailored to your programming language or platform. Programming languages supported include Java, .NET, Node.js, PHP, Python, Ruby, Go, and C++.

Which AWS Service helps enterprises extend their on-premises storage to AWS in a cost-effective manner?

AWS Storage Gateway - Enterprises can extend their on-premises storage to AWS Cloud for long-term backup retention and archiving, optimizing costs and increasing resilience and availability. AWS Storage Gateway is a hybrid storage service that enables on-premises applications to seamlessly use AWS cloud storage. Enterprises can use the service for backup and archiving, disaster recovery, cloud data processing, storage tiering, and migration. The storage gateway connects to AWS storage services, such as Amazon S3, Amazon S3 Glacier, Amazon S3 Glacier Deep Archive, Amazon EBS, and AWS Backup, providing storage for files, volumes, snapshots, and virtual tapes in AWS.

What is the AWS Support feature that allows customers to manage support cases programmatically?

AWS Support API - The AWS Support API provides programmatic access to AWS Support Center features to create, manage, and close support cases, and operationally manage Trusted Advisor check requests and status. AWS provides access to AWS Support API for AWS Support customers who have a Business or Enterprise support plan. The service currently provides two different groups of operations: 1- Support Case Management operations to manage the entire life cycle of AWS support cases, from creating a case to resolving it. 2- Trusted Advisor operations to access the checks provided by AWS Trusted Advisor.

Which tool can a non-AWS customer use to compare the cost of his current on-premises environment to AWS?

AWS TCO Calculator - AWS TCO Calculator helps you evaluate the savings from using AWS and compare an AWS Cloud environment to on-premises and co-location environments. This tool considers all the costs to run a solution, including physical facilities, power, and cooling, to provide a realistic, end-to-end comparison of your costs.

TYMO Cloud Corp is looking forward to migrating their entire on-premises data center to AWS. What tool can they use to perform a cost-benefit analysis of moving to the AWS Cloud?

AWS TCO Calculator - The AWS TCO (Total Cost of Ownership) Calculator is a free tool that provides directional guidance on possible realized savings when deploying AWS. This tool is built on an underlying calculation model that generates a fair assessment of the value a customer may achieve, given the data provided by the user, which includes the number of servers migrated to AWS, the server type, the number of processors, and so on.

Which of the following makes it easier for you to manage and filter your resources?

AWS Tagging - Amazon Web Services (AWS) allows customers to assign metadata to their AWS resources in the form of tags. Each tag is a simple label consisting of a customer-defined key and an optional value that can make it easier to manage, search for, and filter resources. Although there are no inherent types of tags, they enable customers to categorize resources by purpose, owner, environment, or other criteria.
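
For example, a boto3 sketch of tagging an EC2 instance so it can later be filtered by environment (the instance ID is a placeholder):

```python
import boto3

ec2 = boto3.client("ec2")

# Attach tags to an existing instance; the resource ID is a placeholder.
ec2.create_tags(
    Resources=["i-0123456789abcdef0"],
    Tags=[
        {"Key": "Environment", "Value": "production"},
        {"Key": "Owner", "Value": "web-team"},
    ],
)
```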

Which AWS service provides cost-optimization recommendations?

AWS Trusted Advisor - AWS Trusted Advisor is an application that draws upon best practices learned from AWS' aggregated operational history of serving hundreds of thousands of AWS customers. Trusted Advisor inspects your AWS environment and makes recommendations that can potentially save you money by highlighting unused resources and opportunities to reduce your bill. AWS Trusted Advisor also provides recommendations to improve system performance and close security gaps.

Which of the following services provide real-time auditing for compliance and vulnerabilities? (Choose two)

AWS Trusted Advisor AND AWS Config - Services like AWS Config, Amazon Inspector, and AWS Trusted Advisor continually monitor for compliance or vulnerabilities giving you a clear overview of which IT resources are in compliance, and which are not. With AWS Config rules you will also know if some component was out of compliance even for a brief period of time, making both point-in-time and period-in-time audits very effective.

Which of the following services can be used to monitor the HTTP and HTTPS requests that are forwarded to Amazon CloudFront?

AWS WAF - AWS WAF is a web application firewall that lets customers monitor the HTTP and HTTPS requests that are forwarded to Amazon CloudFront or an Application Load Balancer. AWS WAF also lets customers control access to their content by defining customizable web security rules.

What are the services that AWS provide to protect against network and application layer DDoS attacks? (Choose two)

AWS Web Application Firewall AND Amazon CloudFront - Amazon CloudFront, AWS Shield, AWS Web Application Firewall (WAF), and Amazon Route 53 work seamlessly together to create a flexible, layered security perimeter against multiple types of attacks including network and application layer DDoS attacks. All of these services are co-resident at the AWS edge location and provide a scalable, reliable, and high-performance security perimeter for your applications and content.

Your web application currently faces performance issues and suffers from long delays. Which of the following could help you in this situation?

AWS X-Ray - AWS X-Ray helps you identify performance bottlenecks. X-Ray's service maps let you see relationships between services and resources in your application in real time. You can easily detect where high latencies are occurring, visualize node and edge latency distribution for services, and then drill down into the specific services and paths impacting application performance.

There is a requirement to grant a DevOps team full administrative access to all resources in an AWS account. Who can grant them these permissions?

AWS account owner - The account owner is the entity that has complete control over all resources in their AWS account.

Which of the following requires an access key and a secret access key to get programmatic access to AWS resources? (Choose two)

AWS account root user AND IAM user - An AWS IAM user might need to make API calls or use the AWS CLI. In that case, you need to create an access key (access key ID and a secret access key) for that user. You can create IAM user access keys with the IAM console, AWS CLI, or AWS API. To create access keys for your AWS account root user, you must use the AWS Management Console.

The owner of an E-Commerce application notices that the computing workloads vary heavily from time to time. What makes AWS more economical than traditional data centers for this type of application?

AWS allows customers to launch and terminate EC2 instances based on demand - On-Demand Instances have no contract commitment and can be launched (or terminated) as needed. You are charged by the second based on an hourly rate and you pay only for what you use. This makes them ideal for applications with short-term or irregular workloads.
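
A sketch of launching an On-Demand instance when load rises and terminating it when demand drops (the AMI ID is a placeholder):

```python
import boto3

ec2 = boto3.client("ec2")

# Launch one On-Demand instance during a traffic spike (AMI ID is a placeholder).
reservation = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = reservation["Instances"][0]["InstanceId"]

# Terminate it once demand drops; charges for the instance stop accruing.
ec2.terminate_instances(InstanceIds=[instance_id])
```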

Which of the following statements describes the AWS Cloud's agility?

AWS allows you to provision resources in minutes - In a cloud computing environment, new IT resources are only a click away, which means that you reduce the time to make those resources available to your developers from weeks (or months in some cases) to just minutes. This results in a dramatic increase in agility for the organization, since the cost and time it takes to experiment and develop is significantly lower. In other words, instead of waiting weeks or months for hardware, you can instantly deploy new applications. Also, whether you need one virtual server or thousands, whether you need them for a few hours or 24/7, you still only pay for what you use.

Which statement is true in relation to security?

AWS cannot access users' data - AWS cannot view or read customer data, even if it wanted to. All data is protected by the customer's access keys and secret access keys and by the customer's encryption methods.

The TCO gap between AWS infrastructure and traditional infrastructure has widened over the recent years. Which of the following could be the reason for that?

AWS continues to lower the cost of cloud computing for its customers - AWS continues to lower the cost of cloud computing for its customers, making everything from web apps to big data on AWS even more cost-effective and widening the TCO (Total Cost of Ownership) gap with traditional infrastructure. Since 2014, AWS has reduced the cost of compute by an average of 30%, storage by an average of 51% and relational databases by an average of 28%.

Data security is one of the top priorities of AWS. How does AWS deal with old storage devices that have reached the end of their useful life?

AWS destroys the old devices in accordance with industry-standard practices. - When a storage device has reached the end of its useful life, AWS procedures include a decommissioning process that is designed to prevent customer data from being exposed to unauthorized individuals. AWS uses specific techniques to destroy data as part of the decommissioning process. All decommissioned magnetic storage devices are degaussed and physically destroyed in accordance with industry-standard practices.

How does AWS support customer compliance?

AWS has achieved a number of common assurance certifications such as ISO 9001 and HIPAA - AWS environments are continuously audited, and its infrastructure and services are approved to operate under several compliance standards and industry certifications across geographies and industries, including PCI DSS, ISO 27001, ISO 9001, and HIPAA. You can use these certifications to validate the implementation and effectiveness of AWS security controls. For example, companies that use AWS products and services to handle credit card information can rely on the AWS technology infrastructure as they manage their PCI DSS compliance certification.

Which statement best describes AWS?

AWS is a cloud services provider. - Amazon Web Services offers reliable, scalable, and inexpensive cloud computing services.

Which of the following is equivalent to a user name and password and is used to authenticate your programmatic access to AWS services and APIs?

Access Keys - Access keys consist of two parts: an access key ID and a secret access key. You use access keys to sign programmatic requests that you make to AWS if you use AWS CLI commands (using the SDKs) or using AWS API operations. Like a user name and password, you must use both the access key ID and secret access key together to authenticate your requests.
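
For illustration (a sketch; the key values are placeholders and real keys should never be hard-coded), access keys authenticate a programmatic call like this:

```python
import boto3

# An access key ID and secret access key together sign programmatic requests.
# These values are placeholders; load real keys from a secure source instead.
session = boto3.Session(
    aws_access_key_id="AKIAEXAMPLEKEYID",
    aws_secret_access_key="exampleSecretAccessKey",
    region_name="us-east-1",
)

sts = session.client("sts")
print(sts.get_caller_identity()["Arn"])  # shows which identity the keys belong to
```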

What does AWS offer to protect your data? (Choose TWO)

Access control AND Data encryption - AWS offers many services and features that help you protect your data in the cloud. You can protect your data by encrypting it in transit and at rest. You can use AWS CloudTrail to log API and user activity, including who made calls, what was called, and from where. You can also use AWS Identity and Access Management (IAM) to control who can access or edit your data, and advanced managed security services such as Amazon Macie, which assists in discovering and securing personal data stored in Amazon S3.

Which features are included in the AWS Business Support Plan? (Choose TWO)

Access to the Infrastructure Event Management (IEM) feature for an additional fee AND 24x7 access to customer service - All AWS customers - including Business support plan subscribers - have 24x7 access to customer service. The Business support plan also provides access to Infrastructure Event Management for an additional fee. AWS Infrastructure Event Management is a structured program available to Enterprise Support customers (and Business Support customers for an additional fee) that helps customers plan for large-scale events such as product or application launches, infrastructure migrations, and marketing events.

What does the AWS Business support plan provide? (Choose two)

Access to the full set of Trusted Advisor checks AND AWS Support API - AWS recommends Business Support if you have production workloads on AWS and want 24x7 access to technical support and architectural guidance in the context of your specific use cases. In addition to what is available with Basic Support, Business Support provides: 1- AWS Trusted Advisor 2- AWS Personal Health Dashboard 3- Enhanced Technical Support 4- Architecture Support 5- AWS Support API 6- Third-Party Software Support 7- Access to Proactive Support Programs

You have multiple standalone accounts and you want to decrease your AWS charges. What should you do?

Add the accounts to an organization and use Consolidated Billing - Consolidated billing has the following benefits: 1- One bill - You get one bill for multiple accounts. 2- Easy tracking - You can track each account's charges, and download the cost data in .csv format. 3- Combined usage - If you have multiple standalone accounts, your charges might decrease if you add the accounts to an organization. AWS combines usage from all accounts in the organization to qualify you for volume pricing discounts. 4- No extra fee - Consolidated billing is offered at no additional cost.

Which of the following is an example of a SaaS?

All of the infrastructure, operating system and software are provided by a third party. - This is an example of SaaS. With SaaS, the entire infrastructure, operating system, and software are being provided by a third party.

In this scenario, we have a NACL with the following rules: Rule #1 - Allow SSH, Rule #2 - Allow HTTP, Rule #3 - Deny All, Rule #4 - Allow All. Which is true based on the rules within this NACL?

All traffic except SSH and HTTP will be denied - Based on the NACL rules, all traffic besides SSH and HTTP will be denied. NACL rules are evaluated in order by rule number, and the first matching rule is applied. SSH and HTTP are allowed by the earlier rules, so they take precedence; the deny-all rule then blocks everything else, and the final allow-all rule is never reached.
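
For illustration, a hedged boto3 sketch of how rules like these could be created; the NACL ID and rule numbers are placeholders. Rules are evaluated from the lowest rule number upward and the first match wins, which is why the allow rules for SSH and HTTP take effect before the deny-all rule.

```python
import boto3

ec2 = boto3.client("ec2")
nacl_id = "acl-0123456789abcdef0"  # hypothetical NACL ID

# Rules are evaluated from the lowest rule number up; the first match wins.
rules = [
    (100, "allow", 22),   # Rule #1: allow SSH
    (200, "allow", 80),   # Rule #2: allow HTTP
]
for rule_number, action, port in rules:
    ec2.create_network_acl_entry(
        NetworkAclId=nacl_id,
        RuleNumber=rule_number,
        Protocol="6",              # TCP
        RuleAction=action,
        Egress=False,
        CidrBlock="0.0.0.0/0",
        PortRange={"From": port, "To": port},
    )

# Rule #3: deny everything else (matches only if no earlier rule matched).
ec2.create_network_acl_entry(
    NetworkAclId=nacl_id,
    RuleNumber=300,
    Protocol="-1",                 # all protocols
    RuleAction="deny",
    Egress=False,
    CidrBlock="0.0.0.0/0",
)
```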

You decide to buy a reserved instance for a term of one year. Which option provides the largest total discount?

All up-front reservation - There are three payment options available when purchasing reserved instances: 1- No up-front 2- Partial up-front 3- All up-front. The general rule is: "the more you spend upfront, the more discounts you get." With the All Upfront option, you pay for the entire Reserved Instance term with one upfront payment. This option provides you with the largest discount compared to On-Demand instance pricing.

What is the AWS service that provides five times the performance of a standard MySQL database?

Amazon Aurora - Amazon Aurora is a fully-managed, MySQL and PostgreSQL-compatible relational database engine. It combines the speed and reliability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. It delivers up to five times the throughput of MySQL and up to three times the throughput of PostgreSQL without requiring changes to most of your existing applications.

Which of the below options is true of Amazon Cloud Directory?

Amazon Cloud Directory allows organization of hierarchies of data across multiple dimensions - Amazon Cloud Directory is a cloud-native, highly scalable, high-performance directory service that provides web-based directories to make it easy for you to organize and manage all your application resources such as users, groups, locations, devices, and policies, and the rich relationships between them. Unlike existing traditional directory systems, Cloud Directory does not limit organizing directory objects in a single fixed hierarchy. In Cloud Directory, you can organize directory objects into multiple hierarchies to support multiple organizational pivots and relationships across directory information. For example, a directory of users may provide a hierarchical view based on reporting structure, location, and project affiliation. Similarly, a directory of devices may have multiple hierarchical views based on its manufacturer, current owner, and physical location. With Cloud Directory, you can create directories for a variety of use cases, such as organizational charts, course catalogs, and device registries.

Which of the following services allows you to install and run your custom relational database software?

Amazon EC2 - If you need full control over your database, AWS provides a wide range of Amazon EC2 instances (with different hardware characteristics) on which you can install and run your custom relational database software. Please note that if you use EC2 instead of RDS to run your relational database, you will be responsible for managing everything related to this database.

Which service helps you by collecting important metrics from a running EC2 instance?

Amazon CloudWatch - Amazon CloudWatch is a monitoring service for AWS cloud resources and the applications you run on AWS. You can use Amazon CloudWatch to collect and track metrics, collect and monitor log files, set alarms, and automatically react to changes in your AWS resources.

You are working as a site reliability engineer (SRE), which of the following services helps monitor your applications?

Amazon CloudWatch - Amazon CloudWatch is a monitoring service for AWS cloud resources and the applications you run on AWS. You can use Amazon CloudWatch to collect and track metrics, collect and monitor log files, set alarms, and automatically react to changes in your AWS resources.

Which AWS Service enables customers to set up an AWS billing alarm to inform them when their spending exceeds a certain threshold?

Amazon CloudWatch - Amazon CloudWatch is the AWS service that allows you to monitor the usage of your AWS resources. CloudWatch collects metrics, and allows you to create alarms based on those metrics. You can use CloudWatch to monitor your estimated AWS charges. When you enable the monitoring of estimated charges for your AWS account, the estimated charges are calculated and sent several times daily to CloudWatch as metric data. Billing metric data includes the estimated charges for every service in AWS that you use, in addition to the estimated overall total of your AWS charges. The alarm triggers when your account billing exceeds the threshold you specify.
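
A minimal sketch of such a billing alarm with boto3, assuming a placeholder SNS topic and a $100 threshold; billing metrics are published to CloudWatch in the us-east-1 Region.

```python
import boto3

# Billing metrics are published to CloudWatch in the us-east-1 Region.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="monthly-spend-over-100-usd",        # hypothetical name
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,                                  # evaluate every 6 hours
    EvaluationPeriods=1,
    Threshold=100.0,                               # alarm above $100 of estimated charges
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:billing-alerts"],  # placeholder topic
)
```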

A company is developing a mobile application and wants to allow users to use their Amazon, Apple, Facebook, or Google identities to authenticate to the application. Which AWS Service should the company use for this purpose?

Amazon Cognito - Amazon Cognito lets customers add user sign-up, sign-in, and access control to their web and mobile apps quickly and easily. Amazon Cognito scales to millions of users and supports sign-in with social identity providers, such as Facebook, Google, and Amazon, and enterprise identity providers via SAML 2.0.

You are facing a lot of problems with your current contact center. Which service provides a cloud-based contact center that can deliver a better service for your customers?

Amazon Connect - Amazon Connect is a cloud-based contact center solution. Amazon Connect makes it easy to set up and manage a customer contact center and provide reliable customer engagement at any scale. You can set up a contact center in just a few steps, add agents from anywhere, and start to engage with your customers right away. Amazon Connect provides rich metrics and real-time reporting that allow you to optimize contact routing. You can also resolve customer issues more efficiently by putting customers in touch with the right agents. Amazon Connect integrates with your existing systems and business applications to provide visibility and insight into all of your customer interactions.

Which of the following are examples of AWS-managed databases? (Choose two)

Amazon DocumentDB AND Amazon RDS for MySQL - AWS-managed databases are a database-as-a-service offering from AWS where AWS manages the underlying hardware, storage, networking, backups, and patching. Users of AWS-managed databases simply connect to the database endpoint, and do not have to concern themselves with any aspects of managing the database. Examples of AWS-managed databases include: Amazon RDS (Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle Database, and SQL Server), Amazon DocumentDB, Amazon Redshift, and Amazon DynamoDB.

Which of the following AWS offerings are serverless services? (Choose TWO)

Amazon DynamoDB & AWS Lambda - AWS Lambda is a compute service that lets customers run code without provisioning or managing servers. AWS Lambda executes code only when needed and scales automatically, from a few requests per day to thousands per second. With DynamoDB, there are no servers to provision, patch, or manage and no software to install, maintain, or operate. DynamoDB automatically scales tables up and down to adjust for capacity and maintain performance.
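
A minimal sketch of the two services working together, assuming a hypothetical DynamoDB table named orders with a string partition key id; Lambda runs the handler only when invoked and DynamoDB stores the item, with no servers to manage on either side.

```python
import json
import boto3

dynamodb = boto3.client("dynamodb")

def lambda_handler(event, context):
    # Persist the incoming record in a hypothetical 'orders' table.
    dynamodb.put_item(
        TableName="orders",
        Item={
            "id": {"S": event["order_id"]},
            "total": {"N": str(event["total"])},
        },
    )
    return {"statusCode": 200, "body": json.dumps({"stored": event["order_id"]})}
```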

If you want to run an ever-changing database in an Amazon EC2 Instance, what is the most recommended Amazon storage option?

Amazon EBS - Amazon EBS provides durable, block-level storage volumes that you can attach to a running instance. You can use Amazon EBS as a primary storage device for data that requires frequent and granular updates. Amazon EBS is the recommended storage option when you run a database on an instance.

What is the primary storage service used by Amazon RDS DB instances?

Amazon EBS - DB instances for Amazon RDS for MySQL, MariaDB, PostgreSQL, Oracle, and Microsoft SQL Server use Amazon Elastic Block Store (Amazon EBS) volumes for database and log storage.

A company is planning to migrate a database with high read/write activity to AWS. What is the best storage option to use?

Amazon EBS - Databases require high read/write performance, and as such Amazon EBS is the correct answer. Amazon EBS volumes offer consistent and low-latency performance compared to other storage options. You can use EBS volumes as primary storage for data that requires frequent updates, such as the system drive for an instance or storage for a database application.

Both AWS and traditional IT distributors provide a wide range of virtual servers to meet their customers' requirements. What is the name of these virtual servers in AWS?

Amazon EC2 Instances - Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers. Amazon EC2's simple web service interface allows you to obtain and configure capacity with minimal friction. It provides you with complete control of your computing resources and lets you run on Amazon's proven computing environment. Amazon EC2 reduces the time required to obtain and boot new server instances to minutes, allowing you to quickly scale capacity, both up and down, as your computing requirements change. Amazon EC2 changes the economics of computing by allowing you to pay only for capacity that you actually use. Amazon EC2 provides developers the tools to build failure resilient applications and isolate them from common failure scenarios.

Which of the following is not one of the seven core checks performed by AWS Trusted Advisor?

Amazon EC2 Reserved Instances Optimization / Unassociated Elastic IP Addresses - Neither of these is one of the seven core Trusted Advisor checks available to all customers; both are cost optimization checks that require a Business or Enterprise support plan.

Amazon EBS volumes can be attached to which of the following compute resources?

Amazon EC2 instances - An Amazon EBS volume is a durable, block-level storage device that can be attached to a single EC2 instance. You can use EBS volumes as primary storage for data that requires frequent updates, such as the system drive for an instance or storage for database software.

Which of the following is NOT a characteristic of Amazon Elastic Compute Cloud (Amazon EC2)?

Amazon EC2 is considered a Serverless Web Service - "Amazon EC2 is considered a Serverless Web Service" is not a characteristic of Amazon EC2 and thus is the correct choice. Serverless allows customers to shift more operational responsibilities to AWS. Serverless allows customers to build and run applications and services without thinking about servers. Serverless eliminates infrastructure management tasks such as server or cluster provisioning, patching, operating system maintenance, and capacity provisioning. Amazon EC2 is not a serverless service. EC2 instances are virtual servers in the cloud. Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the Amazon Web Services (AWS) cloud. Using Amazon EC2 eliminates your need to invest in hardware upfront, so you can develop and deploy applications faster. You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. Amazon EC2 enables you to scale up or down to handle changes in requirements or spikes in popularity, reducing your need to forecast traffic.

What factors determine how you are charged when using AWS Lambda? (Choose two)

Number of requests to your functions AND Compute time consumed - With AWS Lambda, you pay only for what you use. You are charged based on the number of requests for your functions and the time it takes for your code to execute.
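
A worked sketch of how the two billing factors combine; the per-request and per-GB-second rates below are illustrative assumptions, not current AWS prices.

```python
# Illustrative rates only -- check the AWS Lambda pricing page for current figures.
PRICE_PER_MILLION_REQUESTS = 0.20       # USD, assumed rate
PRICE_PER_GB_SECOND = 0.0000166667      # USD, assumed rate

requests_per_month = 5_000_000
avg_duration_seconds = 0.3
memory_gb = 0.5                          # 512 MB function

request_cost = (requests_per_month / 1_000_000) * PRICE_PER_MILLION_REQUESTS
compute_cost = requests_per_month * avg_duration_seconds * memory_gb * PRICE_PER_GB_SECOND

print(f"Request charge: ${request_cost:.2f}")   # $1.00
print(f"Compute charge: ${compute_cost:.2f}")   # ~$12.50
print(f"Total:          ${request_cost + compute_cost:.2f}")
```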

You are running a financial services web application on AWS. The application uses a MySQL database to store the data. Which of the following AWS services would improve the performance of your application by allowing you to retrieve information from fast in-memory caches?

Amazon ElastiCache - Amazon ElastiCache offers fully managed Redis and Memcached. Seamlessly deploy, operate, and scale popular open source compatible in-memory data stores. Build data-intensive apps or improve the performance of your existing apps by retrieving data from high throughput and low latency in-memory data stores. Amazon ElastiCache is a popular choice for Gaming, Ad-Tech, Financial Services, Healthcare, and IoT apps. The primary purpose of an in-memory data store is to provide ultrafast (submillisecond latency) and inexpensive access to copies of data. Querying a database is always slower and more expensive than locating a copy of that data in a cache. Some database queries are especially expensive to perform. An example is queries that involve joins across multiple tables or queries with intensive calculations. By caching (storing) such query results, you pay the price of the query only once. Then you can quickly retrieve the data multiple times without having to re-execute the query.

You have a real-time IoT application that requires sub-millisecond latency. Which of the following services would you use?

Amazon ElastiCache for Redis - Amazon ElastiCache for Redis is a blazing fast in-memory data store that provides sub-millisecond latency to power internet-scale real-time applications. Built on open-source Redis and compatible with the Redis APIs, ElastiCache for Redis works with your Redis clients and uses the open Redis data format to store your data. Your self-managed Redis applications can work seamlessly with ElastiCache for Redis without any code changes. ElastiCache for Redis combines the speed, simplicity, and versatility of open-source Redis with manageability, security, and scalability from Amazon to power the most demanding real-time applications in Gaming, Ad-Tech, E-Commerce, Healthcare, Financial Services, and IoT.

Which of the following is the most appropriate means for developers to store Docker container images in the AWS Cloud?

Amazon Elastic Container Registry (Amazon ECR) - Amazon Elastic Container Registry (Amazon ECR) is a fully managed Docker container registry that makes it easy for developers to store, manage, and deploy Docker container images. Amazon ECR is integrated with Amazon Elastic Container Service (Amazon ECS), simplifying your development to production workflow. Amazon ECR eliminates the need to operate your own container repositories or worry about scaling the underlying infrastructure. Amazon ECR hosts your images in a highly available and scalable architecture, allowing you to reliably deploy containers for your applications.

Amazon RDS supports multiple database engines to choose from. Which of the following is not one of them?

Any database engine other than the six listed here - Amazon Relational Database Service (Amazon RDS) is a managed service that makes it easy to set up, operate, and scale a relational database in the cloud. Amazon RDS is available on several database instance types - optimized for memory, performance or I/O - and provides you with six familiar database engines to choose from: Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle, and Microsoft SQL Server.

Which AWS Service offers a filesystem that can be mounted concurrently from multiple EC2 instances?

Amazon Elastic File System - Amazon Elastic File System (Amazon EFS) provides simple, scalable file storage for use with Amazon EC2 instances in the AWS Cloud. Amazon EFS is easy to use and offers a simple interface that allows you to create and configure file systems quickly and easily. With Amazon EFS, storage capacity is elastic, growing and shrinking automatically as you add and remove files, so your applications have the storage they need, when they need it. When mounted on Amazon EC2 instances, an Amazon EFS file system provides a standard file system interface and file system access semantics, allowing you to seamlessly integrate Amazon EFS with your existing applications and tools. Multiple EC2 instances can access an Amazon EFS file system at the same time, allowing Amazon EFS to provide a common data source for workloads and applications running on more than one EC2 instance.

A company needs to host a big data application on AWS. Which of the following AWS Storage services would they choose to automatically get high throughput to multiple compute nodes?

Amazon Elastic File System - Amazon Elastic File System (Amazon EFS) provides simple, scalable, elastic file storage for use with AWS Cloud services and on-premises resources. It is easy to use and offers a simple interface that allows you to create and configure file systems quickly and easily. Amazon EFS is built to elastically scale on demand without disrupting applications, growing and shrinking automatically as you add and remove files, so your applications have the storage they need, when they need it. Amazon EFS is designed to provide massively parallel shared access to thousands of Amazon EC2 instances, enabling your applications to achieve high levels of aggregate throughput and IOPS that scale as a file system grows, with consistent low latencies. As a regional service, Amazon EFS is designed for high availability and durability storing data redundantly across multiple Availability Zones. With these capabilities, Amazon EFS is well suited to support a broad spectrum of use cases, including web serving and content management, enterprise applications, media and entertainment processing workflows, home directories, database backups, developer tools, container storage, and big data analytics workloads.

You are working as a web app developer. You are currently facing issues in media playback for mobile devices. The problem is that the current format of your media does not support playback on mobile devices. Which of the following AWS services can help you in this regard?

Amazon Elastic Transcoder - Amazon Elastic Transcoder is media transcoding in the cloud. It is designed to be a highly scalable, easy-to-use, and cost-effective way for developers and businesses to convert (or transcode) media files from their source format into versions that will play back on devices like smartphones, tablets, and PCs.

A company has a large amount of data to be archived. What is the most cost-effective AWS storage service to use?

Amazon Glacier - Amazon Glacier is a secure, durable, and extremely low-cost cloud storage service for data archiving and long-term backup. It is designed to deliver 99.999999999% durability, and provides comprehensive security and compliance capabilities that can help meet even the most stringent regulatory requirements.

What considerations should be taken into account regarding storing data in Amazon Glacier?

Amazon Glacier doesn't provide immediate retrieval of data - Objects stored in Glacier take time to retrieve. You can pay for expedited retrieval, which takes a few minutes, or wait several hours for a standard retrieval.

Which of the following is NOT a factor when estimating the costs of Amazon EC2? (Choose TWO)

Number of security groups AND Number of Hosted Zones - Neither of these affects Amazon EC2 costs: security groups are free, and hosted zones are billed under Amazon Route 53, not EC2. EC2 cost estimates depend on factors such as the number and type of instances, the hours they run, the purchase option (On-Demand, Reserved, or Spot), load balancing, and data transfer.

What is the AWS service that performs automated network assessments of Amazon EC2 instances to check for vulnerabilities?

Amazon Inspector - Amazon Inspector is an automated security assessment service that helps you test the network accessibility of your Amazon EC2 instances and the security state of your applications running on the instances. Amazon Inspector allows you to create assessment templates to automate security vulnerability assessments throughout your development and deployment pipelines or for static production systems.

Which of the following AWS services can help you perform security analysis and regulatory compliance auditing? (Choose two)

Amazon Inspector AND AWS Config - With AWS Config, you can discover existing and deleted AWS resources, determine your overall compliance against rules, and dive into configuration details of a resource at any point in time. These capabilities enable compliance auditing, security analysis, resource change tracking, and troubleshooting. Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. Amazon Inspector automatically assesses applications for exposure, vulnerabilities, and deviations from best practices. This allows you to make security testing a more regular occurrence as part of development and IT operations. Additional information: One of the most important services that performs security analysis and compliance auditing is AWS CloudTrail. AWS CloudTrail simplifies your compliance audits by automatically recording and storing event logs for actions made within your AWS account. With AWS CloudTrail, you can discover and troubleshoot security and operational issues by capturing a comprehensive history of changes that occurred in your AWS account within a specified period of time.

What does AMI stand for?

Amazon Machine Image

Which database should you use if your application requires joins or complex transactions?

Amazon RDS - If your database's schema cannot be denormalized, and your application requires joins or complex transactions, consider using a relational database such as Amazon RDS.

Select the services that are server-based: (Choose two)

Amazon RDS AND Amazon EMR - Server-based services include: Amazon EC2, Amazon RDS, Amazon Redshift and Amazon EMR. Serverless services include: AWS Lambda, AWS Fargate, Amazon SNS, Amazon SQS and Amazon DynamoDB.

A customer is planning to migrate their Microsoft SQL Server databases to AWS. Which AWS Services can the customer use to run their Microsoft SQL Server database on AWS? (Choose TWO)

Amazon RDS AND Amazon Elastic Compute Cloud - Amazon Web Services offers the flexibility to run Microsoft SQL Server as either a self-managed component inside of EC2, or as a managed service via Amazon RDS. Using SQL Server on Amazon EC2 gives customers complete control over the database, just like when it's installed on-premises. Amazon RDS is a fully managed service where AWS manages the maintenance, backups, and patching.

What is the AWS data warehouse service that supports a high level of query performance on large datasets?

Amazon Redshift - Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud. It allows you to run complex analytic queries against petabytes of structured data. You can start with just a few hundred gigabytes of data and scale to a petabyte or more. Amazon Redshift manages the work needed to set up, operate, and scale a data warehouse, from provisioning the infrastructure capacity to automating ongoing administrative tasks such as backups, and patching.

Which AWS service allows you to build a data warehouse in the cloud?

Amazon Redshift - Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud. You can start with just a few hundred gigabytes of data and scale to a petabyte or more. This enables you to use your data to acquire new insights for your business and customers.

A company is developing an application that will leverage facial recognition to automate photo tagging. Which AWS Service should the company use for facial recognition?

Amazon Rekognition - Amazon Rekognition is a service that makes it easy to add image analysis to your applications. With Rekognition, you can detect objects, scenes, and faces in images. You can also search and compare faces. The Amazon Rekognition API enables you to quickly add sophisticated deep-learning-based visual search and image classification to your applications.

Which AWS Service can perform health checks on Amazon EC2 instances?

Amazon Route 53 - Amazon Route 53 provides highly available and scalable Domain Name System (DNS), domain name registration, and health-checking web services. It is designed to give developers and businesses an extremely reliable and cost effective way to route end users to Internet applications by translating names like example.com into the numeric IP addresses, such as 192.0.2.1, that computers use to connect to each other. Route 53 also offers health checks to monitor the health and performance of your application as well as your web servers and other resources. Route 53 can be configured to route traffic only to the healthy endpoints to achieve greater levels of fault tolerance in your applications.

Which AWS Service can be used to register a new domain name?

Amazon Route 53 - Route53 allows for registration of new domain names in AWS. Amazon Route 53 is a global service that provides a highly available and scalable Domain Name System (DNS) in the Cloud. It is designed to give developers and businesses an extremely reliable and cost effective way to route end users to Internet applications by translating names like www.example.com into the numeric IP addresses like 192.0.2.1 that computers use to connect to each other. Amazon Route 53 is fully compliant with IPv6 as well. Route 53 also simplifies the hybrid cloud by providing recursive DNS for your Amazon VPC and on-premises networks over AWS Direct Connect or AWS VPN.

Which service can be used to route end users to the nearest datacenter to reduce latency?

Amazon Route 53 - When you use multiple AWS Regions, you can reduce latency for your users by serving their requests from the AWS Region for which network latency is lowest. Amazon Route 53 latency-based routing lets you use Domain Name System (DNS) to route user requests to the AWS Region that will give your users the fastest response.

Your company is designing a new application that will store and retrieve photos and videos. Which of the following services should you recommend to be used as the underlying storage mechanism?

Amazon S3 - Amazon S3 is object storage built to store and retrieve any amount of data from anywhere on the Internet. It's a simple storage service that offers an extremely durable, highly available, and infinitely scalable data storage infrastructure at very low costs. Common use cases of Amazon S3 include: media hosting, backup and storage, hosting static websites, delivering content globally, and hybrid cloud storage.

A customer is seeking to store objects in their AWS environment and to make those objects downloadable over the internet. Which AWS Service can be used to accomplish this?

Amazon S3 - Amazon S3 provides a simple web service interface that you can use to store and retrieve any amount of data, any time, from anywhere on the internet. Amazon S3 assigns a URL for each object you upload. URLs are used to download the objects you want at any time. Amazon S3 is the only AWS service that provides object level storage.

In order to keep your data safe, you need to take a backup of your database regularly. What is the most cost-effective storage option that provides immediate retrieval of your backups?

Amazon S3 - Database backup is an important operation to consider for any database system. Taking backups not only allows the possibility to restore upon database failure but also enables recovery from data corruption. Amazon S3 provides highly durable and reliable storage for database backups while reducing costs. Data stored in Amazon S3 can be retrieved immediately when needed.

Which AWS Service offers volume discounts based on usage?

Amazon S3 - Some AWS services are priced in tiers, which specify unit costs for defined amounts of AWS usage. As your usage increases, your usage crosses thresholds into new pricing tiers that specify lower unit costs for additional usage in a month. For example, the more Amazon S3 capacity a customer uses, the lower the cost per unit volume. The current S3 pricing for the us-east-1 region is: 1st tier: $0.023 per GB / month for the first 50 TB stored 2nd tier: $0.022 per GB / month for the next 450 TB stored 3rd tier: $0.021 per GB / month for all storage consumed above 500 TB.
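
A worked sketch of the tiered pricing quoted above, using those us-east-1 rates; the 600 TB usage figure is just an example to show the later terabytes landing in cheaper tiers.

```python
# Tier rates as quoted above for us-east-1 (USD per GB-month).
TIERS = [
    (50 * 1024, 0.023),       # first 50 TB
    (450 * 1024, 0.022),      # next 450 TB
    (float("inf"), 0.021),    # everything above 500 TB
]

def monthly_s3_storage_cost(total_gb):
    cost, remaining = 0.0, total_gb
    for tier_gb, rate in TIERS:
        billed = min(remaining, tier_gb)
        cost += billed * rate
        remaining -= billed
        if remaining <= 0:
            break
    return cost

# Example: 600 TB stored for one month -- the last 100 TB is billed at the cheapest tier.
print(f"${monthly_s3_storage_cost(600 * 1024):,.2f}")   # roughly $13,465.60
```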

Which of the following AWS services scale automatically without your intervention? (Choose two)

Amazon S3 AND AWS Lambda - Amazon S3 and Amazon EFS are storage services that scale automatically in storage capacity without any intervention to meet increased demand. Also, AWS Lambda dynamically scales function execution in response to increased traffic.

A hospital needs to store medical records for a minimum period of 10 years. The records being stored will only need to be recalled if there is a legal or audit need, which is expected to be extremely infrequent. Which AWS Service offers the most cost-effective method for storing the records?

Amazon S3 Glacier - Amazon S3 Glacier is an extremely low-cost storage service that provides secure, durable, and flexible storage for data backup and archival. With Amazon S3 Glacier, customers can reliably store their data for as little as $0.004 per gigabyte per month. Amazon S3 Glacier enables customers to offload the administrative burdens of operating and scaling storage to AWS, so that they don't have to worry about capacity planning, hardware provisioning, data replication, hardware failure detection and repair, or time-consuming hardware migrations.

An organization has a legacy application designed using monolithic-based architecture. Which AWS Service can be used to decouple the components of the application?

Amazon SQS - A monolithic application is designed to be self-contained; components of the application are interconnected and interdependent rather than loosely coupled as is the case with Microservices applications. With monolithic architectures, all processes are tightly-coupled and run as a single service. This means that if one process of the application experiences a spike in demand, the entire architecture must be scaled. Adding or improving a monolithic application's features becomes more complex as the code base grows. This complexity limits experimentation and makes it difficult to implement new ideas. Monolithic architectures add risk for application availability because many dependent and tightly coupled processes increase the impact of a single process failure. With a microservices architecture, an application is built as loosely-coupled components that run each application process as a service. These services communicate via a well-defined interface using lightweight APIs. Services are built for business capabilities and each service performs a single function. Because they are independently run, each service can be updated, deployed, and scaled to meet demand for specific functions of an application. Microservices architectures make applications easier to scale and faster to develop, enabling innovation and accelerating time-to-market for new features. Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. Amazon SQS offers a reliable, highly-scalable hosted queue for storing messages as they travel between applications or microservices. It moves data between distributed application components and helps you decouple these components.
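
A minimal boto3 sketch of two decoupled components, assuming a placeholder queue URL; the producer only needs the queue, not the consumer, which is what loosens the coupling between them.

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/111122223333/orders-queue"  # placeholder

# Producer component: hands work to the queue and moves on.
sqs.send_message(QueueUrl=queue_url, MessageBody='{"order_id": "42"}')

# Consumer component: polls the queue independently, at its own pace.
messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10)
for message in messages.get("Messages", []):
    print("processing:", message["Body"])
    # Delete the message once it has been handled successfully.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```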

Which AWS service can be used to store and reliably deliver messages across distributed systems?

Amazon Simple Queue Service - Amazon SQS is a highly reliable, scalable message queuing service that enables asynchronous message-based communication between distributed components of an application. Using SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available.

A company is building an online cloud storage platform. They need a storage service that can scale capacity automatically, while minimizing cost. Which AWS storage service should the company use to meet these requirements?

Amazon Simple Storage Service - Amazon S3 is a storage service offered by AWS that offers highly redundant object storage to AWS customers. Amazon S3 allows customers to effectively store and retrieve any amount of data from anywhere. Amazon S3 offers an extremely durable, highly available, and infinitely scalable data storage infrastructure at very low costs.

Which of the following AWS services is designed with native Multi-AZ fault tolerance in mind? (Choose two)

Amazon Simple Storage Service AND Amazon DynamoDB - The Multi-AZ principle involves deploying an AWS resource in multiple Availability Zones to achieve high availability for that resource. DynamoDB automatically spreads the data and traffic for your tables over a sufficient number of servers to handle your throughput and storage requirements, while maintaining consistent and fast performance. All of your data is stored on solid-state disks (SSDs) and is automatically replicated across multiple Availability Zones in an AWS Region, providing built-in fault tolerance in the event of a server failure or Availability Zone outage. Amazon S3 provides durable infrastructure to store important data and is designed for durability of 99.999999999% of objects. Data in all Amazon S3 storage classes is redundantly stored across multiple Availability Zones (except S3 One Zone-IA).

Which AWS Service creates a virtual network in AWS?

Amazon VPC - Amazon Virtual Private Cloud (Amazon VPC) is the service that allows a customer to create a virtual network for their resources in an isolated section of the AWS cloud.

Which of the following is true regarding the AWS availability zones and edge locations?

An AWS Availability Zone is an isolated location within an AWS Region, however edge locations are located in multiple cities worldwide - In AWS, each Region has multiple, isolated locations known as Availability Zones. Availability Zones consist of one or more discrete data centers, each with redundant power, networking, and connectivity, housed in separate facilities. Edge locations may or may not exist within a region. They are located in most major cities around the world. Edge locations are specifically used by CloudFront (CDN) to distribute content to global users with low latency.

Which statement best describes the concept of an AWS region?

An AWS Region is a geographical location with a collection of Availability Zones - An AWS Region is a physical location in the world. Each region has multiple, isolated locations known as Availability Zones. Availability Zones consist of one or more discrete data centers, each with redundant power, networking, and connectivity. These Availability Zones offer you the ability to operate production applications and databases that are more highly available, fault tolerant, and scalable than would be possible to operate out of a single data center. Also, each AWS Region is designed to be completely isolated from the other AWS Regions. This achieves the greatest possible fault tolerance and stability.

What is AWS Lambda?

An AWS Service that allows customers to run code without provisioning or managing servers - AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume - there is no charge when your code is not running. With Lambda, you can run code for virtually any type of application or backend service - all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability.

What are the main differences between an IAM user and an IAM role? (Choose two)

An IAM user is uniquely associated with only one person, however a role is intended to be assumable by anyone who needs it AND An IAM user has permanent credentials associated with it, however a role has temporary credentials associated with it - An IAM role is similar to a user, in that it is an AWS identity with permission policies that determine what the identity can and cannot do in AWS. However, instead of being uniquely associated with one person, a role is intended to be assumable by anyone who needs it. Also, a role does not have standard long-term credentials (password or access keys) associated with it. Instead, if a user assumes a role, temporary security credentials are created dynamically and provided to the user.
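
A sketch of the temporary-credential behaviour described above, assuming a hypothetical role ARN that the caller is permitted to assume; the credentials returned by STS expire automatically, unlike an IAM user's long-term access keys.

```python
import boto3

sts = boto3.client("sts")

# Assume a hypothetical role; anyone whose policy allows sts:AssumeRole on it can do this.
response = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/backup-operator",  # placeholder ARN
    RoleSessionName="nightly-backup",
)

creds = response["Credentials"]  # temporary: access key, secret key, session token, expiry
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print("Temporary credentials expire at:", creds["Expiration"])
```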

You have just set up your AWS environment and have created six IAM user accounts for the DevOps team. What is the AWS recommendation when granting permissions to those IAM accounts?

Apply the principle of least privilege - The principle of least privilege (PoLP, also known as the principle of minimal privilege or the principle of least authority) requires that in a particular abstraction layer of a computing environment, every module (such as a process, a user, or a program, depending on the subject) must be able to access only the information and resources that are necessary for its legitimate purpose. For example, a user account for the sole purpose of creating backups does not need to install software: hence, it has rights only to run backup and backup-related applications. Any other privileges, such as installing new software, are blocked.
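
As a sketch of least privilege in practice, a hypothetical policy that only lets a backup job write objects under one S3 prefix and nothing else; the bucket and policy names are placeholders.

```python
import json
import boto3

iam = boto3.client("iam")

# Grant only what the backup job needs: put objects under one prefix of one bucket.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::example-backup-bucket/backups/*",  # placeholder bucket
        }
    ],
}

iam.create_policy(
    PolicyName="backup-writer-least-privilege",   # hypothetical name
    PolicyDocument=json.dumps(policy_document),
)
```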

Your CTO has asked you to contact the AWS support using the chat feature to ask for guidance related to EBS. However, when you open the AWS support center you can't see a way to contact support via Chat. What should you do?

At a minimum, upgrade to Business support plan - Chat access to AWS Support Engineers is available at the Business and Enterprise level plans only.

Your company experiences fluctuations in traffic patterns to their e-commerce website when running flash sales. What service can help your company dynamically match the required compute capacity to handle spikes in traffic during flash sales?

Auto Scaling - AWS Auto Scaling monitors your applications and automatically adjusts capacity to maintain steady, predictable performance at the lowest possible cost. Using AWS Auto Scaling, you maintain optimal application performance and availability, even when workloads are periodic, unpredictable, or continuously changing. When demand spikes, AWS Auto Scaling automatically increases the compute capacity, so you maintain performance. When demand subsides, AWS Auto Scaling automatically decreases the compute capacity, so you pay only for the resources you actually need.

Your application requirements for CPU and RAM change rapidly these days. Which service can be used to dynamically adjust those resources based on demand?

Auto Scaling - The AWS Auto Scaling service allows you to automatically provision new resources to meet demand and maintain performance. When demand decreases Auto Scaling shuts down unused resources to reduce costs.

What are the advantages of using Auto Scaling Groups for EC2 instances?

Auto Scaling Groups scale EC2 instances in multiple Availability Zones to increase application availability and fault tolerance - Amazon EC2 Auto Scaling offers the following benefits: 1- Better fault tolerance. Amazon EC2 Auto Scaling can detect when an instance is unhealthy, terminate it, and launch an instance to replace it. Also, Amazon EC2 Auto Scaling enables you to take advantage of the safety and reliability of geographic redundancy by spanning Auto Scaling groups across multiple Availability Zones within a Region. When one Availability Zone becomes unhealthy or unavailable, Auto Scaling launches new instances in an unaffected Availability Zone. When the unhealthy Availability Zone returns to a healthy state, Auto Scaling automatically redistributes the application instances evenly across all of the designated Availability Zones. 2- Better availability. Amazon EC2 Auto Scaling helps ensure that your application always has the right amount of capacity to handle the current traffic demand. 3- Better cost management. Amazon EC2 Auto Scaling can dynamically increase and decrease capacity as needed. Because you pay for the EC2 instances you use, you save money by launching instances when they are needed and terminating them when they aren't.

Which of the following AWS services are free to use? (Choose two)

Auto-scaling AND CloudFormation - The AWS Auto Scaling service itself is free to use, you only pay for the resources that Auto-scaling provisions on your behalf (e.g. scaling EC2 capacity up). Additional information: AWS Auto Scaling is a service that can help you optimize your utilization and cost efficiencies when consuming AWS services so you only pay for the resources you actually need. When demand drops, AWS Auto Scaling will automatically remove any excess resource capacity so you avoid overspending. When demand increases, AWS Auto Scaling will automatically add capacity to maintain performance. AWS CloudFormation is available at no additional charge, and you pay only for the AWS resources needed to run your applications. Additional information: AWS CloudFormation is a service that gives developers and businesses an easy way to create a collection of related AWS resources and provision them in an orderly and predictable fashion.

What are some key benefits of using AWS CloudFormation? (Choose two)

Automates the provisioning and updating of your infrastructure in a safe and controlled manner AND Allows you to model your entire infrastructure in a text file - The benefits of using AWS CloudFormation include: 1- CloudFormation allows you to model your entire infrastructure in a text file. This template becomes the single source of truth for your infrastructure. This helps you to standardize infrastructure components used across your organization, enabling configuration compliance and faster troubleshooting. 2- AWS CloudFormation provisions your resources in a safe, repeatable manner, allowing you to build and rebuild your infrastructure and applications, without having to perform manual actions or write custom scripts. CloudFormation takes care of determining the right operations to perform when managing your stack, and rolls back changes automatically if errors are detected. 3- Codifying your infrastructure allows you to treat your infrastructure as just code. You can author it with any code editor, check it into a version control system, and review the files with team members before deploying into production. 4- CloudFormation allows you to model and provision, in an automated and secure manner, all the resources needed for your applications across all regions and accounts.
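
A minimal sketch of both benefits with boto3: the infrastructure (a single S3 bucket here) is modelled in a text template and provisioned in one call; the bucket and stack names are placeholders.

```python
import boto3

# The entire (tiny) infrastructure is modelled in this text template.
template_body = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  ReportsBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: example-reports-bucket-111122223333   # placeholder, must be globally unique
"""

cloudformation = boto3.client("cloudformation")
cloudformation.create_stack(
    StackName="reports-storage",       # hypothetical stack name
    TemplateBody=template_body,
)
```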

How are S3 storage classes rated?

Availability & Durability - Each S3 storage class is rated on its availability and durability.

Which of the following AWS support plans provides access to only the seven core Trusted Advisor checks?

Basic & Developer - Basic and Developer support plans provide access to only 7 core Trusted Advisor checks and guidance to provision your resources following best practices to increase performance and improve security. Business and Enterprise level Support Plans provide access to a full set of Trusted Advisor checks.

Which of the following AWS accounts do not have 24/7 access to Cloud Support Engineers?

Basic and Developer - Basic and Developer accounts do not have 24/7 access to Cloud Support Engineers.

Which of the following types of AWS accounts have access to AWS Personal Health Dashboard?

Basic, Developer, Business, and Enterprise - All AWS accounts have access to the AWS Personal Health Dashboard.

Which of the following types of AWS accounts have access to AWS Trusted Advisor?

Basic, Developer, Business, and Enterprise - All AWS accounts have access to AWS Trusted Advisor.

Where can AWS customers find their historical billing information?

Billing and Cost Management console - To view your AWS bill, open the "Bills" pane of the Billing and Cost Management console, and then choose the month you want to view from the drop-down menu.

Which design principles relate to performance efficiency in AWS? (Choose TWO)

Build multi-region architectures to better serve global customers AND Use serverless architectures - There are five design principles for performance efficiency in the cloud: 1- Democratize advanced technologies 2- Go global in minutes 3- Use serverless architectures 4- Experiment more often 5- Mechanical sympathy

Which of the following are examples of the customer's responsibility to implement "security in the cloud"? (Choose TWO)

Building a schema for an application AND File system encryption - "Security in the Cloud" refers to the Customer's responsibility in the Shared Responsibility Model. Customers are responsible for items such as building application schema, monitoring server and application performance, configuring security groups and network ACLs, and encrypting their data. "Security of the Cloud" refers to the AWS' responsibility in the Shared Responsibility Model. AWS is responsible for items such as the physical security of the DC (data center), creating hypervisors, replacement of old disk drives, and patch management of the infrastructure. NOTE: For "Patch Management", AWS is responsible for patching the underlying hosts and fixing flaws within the infrastructure, but customers are responsible for patching their guest OS and applications.

What are the access time options provided by Amazon Glacier that keep costs low yet suitable for varying retrieval needs? (Choose two)

Bulk AND Expedited - To keep costs low yet suitable for varying retrieval needs, Amazon Glacier provides three options for access to archives that span a few minutes to several hours: (Access option : Data access time) 1- Expedited : 1-5 minutes 2- Standard : 3-5 hours 3- Bulk : 5-12 hours
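
A sketch of requesting a restore at a specific retrieval tier with boto3, assuming a placeholder bucket and key whose object is stored in the Glacier storage class.

```python
import boto3

s3 = boto3.client("s3")

# Ask S3 Glacier to stage a temporary copy of the archived object.
# Tier can be 'Expedited' (1-5 minutes), 'Standard' (3-5 hours) or 'Bulk' (5-12 hours).
s3.restore_object(
    Bucket="example-archive-bucket",          # placeholder
    Key="records/2015/patient-001.pdf",       # placeholder
    RestoreRequest={
        "Days": 2,                            # keep the restored copy available for 2 days
        "GlacierJobParameters": {"Tier": "Bulk"},
    },
)
```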

What is the minimum level of AWS support that provides 24x7 access to technical support engineers via phone and chat?

Business Support - Each of the Business and Enterprise support plans provides 24x7 access to technical support engineers via phone, email, and chat. The Business Support Plan is less expensive than the Enterprise Support Plan. Therefore, the correct answer is Business.

A customer spent a lot of time configuring a newly deployed Amazon EC2 instance. After the workload increases, the customer decides to provision another EC2 instance with an identical configuration. How can the customer achieve this?

By creating an AMI from the old instance and launching a new instance from it - An Amazon Machine Image (AMI) provides the information required to launch an instance, which is a virtual server in the cloud. You must specify an AMI when you launch an instance, and you can launch as many instances from the AMI as you need. You can also launch instances from as many different AMIs as you need.
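
A boto3 sketch of that workflow, assuming a placeholder instance ID and instance type; the waiter simply blocks until the new AMI is ready before an identical instance is launched from it.

```python
import boto3

ec2 = boto3.client("ec2")

# Capture the configured instance as a reusable template (AMI).
image = ec2.create_image(
    InstanceId="i-0123456789abcdef0",          # placeholder: the carefully configured instance
    Name="webserver-golden-image",
)

# Wait until the AMI is ready, then launch an identical instance from it.
ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])
ec2.run_instances(
    ImageId=image["ImageId"],
    InstanceType="t3.micro",                   # assumed instance type
    MinCount=1,
    MaxCount=1,
)
```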

How can you increase your application's fault-tolerance?

By deploying your application across multiple Availability Zones - The fault tolerance of an application involves its ability to recover gracefully from failures. Deploying the application resources across multiple availability zones will guarantee that even if one availability zone goes down, there will still be other availability zones to run the application efficiently.

How do ELBs improve the reliability of your application?

By ensuring that only healthy targets receive traffic - The reliability term encompasses the ability of a system to recover from infrastructure or service disruptions, and dynamically acquire computing resources to meet demand. ELBs continuously perform health checks on the registered targets (such as Amazon EC2 instances) and only routes traffic to the healthy ones. This increases the fault tolerance of your application and makes it more reliable.

What does the Amazon CloudFront service provide? (Choose two)

Caches common responses AND Delivers content to end users with low latency - Amazon CloudFront employs a global network of edge locations and regional edge caches that cache copies of your content close to your end-users. Amazon CloudFront ensures that end-user requests are serviced by the closest edge location. As a result, requests travel a short distance, improving performance for your end-users. To service requests for files not cached at the edge locations and the regional edge caches, Amazon CloudFront maintains persistent connections with your origin servers so that those files can be fetched from the origin servers as quickly as possible.

Amazon EC2 instances are conceptually very similar to traditional servers. However, using Amazon EC2 server instances in the same manner as traditional hardware server instances is only a starting point. What are the main benefits of using the AWS EC2 instances instead of traditional servers? (Choose two)

Can be scaled manually in a shorter period of time AND Improves Fault-Tolerance - "Improves Fault-Tolerance" is a correct answer. AWS has a unique set of services that you can use to build fault-tolerant applications in the cloud. For example, you can get improved fault tolerance by placing your compute instances behind an Elastic Load Balancer, as it can automatically balance traffic across multiple instances and multiple Availability Zones and ensure that only healthy Amazon EC2 instances receive traffic.

What are the characteristics of the AWS Cloud? Please select the best answer from the choices below.

Cost-efficient, flexible, fault-tolerant, and highly available - The AWS cloud is cost-efficient, flexible, fault-tolerant, and highly available. There are many characteristics to the AWS cloud and all are beneficial.

The AWS account administrator of your company has been fired. With the permissions granted to him as an administrator, he was able to create multiple IAM user accounts and access keys. Additionally, you are not sure whether he has access to the AWS root account or not. What should you do immediately to protect your AWS infrastructure? (Choose two)

Change the user name and the password and create MFA for the root account AND Put IP restriction on all Users' accounts - To protect your AWS infrastructure in this situation you should lock down your root user and all accounts that the administrator had access to. Here are some ways to do that: 1- Change the user name and the password of the root user account and of all the IAM accounts the administrator had access to. 2- Rotate (change) all access keys for those accounts. 3- Enable MFA on those accounts. 4- Put IP restrictions on all users' accounts.

What should you do if you see resources, which you don't remember creating, in the AWS Management Console? (Choose TWO)

Change your AWS root account password and the passwords of any IAM users AND Open an investigation and delete any potentially compromised IAM users - If you suspect that your account has been compromised, or if you have received a notification from AWS that the account has been compromised, perform the following tasks: 1- Change your AWS root account password and the passwords of any IAM users. 2- Delete or rotate all root and AWS Identity and Access Management (IAM) access keys. 3- Delete any potentially compromised IAM users. 4- Delete any resources on your account you didn't create, such as EC2 instances and AMIs, EBS volumes and snapshots, and IAM users. 5- Respond to any notifications you received from AWS Support through the AWS Support Center.

Which of the following actions may reduce Amazon EBS costs? (Choose two)

Changing the type of the volume AND Deleting unnecessary snapshots - With Amazon EBS, it's important to keep in mind that you are paying for provisioned capacity and performance, even if the volume is unattached or has very low write activity. To optimize storage performance and costs for Amazon EBS, monitor volumes periodically to identify ones that are unattached or appear to be underutilized or overutilized, and adjust provisioning to match actual usage. When you want to reduce the costs of Amazon EBS, consider the following: 1- Delete unattached Amazon EBS volumes 2- Resize or change the EBS volume type 3- Delete stale Amazon EBS snapshots.
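
A sketch that surfaces the two kinds of waste mentioned above, unattached volumes and old snapshots, assuming snapshots older than 90 days count as stale; it only reports candidates rather than deleting anything.

```python
from datetime import datetime, timedelta, timezone
import boto3

ec2 = boto3.client("ec2")

# Volumes in the 'available' state are provisioned (and billed) but attached to nothing.
unattached = ec2.describe_volumes(Filters=[{"Name": "status", "Values": ["available"]}])
for volume in unattached["Volumes"]:
    print("Unattached volume:", volume["VolumeId"], volume["Size"], "GiB")

# Treat snapshots older than 90 days as candidates for review (assumed threshold).
cutoff = datetime.now(timezone.utc) - timedelta(days=90)
snapshots = ec2.describe_snapshots(OwnerIds=["self"])
for snapshot in snapshots["Snapshots"]:
    if snapshot["StartTime"] < cutoff:
        print("Stale snapshot:", snapshot["SnapshotId"], snapshot["StartTime"].date())
```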

Which of the following Cloud Computing deployment models eliminates the need to run and maintain physical data centers?

Cloud - There are three Cloud Computing deployment models: 1- Cloud: A cloud-based application is fully deployed in the cloud and all parts of the application run in the cloud. This deployment model eliminates the need to run and maintain physical data centers. 2- Hybrid: A hybrid deployment is a way to connect infrastructure and applications between cloud-based resources and existing resources that are not located in the cloud (on-premises data centers). 3- On-premises: Deploying resources on-premises, using virtualization and resource management tools, is sometimes called "private cloud". On-premises deployment does not provide many of the benefits of cloud computing but is sometimes sought for its ability to provide dedicated resources.

Ensuring compliance is a key priority for most businesses. Which of the following AWS services will help them achieve this?

CloudTrail - AWS CloudTrail is designed to log all actions taken in your AWS account. This provides a great resource for governance, compliance, and risk auditing.

In this scenario, we want to set up a service that will allow for auditing IAM users. Which of the following services would be most appropriate?

CloudTrail - CloudTrail is a service that allows for auditing of IAM users within an AWS account.

A company has discovered that multiple S3 buckets were deleted, but it is unclear who deleted the buckets. Which of the following can the company use to determine the identity that deleted the buckets?

CloudTrail logs - AWS CloudTrail is a web service that records all AWS API calls for your account and delivers log files to you. The recorded information includes the identity of the API caller (who deleted the buckets in our case), the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by the AWS service. With CloudTrail, you can get a history of AWS API calls for your account, including API calls made using the AWS Management Console, AWS SDKs, command line tools, and higher-level AWS services (such as AWS CloudFormation). The AWS API call history produced by CloudTrail enables security analysis, resource change tracking, and compliance auditing.
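
A sketch of querying CloudTrail event history for the DeleteBucket API call with boto3; each returned event records the identity that made the call.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Search recent management events for DeleteBucket API calls.
events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "DeleteBucket"}],
    MaxResults=50,
)

for event in events["Events"]:
    # Username identifies the IAM identity that made the call.
    print(event["EventTime"], event.get("Username"), event["EventName"])
```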

Which of the following enables you to monitor and collect log files from your Amazon EC2 instances?

CloudWatch Logs - You can use Amazon CloudWatch Logs to monitor, store, and access your log files from Amazon Elastic Compute Cloud (Amazon EC2) instances, AWS CloudTrail, Route 53, and other sources. CloudWatch Logs enables you to centralize the logs from all of your systems, applications, and AWS services that you use, in a single, highly scalable service. You can then easily view them, search them for specific error codes or patterns, filter them based on specific fields, or archive them securely for future analysis. By default, logs are kept indefinitely and never expire. You can adjust the retention policy for each log group, keeping indefinite retention or choosing a retention period between one day and 10 years.
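
A short sketch of searching an EC2 application log group with boto3; the log group name and filter pattern are hypothetical placeholders.

```python
# Sketch: searching a CloudWatch Logs log group for events containing "ERROR".
import boto3

logs = boto3.client("logs")

response = logs.filter_log_events(
    logGroupName="/my-app/ec2/web-server",   # hypothetical log group name
    filterPattern="ERROR",                   # only return events matching this pattern
    limit=25,
)

for event in response["events"]:
    print(event["timestamp"], event["message"])

# Retention can also be adjusted per log group, e.g. keep events for 30 days:
logs.put_retention_policy(logGroupName="/my-app/ec2/web-server", retentionInDays=30)
```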

Why would an organization decide to use AWS over an on-premises data center? (Choose two)

Cost Savings AND Elastic resources - AWS continues to lower the cost of cloud computing for its customers. AWS recently lowered prices again for compute, storage, caching, and database services for all customers, making everything from web apps to big data on AWS even more cost-effective and widening the TCO gap with traditional infrastructure. Elasticity is a system's ability to monitor user demand and automatically increase and decrease deployed resources accordingly. Elasticity is one of the most important advantages of AWS. The purpose of elasticity is to match the resources allocated with actual amount of resources needed at any given point in time. This ensures that you are only paying for the resources you actually need.

What are the benefits of AWS Organizations? (Choose two)

Consolidate billing across multiple AWS accounts AND Control access to AWS services - AWS Organizations has five main benefits: 1) Centrally manage access policies across multiple AWS accounts. 2) Automate AWS account creation and management. 3) Control access to AWS services. 4) Consolidate billing across multiple AWS accounts. 5) Configure AWS services across multiple accounts. ** Control access to AWS services: AWS Organizations allows you to restrict what services and actions are allowed in your accounts. You can use Service Control Policies (SCPs) to apply permission guardrails on AWS Identity and Access Management (IAM) users and roles. For example, you can apply an SCP that restricts users in accounts in your organization from launching any resources in regions that you do not explicitly allow. ** Consolidate billing across multiple AWS accounts: You can use AWS Organizations to set up a single payment method for all the AWS accounts in your organization through consolidated billing. With consolidated billing, you can see a combined view of charges incurred by all your accounts, as well as take advantage of pricing benefits from aggregated usage, such as volume discounts for Amazon EC2 and Amazon S3.

What is CloudFront?

Content Delivery Network - CloudFront is a CDN (Content Delivery Network). It caches content locally to users to assure a better user experience.

​ What are AWS shared controls?

Controls that apply to both the infrastructure layer and customer layers - Shared Controls are controls which apply to both the infrastructure layer and customer layers, but in completely separate contexts or perspectives. In a shared control, AWS provides the requirements for the infrastructure and the customer must provide their own control implementation within their use of AWS services. Examples include: ** Patch Management - AWS is responsible for patching the underlying hosts and fixing flaws within the infrastructure, but customers are responsible for patching their guest OS and applications. ** Configuration Management - AWS maintains the configuration of its infrastructure devices, but a customer is responsible for configuring their own guest operating systems, databases, and applications. ** Awareness & Training - AWS trains AWS employees, but a customer must train their own employees.

What is the main purpose of attaching security groups to an Amazon RDS instance?

Controls what IP addresses or EC2 instances can connect to your database instance - In Amazon RDS, security groups are used to control which IP addresses or EC2 instances can connect to your databases on a DB instance. When you first create a DB instance, its firewall prevents any database access except through rules specified by an associated security group.

An organization starts to perform a TCO analysis to be able to make an informed decision about migrating to AWS. Which of the following should be taken into consideration when performing this analysis? (Choose TWO)

Cooling and power consumption AND Labor and IT costs - Weighing the financial considerations of owning and operating a data center facility versus employing a cloud infrastructure requires detailed and careful analysis. In practice, it is not as simple as just measuring potential hardware expense alongside utility pricing for compute and storage resources. The Total Cost of Ownership (TCO) is often the financial metric used to estimate and compare direct and indirect costs of a product or a service. Cooling and power consumption, data center space and real estate, and labor and IT costs are examples of the indirect costs of a physical data center and should be included in a TCO analysis.

What is the main purpose of using Amazon SWF?

Coordinate tasks across distributed application components - Amazon Simple Workflow Service (SWF) is a web service that makes it easy to coordinate work across distributed application components. Amazon SWF enables applications for a range of use cases, including media processing, web application back-ends, business process workflows, and analytics pipelines, to be designed as a coordination of tasks. Tasks represent invocations of various processing steps in an application which can be performed by executable code, web service calls, human actions, and scripts. The coordination of tasks involves managing execution dependencies, scheduling, and concurrency in accordance with the logical flow of the application. With Amazon SWF, developers get full control over implementing processing steps and coordinating the tasks that drive them, without worrying about underlying complexities such as tracking their progress and keeping their state.

What does AWS offer to secure your network?

Customer-controlled encryption in transit - Data in transit (sometimes called data in motion) is a term used to describe data that is in transit through networks. Encrypting data in transit adds more security to your network by ensuring that data is unreadable as it travels from one service to another or from one network to another. The AWS customer is responsible for encrypting their data, whether in transit or at rest. Note: Data at rest is data that is not actively moving from device to device or network to network, such as data stored on disks in AWS data centers.

Which DynamoDB feature can be used to reduce the latency of requests to a database from milliseconds to microseconds?

DAX - Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for DynamoDB that delivers performance improvements from milliseconds to microseconds - even at millions of requests per second. DAX adds in-memory acceleration to your DynamoDB tables without requiring you to manage cache invalidation, data population, or cluster management.

What is Route 53?

DNS service - Route 53 is a DNS service.

Which of the following has the greatest impact on cost? (Choose two)

Data Transfer Out AND Compute - The factors that have the greatest impact on cost include: Compute, Storage and Data Transfer Out. Their pricing differs according to the service you use.

Which of the following factors should be considered when determining the region in which AWS Resources will be deployed? (Choose TWO)

Data sovereignty AND cost - Per AWS Best Practices, proximity to your end users, regulatory compliance, data residency constraints, and cost are all factors you have to consider when choosing the most suitable AWS Region.

Which of the following is not an available EC2 Instance Type?

Database Optimized - Database Optimized is not an EC2 instance type.

What best describes penetration testing?

Testing your network to find security vulnerabilities that an attacker could exploit - Penetration testing is the practice of testing a network or web application to find security vulnerabilities that an attacker could exploit.

A company needs to migrate their website from on-premises to AWS. Security is a major concern for them, so they need to host their website on hardware that is NOT shared with other AWS customers. Which of the following EC2 instance options meets this requirement?

Dedicated instances - Dedicated Instances are Amazon EC2 instances that run in a virtual private cloud (VPC) on hardware that's dedicated to a single customer. Dedicated Instances that belong to different AWS accounts are physically isolated at the hardware level. In addition, Dedicated Instances that belong to AWS accounts that are linked to a single payer account are also physically isolated at the hardware level. However, Dedicated Instances may share hardware with other instances from the same AWS account that are not Dedicated Instances.

AWS recommends some practices to help organizations avoid unexpected charges on their bill. Which of the following is NOT one of these practices?

Deleting unused AutoScaling launch configuration - "Deleting unused AutoScaling launch configuration" will not help, and thus is the correct choice. The AutoScaling launch configuration does not incur any charges. Thus, it will not make any difference whether it is deleted or not. AWS will charge the user once the AWS resource is allocated (even if it is not used). Thus, it is advised that once the user's work is completed he should: 1- Delete all Elastic Load Balancers. 2- Terminate all unused EC2 instances. 3- Delete the attached EBS volumes that he doesn't need. 4- Release any unused Elastic IPs.

You have developed a web application targeting a global audience. Which of the following will help you achieve the highest redundancy and fault tolerance?

Deploy the application in Multiple AZs in many AWS regions - Since you are targeting a global audience then you should use many AWS regions around the world. The deployment option that gives you the highest redundancy is to deploy the application in multiple AZs within many AWS regions. This redundancy will also increase the fault tolerance of the application because if there is an outage in an AZ, the other AZs can handle requests.

Which of the below options is a best practice for making your application on AWS highly available?

Deploy the application to at least two Availability Zones - Each AWS Region contains multiple distinct locations, or Availability Zones. Each Availability Zone is engineered to be independent from failures in other Availability Zones. Deploying your application to multiple Availability Zones will increase the availability of your application. If one availability zone encounters an issue, the other availability zones can still serve your application.

Which of the following type of account receives only email access to Cloud Support Associates during business hours?

Developer - Developer accounts only have email access to Cloud Support Associates during business hours.

What are the key design principles of the AWS Cloud? (Choose two)

Disposable resources instead of fixed servers AND Loose coupling - The AWS Cloud includes many design patterns and architectural options that you can apply to a wide variety of use cases. Some key design principles of the AWS Cloud include scalability, disposable resources, automation, loose coupling, managed services instead of servers, and flexible data storage options.

Which of the below options are use cases of the Amazon Route 53 service? (Choose TWO)

Domain Registration AND DNS configuration and management - Amazon Route 53 is AWS's domain and DNS management service. You can use it to register new domain names, as well as manage your Domain Name System (DNS) in the Cloud. Amazon Route 53 also simplifies the hybrid cloud by providing recursive DNS for your Amazon VPC and on-premises networks over AWS Direct Connect or AWS VPN.

Which of the following strategies helps protect your AWS root account?

Don't create an access key unless you need to - Anyone who has root user access keys for your AWS account has unrestricted access to all the resources in your account, including billing information. If you don't already have an access key for your AWS account root user, don't create one unless you absolutely need to. If you do have an access key for your AWS account root user, delete it. If you must keep it, rotate (change) the access key regularly.

Amazon EBS volumes are automatically replicated within the same availability zone. What is the benefit of this?

Durability - Durability refers to the ability of a system to assure data is stored and data remains consistent in the system as long as it is not changed by legitimate access. This means that data should not become corrupted or disappear due to a system malfunction. Durability is used to measure the likelihood of data loss. For example, assume you have confidential data stored in your Laptop. If you make a copy of it and store it in a secure place, you have just improved the durability of that data. It is much less likely that all copies will be simultaneously destroyed. Amazon EBS volume data is replicated across multiple servers in an Availability Zone to prevent the loss of data from the failure of any single component. The replication of data makes EBS volumes 20 times more durable than typical commodity disk drives, which fail with an AFR (annual failure rate) of around 4%. For example, if you have 1,000 EBS volumes running for 1 year, you should expect 1 to 2 will have a failure.

A company's AWS workflow requires that it periodically perform large-scale image and video processing jobs. The customer is seeking to minimize cost and has stated that the amount of time it takes to process these jobs is not critical, but that cost minimization is the most important factor in designing the solution. Which EC2 instance class is best suited for this processing?

EC2 Spot Instances - A Spot Instance is an unused EC2 instance that is available for less than the On-Demand price. Because Spot Instances enable customers to request unused EC2 instances at steep discounts, customers can lower their Amazon EC2 costs significantly. Spot Instances run whenever capacity is available, and the maximum price per hour for the request exceeds the Spot price. The risk with Spot instances is that a running instance can be interrupted due to changes in demand and pricing for a specific class of Spot instances, as there is no guarantee of availability at any time. Spot Instances are well-suited for data analysis, batch jobs, background processing, and optional tasks, as well as for workloads that are not time critical.

AWS has created a large number of Edge Locations as part of its Global Infrastructure. Which of the following is NOT a benefit of using Edge Locations?

Edge locations are used by CloudFront to distribute traffic across multiple instances to reduce latency - The AWS Edge Locations are not used to distribute traffic. Edge Locations are used in conjunction with the Cloudfront service to cache common responses and deliver content to end users with low latency. The AWS service that is used to distribute load is the AWS Elastic Load Balancing (ELB) service.

Which services does AWS offer for free? (Choose TWO)

Elastic Beanstalk AND AWS IAM - AWS Identity and Access Management is a feature of your AWS account offered at no additional charge. You will be charged only for use of other AWS services by your Users. There is no additional charge for AWS Elastic Beanstalk. You pay for AWS resources (e.g. EC2 instances or S3 buckets) you create to store and run your application. You only pay for what you use, as you use it; there are no minimum fees and no upfront commitments.

What are the AWS services\features that can help you maintain a highly available and fault-tolerant architecture in AWS? (Choose two)

Elastic Load Balancer AND Amazon EC2 Auto Scaling - Amazon EC2 Auto Scaling is a fully managed service designed to launch or terminate Amazon EC2 instances automatically to help ensure you have the correct number of Amazon EC2 instances available to handle the load for your application. Amazon EC2 Auto Scaling helps you maintain application availability and fault tolerance through fleet management for EC2 instances, which detects and replaces unhealthy instances, and by scaling your Amazon EC2 capacity automatically according to conditions you define. You can use Amazon EC2 Auto Scaling to automatically increase the number of Amazon EC2 instances during demand spikes to maintain performance and decrease capacity during lulls to reduce costs. Elastic Load Balancing provides an effective way to increase the availability and fault tolerance of a system. To discover the availability of your EC2 instances, ELB periodically sends pings, attempts connections, or sends requests to test them. These tests are called health checks. The load balancer routes user requests only to the healthy instances. When the load balancer determines that an instance is unhealthy, it stops routing requests to that instance. The load balancer resumes routing requests to the instance when it has been restored to a healthy state.

Which of these cloud terminologies represents the benefit of the cloud to grow and shrink infrastructure based on demand?

Elasticity - Elasticity is the ability to grow and shrink infrastructure based on demand. Elasticity is a benefit of AWS.

How does Elasticity differ from Scalability?

Elasticity differs in its ability not only to scale out but also to shrink resources back down based on demand. - Elasticity differs from scalability in its ability to scale out and shrink back down on demand. Scalability is the capacity to scale up in size to meet demand. This provides the capacity to meet demand and decrease operational costs to only what is necessary.

Which of the following are advantages of using AWS as a cloud computing provider? (Choose two)

Eliminates the need to guess on infrastructure capacity needs AND Enables customers to trade their capital expenses for operational expenses - 1- Trade capital for variable expense 2- Benefit from massive economies of scale 3- Stop guessing capacity 4- Increase speed and agility 5- Stop spending money on running and maintaining data centers 6- Go global in minutes

Which of the following are part of the seven design principles for security in the cloud? (Choose two)

Enable real-time traceability AND Use IAM roles to grant temporary access instead of long-term credentials - There are seven design principles for security in the cloud: 1- Implement a strong identity foundation 2- Enable traceability 3- Apply security at all layers 4- Automate security best practices 5- Protect data in transit and at rest 6- Keep people away from data 7- Prepare for security events

Which of the following are use cases for Amazon EMR? (Choose two)

Enables you to analyze large datasets in a timely manner AND Enables you to easily run and scale Apache Spark, Hadoop, and other Big Data frameworks - Amazon EMR is a web service that enables businesses, researchers, data analysts, and developers to easily and cost-effectively process vast amounts of data. It utilizes a hosted Hadoop framework running on the web-scale infrastructure of Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Simple Storage Service (Amazon S3). Amazon EMR is ideal for problems that necessitate the fast and efficient processing of large amounts of data. EMR securely and reliably handles a broad set of big data use cases, including log analysis, web indexing, data transformations (ETL), machine learning, financial analysis, scientific simulation, and bioinformatics. Amazon EMR lets you focus on crunching or analyzing your data without having to worry about time-consuming set-up, management or tuning of Hadoop clusters or the compute capacity upon which they sit.

Which of the following activities may help reduce your AWS monthly costs?

Enabling Amazon EC2 Auto Scaling for all of your workloads - Amazon EC2 Auto Scaling monitors your applications and automatically adjusts capacity (up or down) to maintain steady, predictable performance at the lowest possible cost.

Which of the following can help secure your sensitive data in Amazon S3? (Choose two)

Encrypt the data prior to uploading it AND Enable S3 Encryption - Data protection refers to protecting data while in-transit (as it travels to and from Amazon S3) and at rest (while it is stored on disks in Amazon data centers). You can protect data in transit by using SSL or by using client-side encryption. Also, You have the following options of protecting data at rest in Amazon S3. 1- Use Server-Side Encryption - You request Amazon S3 to encrypt your object before saving it on disks in its data centers and decrypt it when you download the objects. 2- Use Client-Side Encryption - You can encrypt data client-side and upload the encrypted data to Amazon S3. In this case, you manage the encryption process, the encryption keys, and related tools.
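
A minimal sketch of the two options described above, using boto3; the bucket name, object key, and payload are hypothetical placeholders.

```python
# Sketch: protecting an S3 object at rest.
import boto3

s3 = boto3.client("s3")

# Option 1 - server-side encryption (SSE-S3): ask S3 to encrypt the object on its disks.
s3.put_object(
    Bucket="my-sensitive-data-bucket",        # hypothetical bucket
    Key="reports/q1.csv",                     # hypothetical key
    Body=b"account_id,balance\n1001,250.00\n",
    ServerSideEncryption="AES256",
)

# Option 2 - client-side encryption: encrypt the bytes yourself (for example with a
# library such as "cryptography") before uploading, and manage the keys yourself;
# S3 then only ever stores ciphertext.
```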

Which of the following AWS Support Plans gives you 24/7 access to Cloud Support Engineers via email & phone? (Choose two)

Enterprise AND Business - For Technical Support, each of the Business and the Enterprise support plans provides 24x7 phone, email, and chat access to Support Engineers.

Which support plan includes AWS Support Concierge Service?

Enterprise Support - The AWS Support Concierge Service is available only for the Enterprise plan subscribers.

Which AWS Support Plan gives customers access to a "Well-Architected Review" for business critical workloads?

Enterprise Support - The only AWS Support plan that gives customers access to a "Well-Architected Review" delivered by AWS Solution Architects is the Enterprise support plan. This review provides guidance and best practices to help customers design reliable, secure, efficient, and cost-effective systems in the cloud.

According to the AWS shared responsibility model, what are the controls that customers fully inherit from AWS? (Choose two)

Environmental controls AND Data center security controls - AWS is responsible for physical controls and environmental controls. Customers inherit these controls from AWS.

What are the capabilities of AWS X-Ray? (Choose TWO)

Facilitates tracking of user requests to identify application issues AND Helps improve application performance - Benefits of AWS X-Ray include: 1- Review request behavior: AWS X-Ray traces user requests as they travel through your entire application. It aggregates the data generated by the individual services and resources that make up your application, providing you with an end-to-end view of how your application is performing. 2- Discover application issues: With AWS X-Ray, you can glean insights into how your application is performing and discover root causes. With X-Ray's tracing features, you can follow request paths to pinpoint where in your application performance issues are occurring and what is causing them. 3- Improve application performance: AWS X-Ray helps you identify performance bottlenecks. X-Ray's service maps let you see relationships between services and resources in your application in real time. You can easily detect where high latencies are occurring, visualize node and edge latency distribution for services, and then drill down into the specific services and paths impacting application performance.

A company is using EC2 Instances to run their e-commerce site on the AWS platform. If the site becomes unavailable, the company will lose a significant amount of money for each minute the site is unavailable. Which design principle should the company use to minimize the risk of an outage?

Fault Tolerance - A system that is designed to be fault tolerant can recover gracefully from EC2 instance failures. Amazon Web Services gives customers access to a vast amount of IT infrastructure-compute, storage, and communications-that they can allocate automatically (or nearly automatically) to account for almost any kind of failure.

Which feature enables users to sign in to their AWS accounts with their existing corporate credentials?

Federation - With Federation, you can use single sign-on (SSO) to access your AWS accounts using credentials from your corporate directory. Federation uses open standards, such as Security Assertion Markup Language 2.0 (SAML), to exchange identity and security information between an identity provider (IdP) and an application. AWS offers multiple options for federating your identities in AWS: 1- AWS Identity and Access Management (IAM): You can use AWS Identity and Access Management (IAM) to enable users to sign in to their AWS accounts with their existing corporate credentials. 2- AWS Directory Service: AWS Directory Service for Microsoft Active Directory, also known as AWS Microsoft AD, uses secure Windows trusts to enable users to sign in to the AWS Management Console, AWS Command Line Interface (CLI), and Windows applications running on AWS using their existing corporate Microsoft Active Directory credentials.

Which of these components does a security group represent for an EC2 instance?

Firewall - A security group acts as a firewall at the instance level.

Which of the following was not a business challenge before the cloud?

Fully-customizable infrastructure with on-premise data centers - Before the cloud, on-premises data centers were fully customizable. This was not a business challenge. This was a benefit of an on-premises data center.

What is the name of the DynamoDB replication capability that provides fast read \ write performance for globally deployed applications?

Global Tables - DynamoDB global tables are ideal for massively scaled applications with globally dispersed users. Global tables provide automatic replication to AWS Regions world-wide. They enable you to deliver low-latency data access to your users no matter where they are located.

Your application has recently experienced significant global growth, and international users are complaining of high latency. What is the AWS characteristic that can help improve your international users' experience?

Global reach - With AWS, you can deploy your application in multiple regions around the world. The user will be redirected to the Region that provides the lowest possible latency and the highest performance. You can also use the CloudFront service that uses edge locations (which are located in most of the major cities across the world) to deliver content with low latency and high performance to your global users.

What does AWS Cost Explorer provide to help manage your AWS spend?

Highly accurate cost forecasts for up to 12 months ahead - AWS Cost Explorer has an easy-to-use interface that lets you visualize, understand, and manage your AWS costs and usage over time. Cost Explorer's cost forecast capabilities use machine learning to learn each customer's historical spend patterns and use that information to forecast expected costs. Cost Explorer's forecasting enables you to get a better idea of what your costs and usage may look like in the future, so that you can plan ahead. Forecasting capabilities have been enhanced to support twelve month forecasts (previously forecasts were limited to three months) for multiple cost metrics, including unblended and amortized costs.

What does the term "Economies of scale" mean?

It means that AWS will continuously lower costs as it grows - By using cloud computing, you can achieve a lower variable cost than you would get on your own. Because usage from hundreds of thousands of customers is aggregated in the cloud, providers such as AWS can achieve higher economies of scale, which translates into lower pay as-you-go prices.

Which of the following are use cases for Amazon S3? (Choose two)

Hosting static websites AND A media store for the CloudFront service - You can host a static website on Amazon Simple Storage Service (Amazon S3). On a static website, individual webpages include static content. They might also contain client-side scripts. To host a static website, you configure an Amazon S3 bucket for website hosting, allow public read access, and then upload your website content to the bucket. By contrast, a dynamic website relies on server-side processing, including server-side scripts such as PHP, JSP, or ASP.NET. Amazon S3 does not support server-side scripting. Amazon Web Services (AWS) also has resources for hosting dynamic websites such as Amazon EC2. Amazon S3 is an excellent storage facility for your media assets. It is infinitely scalable, has built-in redundancy, and is available to you on a pay-as-you-go basis. For example, if you want to deliver or stream video files to your global users, all you need to do is to put your content in an S3 bucket and create a CloudFront distribution that points to the bucket. Your user's video player will use CloudFront URLs to request the video file. The request will be directed to the best edge location, based on the user's location. The Amazon Cloudfront Content Delivery Network (CDN) will serve the video from its cache, fetching it from the S3 bucket if it has not already been cached. The CDN caches content at the edge locations for consistent, low-latency, high-throughput video delivery.

Which of the following is a cloud computing deployment model that connects infrastructure and applications between cloud-based resources and existing resources not located in the cloud?

Hybrid - A hybrid deployment is a way to connect infrastructure and applications between cloud-based resources and existing resources that are not located in the cloud. The most common method of hybrid deployment is between the cloud and existing on-premises infrastructure to extend, and grow, an organization's infrastructure into the cloud while connecting cloud resources to the internal system.

Which of these types of a cloud is an example of using on-premises data centers in combination with AWS?

Hybrid Cloud - An example of a Hybrid Cloud is using a combination of an on-premises data center and AWS.

What can you use to assign permissions to an IAM user?

IAM Policy - The policy is a JSON document that consists of: 1- Actions: what actions you will allow. Each AWS service has its own set of actions. 2- Resources: which resources you allow the action on. 3- Effect: what will be the effect when the user requests access—either allow or deny. 4- Conditions - which conditions must be present for the policy to take effect. For example, you might allow access only to the specific S3 buckets if the user is connecting from a specific IP range or has used multi-factor authentication at login. Note: Permissions are granted to IAM identities (users, groups, and roles) to determine whether they are authorized to perform an action or not.
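
A minimal sketch of a policy document containing the four elements listed above, created with boto3; the policy name, bucket ARN, and IP range are hypothetical placeholders.

```python
# Sketch: an identity-based IAM policy with Effect, Action, Resource, and Condition.
import json
import boto3

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",                               # allow or deny
            "Action": ["s3:GetObject", "s3:ListBucket"],     # which actions
            "Resource": [                                    # on which resources
                "arn:aws:s3:::example-reports-bucket",
                "arn:aws:s3:::example-reports-bucket/*",
            ],
            "Condition": {                                   # only from this IP range
                "IpAddress": {"aws:SourceIp": "203.0.113.0/24"}
            },
        }
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="ExampleReadReportsPolicy",          # hypothetical policy name
    PolicyDocument=json.dumps(policy_document),
)
```

Once created, the policy can be attached to an IAM user, group, or role to grant those permissions.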

Which IAM entity can best be used to grant temporary access to your AWS resources?

IAM Roles - An IAM role is an IAM entity that defines a set of permissions for making AWS service requests. IAM roles are not associated with a specific user or group. Instead, trusted entities assume roles, such as IAM users, applications, or AWS services such as EC2. An IAM role is similar to an IAM user, in that it is an AWS identity with permission policies that determine what the identity can and cannot do in AWS. However, instead of being uniquely associated with one person, a role is intended to be assumable by anyone who needs it. Also, a role does not have standard long-term credentials such as a password or access keys associated with it. Instead, when you assume a role, it provides you with temporary security credentials for your role session. You can use roles to delegate access to users, applications, or services that don't normally have access to your AWS resources. For example, you might want to grant users in your AWS account access to resources they don't usually have, or grant users in one AWS account access to resources in another account. Or you might want to allow a mobile app to use AWS resources, but not want to embed AWS keys within the app. Sometimes you want to give AWS access to users who already have identities defined outside of AWS, such as in your corporate directory. Or, you might want to grant access to your account to third parties so that they can perform an audit on your resources. For these scenarios, you can delegate temporary access to AWS resources using an IAM role.
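
A short sketch of obtaining temporary credentials by assuming a role through AWS STS; the role ARN and session name are hypothetical placeholders.

```python
# Sketch: assuming an IAM role to get short-lived credentials instead of long-term keys.
import boto3

sts = boto3.client("sts")

assumed = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/AuditorRole",  # hypothetical role
    RoleSessionName="audit-session",
)

creds = assumed["Credentials"]  # temporary AccessKeyId, SecretAccessKey, SessionToken

# Use the temporary credentials for subsequent calls; they expire automatically.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print([bucket["Name"] for bucket in s3.list_buckets()["Buckets"]])
```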

​Which of the following are types of AWS Identity and Access Management (IAM) identities? (Choose TWO)

IAM Roles AND IAM Users - Identities on AWS include users (or groups) and roles. Customers create these identities on AWS to manage access to AWS resources and determine the actions that each identity can perform on those resources.

What does Amazon ElastiCache provide? (Choose two)

Improves web applications' performance AND Provides an in-memory data store service - Amazon ElastiCache improves the performance of web applications by allowing you to retrieve information from a fast, managed, in-memory data store, instead of relying entirely on slower disk-based databases. Querying a database is always slower and more expensive than locating a copy of that data in a cache. By caching (storing) common database query results, you can quickly retrieve the data multiple times without having to re-execute the query.
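
A minimal sketch of the cache-aside pattern this describes, using a Redis client against an ElastiCache endpoint; the endpoint, key naming, and the query_database helper are hypothetical placeholders.

```python
# Sketch: check the in-memory cache first, fall back to the database on a miss.
import json
import redis

cache = redis.Redis(host="my-cluster.xxxxxx.cache.amazonaws.com", port=6379)  # placeholder

def get_product(product_id: str) -> dict:
    cached = cache.get(f"product:{product_id}")
    if cached is not None:
        return json.loads(cached)             # fast path: served from memory

    product = query_database(product_id)      # hypothetical slow, disk-based query
    cache.setex(f"product:{product_id}", 300, json.dumps(product))  # cache for 5 minutes
    return product
```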

Which of the following is NOT a factor when estimating the cost of Amazon CloudFront?

Inbound traffic - Amazon CloudFront charges are based on the data transfer out of AWS and requests used to deliver content to your customers. There are no upfront payments or fixed platform fees, no long-term commitments, no premiums for dynamic content, and no requirements for professional services to get started. There is no charge for data transferred from AWS services such as Amazon S3 or Elastic Load Balancing. When you begin to estimate the cost of Amazon CloudFront, consider the following: - Traffic distribution: Data transfer and request pricing varies across geographic regions, and pricing is based on the edge location through which your content is served. - Requests: The number and type of requests (HTTP or HTTPS) made and the geographic region in which the requests are made. - Data transfer out: The amount of data transferred out of your Amazon CloudFront edge locations.

Which of the following is a benefit of running an application in multiple Availability Zones?

Increases the availability of your application - Placing instances that run your application in multiple Availability Zones improves the fault tolerance of your application. If one Availability Zone experiences an outage, traffic is routed to another Availability Zone, and this will increase the availability of your application.

​ When running a workload in AWS, the customer is NOT responsible for: (Select two)

Infrastructure security AND Data center operations - AWS is responsible for the infrastructure security and all data center operations such as racking, stacking, and powering servers, so customers can focus on revenue generating activities rather than on IT infrastructure.

Which of these is not a general category of AWS services?

Internet

​ What is the benefit of using an API to access AWS Services?

It allows for programmatic management of AWS resources - The AWS Application Programming Interface (API) allows customers to work with various AWS services programmatically.

What is the main benefit of the AWS Storage Gateway service?

It allows one to integrate on premises IT environments with Cloud Storage - AWS Storage Gateway connects an on-premises software appliance with cloud-based storage to provide seamless integration with data security features between your on-premises IT environment and the AWS storage infrastructure.

Each AWS Region is composed of multiple Availability Zones. Which of the following best describes what an Availability Zone is?

It is a distinct location within a region that is insulated from failures in other Availability Zones - Availability Zones are distinct locations within a region that are insulated from failures in other Availability Zones.

Which of these is not a feature of AWS Organizations?

It provides the ability to centrally manage all AWS accounts from any account within the organization. - AWS Organizations does not provide the ability to centrally manage all other AWS accounts from any AWS account within the organization. AWS Organizations only allows one account to be the master account of an organization, and that account is where the entire organization is managed from.

What does AWS Service Catalog provide?

It simplifies organizing and governing commonly deployed IT services - AWS Service Catalog allows organizations to create and manage catalogs of IT services that are approved for use on AWS. These IT services can include everything from virtual machine images, servers, software, and databases to complete multi-tier application architectures. AWS Service Catalog allows you to centrally manage commonly deployed IT services, and helps you achieve consistent governance and meet your compliance requirements, while enabling users to quickly deploy only the approved IT services they need.

Which of the following are true regarding the languages that are supported on AWS Lambda? (Choose TWO)

Lambda natively supports a number of programming languages such as Node.js, Python, and Java AND Lambda can support any programming language using an API - AWS Lambda provides native runtimes for languages such as Node.js, Python, Java, Go, Ruby, and .NET, and any other programming language can be supported by building a custom runtime with the Lambda Runtime API.
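
A minimal sketch of a Python Lambda handler; the event fields are hypothetical. Lambda invokes the configured handler function with the event payload and a context object.

```python
# Sketch: the simplest possible Lambda function in Python.
import json

def lambda_handler(event, context):
    # "name" is a hypothetical field on the incoming event payload.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```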

A key practice when designing solutions on AWS is to minimize dependencies between components so that the failure of a single component does not impact other components. What is this practice called?

Loose coupling - The concept of loose coupling refers to breaking the application into components that perform aspects of a task independently of one another. Using this design concept minimizes the risk that a change or a failure in one component will impact other components.

What is the AWS IAM feature that provides an additional layer of security on top of user-name and password authentication?

MFA - AWS Multi-Factor Authentication (MFA) is a simple best practice that adds an extra layer of protection on top of your user name and password. With MFA enabled, when a user signs in to an AWS website, they will be prompted for their user name and password (the first factor—what they know), as well as for an authentication code from their AWS MFA device (the second factor—what they have). Taken together, these multiple factors provide increased security for your AWS account settings and resources.

For Amazon RDS databases, What does AWS perform on your behalf? (Choose two)

Managing the operating system AND Database setup - In relation to Amazon RDS databases: AWS is responsible for: 1- Managing the underlying infrastructure and foundation services. 2- Managing the operating system. 3- Database setup. 4- Patching and backups. The customer is still responsible for: 1- Protecting the data stored in his databases (through encryption and IAM access control). 2- Managing the database settings that are specific to his application. 3- Building the relational schema. 4- Network traffic protection.

Why does every AWS Region contain multiple Availability Zones?

Multiple Availability Zones allow you to build resilient and highly available architectures. - Resilience is the ability of an architecture to continue providing the same quality of service even if some of its resources become inaccessible. Deploying your resources across multiple Availability Zones offers you the ability to operate production applications and databases that are more resilient, highly available, and scalable than would be possible from a single data center.

Which of the following is used to control network traffic in AWS? (Choose two)

Network Access Control Lists (NACLs) AND Security Groups - You can control network traffic in AWS by configuring security groups, network access control lists, and route tables. 1- Security groups: Act as a firewall for associated Amazon EC2 instances, controlling both inbound and outbound traffic at the instance level. 2- Network access control lists (ACLs): Act as a firewall for associated subnets, controlling both inbound and outbound traffic at the subnet level. 3- Route Tables: A route table contains a set of rules, called routes, that are used to determine where network traffic is directed.
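
A short sketch of adding an inbound rule to a security group with boto3; the security group ID is a hypothetical placeholder.

```python
# Sketch: allow inbound HTTPS (TCP 443) from anywhere on a security group.
import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # hypothetical security group ID
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTPS from anywhere"}],
        }
    ],
)
```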

Which computer component provides a connection to the internet?

Network Card - A network card provides the internet connection for a computer.

What are the benefits of using DynamoDB? (Choose TWO)

Offers extremely low (single-digit millisecond) latency AND Automatically scales to meet required throughput capacity - Benefits of DynamoDB include: 1- Performance at scale: DynamoDB supports some of the world's largest scale applications by providing consistent, single-digit millisecond response times at any scale. You can build applications with virtually unlimited throughput and storage. 2- Serverless: With DynamoDB, there are no servers to provision, patch, or manage and no software to install, maintain, or operate. DynamoDB automatically scales tables up and down to adjust for capacity and maintain performance. 3- Highly available: Availability and fault tolerance are built in, eliminating the need to architect your applications for these capabilities.
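
A minimal sketch of single-item reads and writes with boto3; the table name and attributes are hypothetical placeholders, and the table is assumed to already exist with "user_id" as its partition key.

```python
# Sketch: key-value access against a DynamoDB table.
import boto3

table = boto3.resource("dynamodb").Table("Users")  # hypothetical table

# Write a single item.
table.put_item(Item={"user_id": "u-1001", "name": "Ada", "plan": "pro"})

# Read it back by its key; this kind of lookup is what DynamoDB serves in
# single-digit milliseconds at scale.
response = table.get_item(Key={"user_id": "u-1001"})
print(response.get("Item"))
```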

You are planning to launch an advertising campaign over the coming weekend to promote a new digital product. It is expected that there will be heavy spikes in load during the campaign period. You need additional compute resources to handle the additional load. What is the most cost-effective EC2 instance purchasing option for this job?

On-Demand Instances - On Demand instances would help provision any extra capacity that the application may need without any interruptions.

What is the most cost-effective purchasing option for running a set of EC2 instances that must always be available for a period of two months?

On-Demand Instances - The most cost-effective option for this scenario is to use On-Demand Instances.

In this scenario, we have an increase in traffic on a holiday sale. What EC2 purchasing option should we use to acquire the resources to handle the traffic?

On-Demand - Purchasing On-Demand instances would provide the resources to handle the short period of increased traffic, and the instances could then be terminated when traffic declines after the holiday sale.

Which of these is not a basic component of the cloud that is accessible to a cloud customer?

On-premise data center - On-premise data centers are not one of the basic components that make a cloud that is accessible to a cloud customer.

Which of the following is IAM commonly used to manage?

Password Policies, API Access keys, Users & Groups, Access Policies

​ For managed services like Amazon DynamoDB, which of the below is AWS responsible for? (Choose two)

Patching the database software AND Operating system maintenance - AWS has increased responsibilities for its managed services. Examples of managed services include Amazon DynamoDB, Amazon RDS, Amazon Redshift, Amazon Elastic MapReduce, and Amazon WorkSpaces. These services provide the scalability and flexibility of cloud-based resources with less operational overhead because AWS handle basic security tasks like guest operating system (OS) and database patching, installing antivirus software, backup, and disaster recovery. For most managed services, you only configure logical access controls and protect account credentials, while maintaining control and responsibility of any personal data.

One of the major advantages of using AWS is cost savings. Which of the below options is an example of the cost savings offered by AWS?

Per-second instance billing - With per-second billing, customers pay for only what they use. It takes the cost of unused minutes and seconds in an hour off of the bill, so they can focus on improving their applications instead of maximizing usage to the hour. Especially, if a customer manages instances running for irregular periods of time, such as dev/testing, data processing, analytics, batch processing, and gaming applications, can benefit. EC2 usage is billed on one-second increments, with a minimum of 60 seconds. Similarly, provisioned storage for EBS volumes will be billed per-second increments, with a 60 second minimum. Per-second billing also applies to several other AWS services, including Amazon RDS, Amazon EMR, and AWS Batch.

Under the Shared Responsibility Model, which of the following controls do customers fully inherit from AWS? (Choose two)

Physical controls AND Environmental controls - AWS is responsible for physical controls and environmental controls. Customers inherit these controls from AWS. As mentioned in the AWS Shared Responsibility Model page, Inherited Controls are controls which a customer fully inherits from AWS such as physical controls and environmental controls. As a customer deploying an application on AWS infrastructure, you inherit security controls pertaining to the AWS physical, environmental and media protection, and no longer need to provide a detailed description of how you comply with these control families.

What type of cloud is used by traditional on-premise methods?

Private Cloud - The private cloud is where an organization manages all of its infrastructure, operating system, and software. This is the traditional method of on-premise data centers.

Which of the below are responsibilities of the customer when using Amazon EC2? (Choose TWO)

Protecting sensitive data AND Installing and configuring third-party software - Amazon EC2 requires the customer to perform all of the necessary security configuration and management tasks. When customers deploy Amazon EC2 instances, they are responsible for management of custom Amazon Machine Images, management of the guest operating systems (including updates and security patches), securing application access and data, installing and configuring third-party applications or utilities, and the configuration of the AWS-provided firewall (called a security group) on each instance.

According to the AWS Shared responsibility model, which of the following are the responsibility of the customer? (Choose two)

Protecting the confidentiality of data in transit in Amazon S3 AND Patching applications installed on Amazon EC2 - Data protection refers to protecting data while in-transit (as it travels to and from Amazon S3) and at rest (while it is stored on disks in AWS data centers). The AWS customer is responsible for protecting their data either at rest or in transit for all services (including S3). Patch management is a shared control between AWS and the customer. AWS is responsible for patching the underlying hosts, updating the firmware, and fixing flaws within the infrastructure, but customers are responsible for patching their guest operating system and applications.

What are the benefits of the AWS Marketplace service? (Choose two)

Provides flexible pricing options that suit most customer needs AND Protect customers by performing periodic security checks on listed products - AWS Marketplace is a curated digital catalog that makes it easy for customers to find, buy, and immediately start using the software and services that customers need to build solutions and run their businesses. AWS Marketplace includes thousands of software listings from popular categories such as security, networking, storage, machine learning, business intelligence, database, and DevOps. AWS Marketplace is designed for Independent Software Vendors (ISVs), Value-Added Resellers (VARs), and Systems Integrators (SIs) who have software products they want to offer to customers in the cloud. Partners use AWS Marketplace to be up and running in days and offer their software products to customers around the world. AWS Marketplace provides value to buyers in several ways: 1- It simplifies software licensing and procurement with flexible pricing options and multiple deployment methods. Flexible pricing options include free trial, hourly, monthly, annual, multi-year, and BYOL. 2- Customers can quickly launch pre-configured software with just a few clicks, and choose software solutions in AMI and SaaS formats, as well as other formats. 3- It ensures that products are scanned periodically for known vulnerabilities, malware, default passwords, and other security-related concerns.

Which of the following are important design principles you should adopt when designing systems on AWS? (Choose two)

Remove single points of failure AND Automate wherever possible - A single point of failure (SPOF) is a part of a system that, if it fails, will stop the entire system from working. You can remove single points of failure by assuming everything will fail and designing your architecture to automatically detect and react to failures. For example, configuring and deploying an auto-scaling group of EC2 instances will ensure that if one or more of the instances crashes, Auto-scaling will automatically replace them with new instances. You should also introduce redundancy to remove single points of failure, by deploying your application across multiple Availability Zones. If one Availability Zone goes down for any reason, the other Availability Zones can serve requests.

An organization needs to build a financial application that requires support for ACID transactions. Which AWS database service is most appropriate in this case?

RDS - In computer science, ACID (Atomicity, Consistency, Isolation, and Durability) is a set of properties of database transactions intended to guarantee validity even in the event of errors, power failures, etc. Amazon RDS is a fully-managed relational database service. It is a highly available and highly consistent database that supports ACID transactions. Basically, a transaction is one or more add, update, delete, or modify changes to the database that must all be completed successfully, or none of the steps should be executed. Transactional databases are useful when data integrity is important. If one of the steps in the transaction fails, then the steps must be rolled back to the state before any change was made to the database. An example of when you would need a transaction is when you make a banking transaction to move money from one account to another. If you successfully remove money from account A, but fail to add money to account B, then the transaction fails and the transaction must be rolled back so that the money is not taken from account A.

Which of the following is a feature of Amazon RDS that performs automatic failover when the primary database fails to respond?

RDS Multi-AZ - When you enable Multi-AZ, Amazon Relational Database Service (Amazon RDS) maintains a redundant and consistent standby copy of your data. If you encounter problems with the primary copy, Amazon RDS automatically switches to the standby copy (or to a read replica in the case of Amazon Aurora) to provide continued availability to the data. The two copies are maintained in different Availability Zones (AZs), hence the name "Multi-AZ." Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. Having separate Availability Zones greatly reduces the likelihood that both copies will concurrently be affected by most types of disturbances.

Which of the following Amazon RDS features facilitates offloading of database read activity?

Read Replicas - You can reduce the load on your source DB Instance by routing read queries from your applications to one or more read replicas. Read replicas allow you to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads.

AWS provides excellent cloud-based disaster recovery services utilizing their multiple _____________ .

Regions - Businesses are using the AWS cloud to enable faster disaster recovery of their critical IT systems without incurring the infrastructure expense of a second physical site. The AWS cloud supports many popular disaster recovery (DR) architectures from "pilot light" environments that may be suitable for small customer workload data center failures to "hot standby" environments that enable rapid failover at scale. With data centers in Regions all around the world, AWS provides a set of cloud-based disaster recovery services that enable rapid recovery of your IT infrastructure and data.

Which of the following are examples of storage?

Remote Hard Disk, iCloud, Local Hard Disk, Google Drive, Dropbox

Which of the following is not a pricing factor for EC2?

Request pricing - Request pricing is not a pricing factor of EC2. It is a pricing factor of Amazon S3.

A company is seeking to better secure its AWS account from unauthorized access. Which of the below options can the customer use to achieve this goal?

Require Multi-Factor Authentication (MFA) for all IAM User access - For increased security, AWS recommends that you configure multi-factor authentication (MFA) to help protect your AWS resources. MFA adds extra security because it requires users to provide unique authentication from an AWS supported MFA mechanism in addition to their regular sign-in credentials when they access AWS websites or services. You can also enforce MFA authentication for AWS service APIs via AWS Identity and Access Management (IAM) policies. This provides an extra layer of security over powerful API operations that you designate, such as terminating Amazon EC2 instances or reading sensitive data stored in Amazon S3.
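
A minimal sketch of the IAM-policy-based MFA enforcement mentioned above, expressed as a policy document; the actions listed are illustrative only, and in practice such guardrails are usually scoped more carefully.

```python
# Sketch: deny selected sensitive actions unless the caller authenticated with MFA.
mfa_guardrail = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenySensitiveActionsWithoutMFA",
            "Effect": "Deny",
            "Action": ["ec2:TerminateInstances", "s3:GetObject"],  # illustrative actions
            "Resource": "*",
            "Condition": {
                # Matches only when the caller did NOT sign in with MFA.
                "BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}
            },
        }
    ],
}
```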

A company is migrating a web application to AWS. The application's compute capacity is continually utilized throughout the year. Which of the below options offers the company the most cost-effective solution?

Reserved Instances - Amazon EC2 Reserved Instances provide a significant discount compared to On-Demand pricing for customers that can commit to using EC2 over a 1- or 3-year term to reduce their total computing costs. Depending on the term of commitment and the amount paid up-front, discounts as high as 75% can be attained vs. On-Demand pricing.

What is one benefit and one drawback of buying a reserved EC2 instance? (Select TWO)

Reserved instances require at least a one-year pricing commitment AND Reserved instances provide a significant discount compared to on-demand instances - Amazon EC2 Reserved Instances (RI) provide a significant discount (up to 75%) compared to On-Demand pricing. Reserved instances can be purchased for a 1-year or 3-year term so you are committing to pay for them throughout this time period even if you don't use them.

What are the benefits of using the Amazon Relational Database Service? (Choose two)

Resizable compute capacity AND Lower administrative burden - Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable Compute (and\or Storage) capacity while automating time-consuming administration tasks such as hardware provisioning, operating system maintenance, database setup, patching and backups. It frees you to focus on your applications so you can give them the fast performance, high availability, security and compatibility they need.

What is the AWS' recommendation regarding access keys?

Rotate them regularly - AWS recommends that you change your own passwords and access keys regularly, and make sure that all IAM users in your account do as well. That way, if a password or access key is compromised without your knowledge, you limit how long the credentials can be used to access your resources.

Which AWS service allows you to configure a DNS record set?

Route 53 - Route 53 can be used to configure a DNS record set.
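
A short sketch of creating (or updating) a record in a Route 53 hosted zone with boto3; the hosted zone ID, domain name, and IP address are hypothetical placeholders.

```python
# Sketch: upsert an A record in a hosted zone.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",  # hypothetical hosted zone ID
    ChangeBatch={
        "Comment": "Point www at the web server",
        "Changes": [
            {
                "Action": "UPSERT",  # create the record, or update it if it already exists
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "A",
                    "TTL": 300,
                    "ResourceRecords": [{"Value": "198.51.100.10"}],
                },
            }
        ],
    },
)
```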

In this scenario, we want to purchase a domain name with AWS. Which AWS service would we use?

Route 53 - Route 53 is the AWS service used to purchase domains.

According to best practices, which of the below options is best suited for processing a large number of binary files?

Running EC2 instances in parallel - One of the core principles of the AWS Well-Architected Framework is that of scaling horizontally. Horizontal scaling means adding several smaller instances when workloads increase, instead of adding additional CPU, memory, or disk capacity to a single instance. In the context of this question, running several EC2 instances in parallel achieves horizontal scalability and is the correct answer. AWS recommends that customers scale resources horizontally to increase aggregate system availability. Replacing a large resource with multiple small resources in parallel will reduce the impact of a single failure on the overall system. For example, if a customer wants to convert a large number of binary files to text files or transcode a large number of video files to another format, it is recommended that they use multiple EC2 instances in parallel instead of using one large instance.

Which service stores log events for CloudTrail?

S3 - S3 is the service where CloudTrail stores its log events. CloudTrail delivers each batch of events to an S3 bucket as an S3 object.

For some services, AWS automatically replicates data across multiple AZs to provide fault tolerance in the event of a server failure or Availability Zone outage. Select TWO services that automatically replicate data across AZs.

S3 AND DynamoDB - For S3 Standard, S3 Standard-IA, and S3 Glacier storage classes, your objects are automatically stored across multiple devices spanning a minimum of three Availability Zones, each separated by miles across an AWS Region. This means your data is available when needed and protected against AZ failures, errors, and threats. All of your data in DynamoDB is stored on solid state disks (SSDs) and is automatically replicated across multiple Availability Zones within an AWS region, providing built-in high availability and data durability.

What is the AWS S3 storage class that has the lowest availability rating?

S3 One Zone-IA - S3 One Zone-IA has the lowest availability rating: 99.5%.

Which of the following is not an AWS reservation model?

S3 Reserved Capacity - There are no reservations in S3. You pay for what you use. While the cloud is well-suited for running variable workloads and rapid deployments, many cloud-based workloads display a more predictable pattern. For these stable applications, organizations can achieve significant cost savings by taking advantage of the available reservation models such as EC2 reserved instances, RDS reserved instances, ElastiCache Reserved Nodes, DynamoDB Reserved Capacity and Redshift Reserved Nodes.

Which of the following storage classes is most appropriate to be used for a popular e-commerce website with stable access patterns?

S3 Standard - S3 Standard offers high durability, availability, and performance object storage for frequently accessed data. Because it delivers low latency and high throughput, S3 Standard is appropriate for a wide variety of use cases, including cloud applications, dynamic websites, content distribution, mobile and gaming applications, and big data analytics.

What is the AWS service/feature that takes advantage of Amazon CloudFront's globally distributed edge locations to transfer files to S3 with higher upload speeds?

S3 Transfer Acceleration - Amazon S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket. Transfer Acceleration takes advantage of Amazon CloudFront's globally distributed edge locations. As the data arrives at an edge location, data is routed to Amazon S3 over an optimized network path.
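
A minimal boto3 sketch of enabling Transfer Acceleration on a bucket and then uploading through the accelerate endpoint; the bucket and file names are hypothetical.

    import boto3
    from botocore.config import Config

    s3 = boto3.client("s3")

    # Turn on Transfer Acceleration for the bucket (one-time configuration).
    s3.put_bucket_accelerate_configuration(
        Bucket="example-bucket",
        AccelerateConfiguration={"Status": "Enabled"},
    )

    # Upload through the CloudFront-backed accelerate endpoint.
    s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
    s3_accel.upload_file("video.mp4", "example-bucket", "uploads/video.mp4")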

In this scenario, we want to automate emails and SMS messages for events taking place in an AWS account. Which of the following services is the most appropriate?

SNS - SNS is used to automate emails and SMS messages triggered by events taking place in an AWS account.
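
For illustration, a small boto3 sketch that subscribes an email address and an SMS number to a topic and publishes a message; the topic name, email address, and phone number are hypothetical.

    import boto3

    sns = boto3.client("sns")

    # Create a topic and subscribe an email address and an SMS number to it.
    topic_arn = sns.create_topic(Name="account-events")["TopicArn"]
    sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="ops@example.com")
    sns.subscribe(TopicArn=topic_arn, Protocol="sms", Endpoint="+15555550123")

    # Publish a message; SNS fans it out to every subscriber on the topic.
    sns.publish(
        TopicArn=topic_arn,
        Subject="EC2 instance stopped",
        Message="Instance i-0123456789abcdef0 was stopped in us-east-1.",
    )

In practice, the publish call is usually triggered automatically, for example by a CloudWatch alarm, rather than run by hand.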

What AWS service is triggered to send a message by a CloudWatch Alarm?

SNS - CloudWatch Alarms trigger SNS to send a message.

What are some key advantages of AWS security? (Choose two)

Save money AND Helps organizations to meet their compliance requirements - The benefits of AWS Security include: 1- Keep Your Data Safe: The AWS infrastructure puts strong safeguards in place to help protect your privacy. All data is stored in highly secure AWS data centers. 2- Meet Compliance Requirements: AWS manages dozens of compliance programs in its infrastructure. This means that segments of your compliance have already been completed. 3- Save Money: Cut costs by using AWS data centers. Maintain the highest standard of security without having to manage your own facility. 4- Scale Quickly: Security scales with your AWS Cloud usage. No matter the size of your business, the AWS infrastructure is designed to keep your data safe.

A company is planning to use a number of Amazon EC2 instances for at least one year. Which payment model does AWS make available to reduce their overall EC2 costs?

Save when you reserve - For Customers that can commit to using EC2 over a 1 or 3-year term, it is better to purchase EC2 Reserved Instances. Reserved Instances provide a significant discount (up to 75%) compared to On-Demand instance pricing.

Which Amazon EC2 Reserved Instance type is ideal for an application that runs 3 hours a day, 5 days a week?

Scheduled RIs - Scheduled RIs are available to launch within the time windows you reserve. This option allows you to match your capacity reservation to a predictable recurring schedule that only requires a fraction of a day, a week, or a month.

Which of the following aspects of security are managed by AWS? (Choose two)

Securing global physical infrastructure AND Hardware patching - AWS is continuously innovating the design and systems of its data centers to protect them from man-made and natural risks. For example, at the first layer of security, AWS provides a number of security features depending on the location, such as security guards, fencing, security feeds, intrusion detection technology, and other security measures. According to the Shared Responsibility model, patching of the underlying hardware is AWS' responsibility. AWS is responsible for patching and fixing flaws within the infrastructure, but customers are responsible for patching their guest OS and applications.

Which of the following is the responsibility of AWS according to the Shared Security Model?

Securing regions and edge locations - According to the Shared Security Model, AWS' responsibility is the Security of the Cloud. AWS is responsible for protecting the infrastructure that runs all of the services offered in the AWS Cloud. This infrastructure is composed of the hardware, software, networking, and facilities that run AWS Cloud services.

Which of the following AWS security features is associated with an EC2 instance and functions to filter incoming traffic requests?

Security Groups - Security Groups act as a firewall for associated Amazon EC2 instances, controlling both inbound and outbound traffic at the instance level.
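
As a sketch of how an inbound rule is added to a Security Group with boto3; the group ID is a hypothetical placeholder.

    import boto3

    ec2 = boto3.client("ec2")

    # Allow inbound HTTPS (TCP 443) from anywhere to the instances in this group.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTPS from anywhere"}],
        }],
    )

Security Groups are stateful, so return traffic for an allowed inbound request is automatically allowed out.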

You have been tasked with auditing the security of your VPC. As part of this process, you need to start by analyzing what traffic is allowed to and from various EC2 instances. What two parts of the VPC do you need to check to accomplish this task?

Security Groups and NACLs - Security Groups and NACLs are the two parts of the VPC Security Layer. Security Groups are a firewall at the instance layer, and NACLs are a firewall at the subnet layer.

In this scenario, we want to control service usage across multiple AWS accounts using AWS Organizations. Which of the following would be used to accomplish this task?

Service Control Policies - Using AWS Organizations allows for the creation of SCPs (Service Control Policies) to manage service usage across multiple AWS accounts.
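
A boto3 sketch of creating an SCP and attaching it to an organizational unit; the policy name, statement, and OU ID are hypothetical.

    import json
    import boto3

    org = boto3.client("organizations")

    # An example SCP that blocks all EC2 actions in the accounts it is attached to.
    scp_document = {
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Deny", "Action": "ec2:*", "Resource": "*"}],
    }

    response = org.create_policy(
        Name="DenyAllEC2",
        Description="Example SCP that denies every EC2 action",
        Type="SERVICE_CONTROL_POLICY",
        Content=json.dumps(scp_document),
    )

    # Attach the SCP to an organizational unit so it applies to every account in it.
    org.attach_policy(
        PolicyId=response["Policy"]["PolicySummary"]["Id"],
        TargetId="ou-abcd-12345678",
    )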

AWS provides the ability to create backups of any block-level Amazon EC2 volume. What is the name of this backup?

Snapshot - The question asks about creating backups for any block-level Amazon EC2 volume. Amazon EC2 block-level volumes are either EBS volumes or instance store volumes. You can back up EBS volumes by creating EBS snapshots. Data in instance store volumes is not persistent and cannot serve as a backup. To back up data from instance store volumes, you should copy it to EBS as well.
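
A minimal boto3 sketch of creating an EBS snapshot; the volume ID is a hypothetical placeholder.

    import boto3

    ec2 = boto3.client("ec2")

    # Create a point-in-time snapshot of an EBS volume; AWS stores snapshots in S3.
    snapshot = ec2.create_snapshot(
        VolumeId="vol-0123456789abcdef0",
        Description="Nightly backup of the data volume",
    )
    print(snapshot["SnapshotId"], snapshot["State"])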

Which of the following affect Amazon EBS costs? (Choose two)

Snapshots AND Volume types - When you want to estimate the costs of Amazon EBS, you need to consider the following: 1- Volume types. 2- Input/output operations per second (IOPS). 3- Snapshots. 4- Data Transfer.

A customer is planning to move billions of images and videos to be stored on Amazon S3. The customer has approximately one Exabyte of data to move. Which of the following AWS Services is the best choice to transfer the data to AWS?

Snowmobile - AWS Snowmobile is an Exabyte-scale data transfer service used to move extremely large amounts of data to AWS. You can transfer up to 100 Petabytes (PB) per Snowmobile, a 45-foot long ruggedized shipping container, pulled by a semi-trailer truck. Snowmobile makes it easy to move massive volumes of data to the cloud, including video libraries, image repositories, or even a complete data center migration. At exabyte scale, transferring data with Snowmobile is more secure, fast and cost effective.

A company has developed a media transcoding application in AWS. The application is designed to recover quickly from hardware failures. Which one of the following types of instance would be the most cost-effective choice to use?

Spot Instances - The question stated that the application is designed to recover quickly from failures; therefore, it can handle any interruption that may occur with the instance. Hence, we can use Spot instances for this application. Spot instances provide a discount (up to 90%) off the On-Demand price. The Spot price is determined by long-term trends in supply and demand for EC2 spare capacity. If the Spot price exceeds the maximum price you specify for a given instance or if capacity is no longer available, your instance will automatically be interrupted. Spot Instances are the most cost-effective choice if you are flexible about when your applications run and if your applications can be interrupted. For example, Spot Instances are well-suited for data analysis, batch jobs, background processing, and optional tasks.

Which of the following are Amazon EC2 reserved instances types? (Select two)

Standard AND Convertible - There are three types of EC2 reserved instances (RIs) that you can choose from based on your application's needs: 1- Standard RIs: These provide the most significant discount (up to 75% off On-Demand) and are best suited for steady-state usage. 2- Convertible RIs: These provide a discount (up to 54% off On-Demand) and the capability to change the attributes of the RI as long as the exchange results in the creation of Reserved Instances of equal or greater value. Like Standard RIs, Convertible RIs are best suited for steady-state usage. 3- Scheduled RIs: These are available to launch within the time windows you reserve. This option allows you to match your capacity reservation to a predictable recurring schedule that only requires a fraction of a day, a week, or a month.

You are using several on-demand EC2 Instances to run your development environment. What is the best way to reduce your charges when these instances are not in use?

Stopping the instances - AWS doesn't charge usage or data transfer fees for a stopped instance. For a stopped instance, AWS will only charge you for the storage of any attached Amazon EBS volumes.

Which of the following will impact the price paid for an EC2 instance? (Choose two)

Storage capacity AND Instance type - EC2 instance pricing varies depending on many variables: - The buying option (On-demand, Reserved, Spot, Dedicated) - Selected AMI - Selected instance type - Region - Data Transfer in/out - Storage capacity.

How much data can you store in S3?

Storage capacity is virtually unlimited. - The total volume of data and number of objects you can store are unlimited.

Which of the following procedures can reduce latency to your end users? (Choose two)

Store media assets in S3 and use CloudFront to distribute these assets AND Store media assets in the region closest to your end users - Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds. CloudFront is the best solution to reduce latency if you have users from different places around the world. Storing media assets in a region closer to the end-users can help reduce latency for those users. This is because these assets will travel a shorter distance over the network.

Which of the following will create subsections of a VPC?

Subnet - A subnet is used to create subsections of a network like a VPC.

Which of the following is true about subnets?

Subnets cannot span AZs - Subnets separate a network into subsections. Subnets currently do not have the ability to span across Availability Zones; they can only exist in the AZ where they were created.

Which of the following accurately describes the function of the AWS Cost Explorer?

The AWS Cost Explorer is a free, easy to use tool that allows for viewing charts and usage history in order to manage AWS costs over time. - Indeed, the AWS Cost Explorer is an easy-to-use interface that lets you visualize, understand, and manage your AWS costs and usage over time.

In this scenario, we have an IAM User with an AWSDenyAll policy, but this user is also in an IAM Group with access to various AWS services. These services include S3, EC2, VPC, and IAM. Which of the following resources can this user access?

The IAM user cannot access any of the AWS services - In IAM policy evaluation, an explicit deny always overrides any allow. Because the AWSDenyAll policy attached to the user explicitly denies all actions, the permissions granted through the IAM Group have no effect.
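
To illustrate why the deny wins, here is a sketch of what a deny-all customer managed policy could look like and how it would be attached to the user with boto3; the policy and user names are hypothetical.

    import json
    import boto3

    iam = boto3.client("iam")

    # An explicit Deny on every action and resource; it overrides any Allow from group policies.
    deny_all = {
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Deny", "Action": "*", "Resource": "*"}],
    }

    policy = iam.create_policy(
        PolicyName="ExampleDenyAll",
        PolicyDocument=json.dumps(deny_all),
    )
    iam.attach_user_policy(UserName="example-user", PolicyArn=policy["Policy"]["Arn"])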

Which of the following describes the function of the TCO Calculator?

The TCO Calculator is a free tool that allows for an estimation of the cost savings to be had by migrating to the AWS Cloud from an on-premises datacenter. - Indeed, the TCO Calculator is a free tool that allows for an estimation of the cost savings to be had by migrating to the AWS Cloud from an on-premises datacenter.

What is scalability?

The ability to easily scale up in size, capacity, and/or scope when required. - Scalability is the ability to easily grow in size, capacity, and/or scope when required. In the cloud, this is typically done to meet demand.

Which statement best describes the operational excellence pillar of the AWS Well-Architected Framework?

The ability to monitor and improve system processes and procedures - The Operational Excellence pillar covers the ability to run and monitor systems to deliver business value and to continually improve the supporting processes and procedures. The 5 Pillars of the AWS Well-Architected Framework are: 1- Operational Excellence 2- Security 3- Reliability 4- Performance Efficiency 5- Cost Optimization

What purchasing option does AWS make available so you pay lower prices for compute resources?

The ability to pay upfront to get lower hourly costs - With Reserved Instances, you can save up to 75% over equivalent on-demand capacity. When you buy Reserved Instances, the larger the upfront payment, the greater the discount.

Which of the following are factors to consider for Amazon EBS pricing? (Choose two)

The amount of data transferred out of your application AND The amount of GB you provision per month - Amazon EBS pricing has three factors: 1- Volumes: Volume storage for all EBS volume types is charged by the amount of GB you provision per month, until you release the storage. 2- Snapshots: Snapshot storage is based on the amount of space your data consumes in Amazon S3. Because Amazon EBS does not save empty blocks, it is likely that the snapshot size will be considerably less than your volume size. Copying EBS snapshots is charged based on the volume of data transferred across regions. For the first snapshot of a volume, Amazon EBS saves a full copy of your data to Amazon S3. For each incremental snapshot, only the changed part of your Amazon EBS volume is saved. After the snapshot is copied, standard EBS snapshot charges apply for storage in the destination region. 3- Data transfer: Consider the amount of data transferred out of your application. Inbound data transfer is free, and outbound data transfer charges are tiered.

Which of these statements is true of subnets?

The default VPC already has a subnet created in each AZ. Subnets cannot span AZs. We can add one or more subnets in each AZ.

Which of the following is a benefit of the "Loose Coupling" approach?

The development team can modify the underlying implementation without affecting other components of the application - As application complexity increases, a desirable attribute of an IT system is that it can be broken into smaller, loosely coupled components. This means that IT systems should be designed in a way that reduces interdependencies—a change or a failure in one component should not cascade to other components.

Which of the following are factors in determining the appropriate database technology to use for a specific workload? (Choose two)

The number of reads and writes per second AND The nature of the queries - The following questions can help you make decisions on which solutions to include in your architecture: - Is this a read-heavy, write-heavy, or balanced workload? How many reads and writes per second are you going to need? How will those values change if the number of users increases? - How much data will you need to store and for how long? How quickly do you foresee this will grow? Is there an upper limit in the foreseeable future? What is the size of each object (average, min, max)? How are these objects going to be accessed? - What are the requirements in terms of durability of data? Is this data store going to be your "source of truth"? - What are your latency requirements? How many concurrent users do you need to support? - What is your data model and how are you going to query the data? Are your queries relational in nature (e.g., JOINs between multiple tables)? Could you denormalize your schema to create flatter data structures that are easier to scale? - What kind of functionality do you require? Do you need strong integrity controls or are you looking for more flexibility (e.g., schema-less data stores)? Do you require sophisticated reporting or search capabilities? Are your developers more familiar with relational databases than NoSQL?

Which of the following will affect how much you are charged for storing objects in S3? (Choose two)

The storage class used for the objects stored AND The total size in gigabytes of all objects stored - S3 pricing is based on four factors: 1- The storage class you have chosen. 2- The total amount of data (in GB) you've stored. 3- Data Transfer Out. 4- Number of Requests.

You have just hired a skilled sys-admin to join your team. As usual, you have created a new IAM user for him to interact with AWS services. On his first day, you ask him to create snapshots of all existing Amazon EBS volumes and save them in a new Amazon S3 bucket. However, the new member reports back that he is unable to create either EBS snapshots or S3 buckets. What might prevent him from doing this simple task?

There is a non-explicit deny to all new users - When a new IAM user is created, that user has NO access to any AWS service. This is called a non-explicit (implicit) deny. For that user, access must be explicitly allowed via IAM permissions.

What are the benefits of implementing a tagging strategy for AWS resources? (Choose two)

Track AWS spending across multiple resources AND Quickly identify resources that belong to a specific project - Amazon Web Services (AWS) allows customers to assign metadata to their AWS resources in the form of tags. Each tag is a simple label consisting of a customer-defined key and an optional value that can make it easier to manage, search for, and filter resources. Although there are no inherent types of tags, they enable customers to categorize resources by purpose, owner, environment, or other criteria. An effective tagging strategy will give you improved visibility and monitoring, help you create accurate chargeback/showback models, and get more granular and precise insights into usage and spend by applications and teams.
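
A short boto3 sketch of tagging an instance and later filtering resources by that tag; the instance ID and tag values are hypothetical.

    import boto3

    ec2 = boto3.client("ec2")

    # Tag an instance with the project and cost center it belongs to.
    ec2.create_tags(
        Resources=["i-0123456789abcdef0"],
        Tags=[
            {"Key": "Project", "Value": "checkout-service"},
            {"Key": "CostCenter", "Value": "1234"},
        ],
    )

    # Later: find every instance that belongs to the project.
    reservations = ec2.describe_instances(
        Filters=[{"Name": "tag:Project", "Values": ["checkout-service"]}]
    )["Reservations"]

Once the same keys are activated as cost allocation tags, spending can be broken down along the same dimensions in the billing reports.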

Which of the following affects Amazon CloudFront costs? (Choose two)

Traffic Distribution AND Number of Requests - When you want to estimate the costs of Amazon CloudFront consider the following: ** Data Transfer Out. ** Traffic distribution. ** Number of requests.

Which of the following is a type of MFA device that customers can use to protect their AWS resources?

U2F Security Key - AWS multi-factor authentication (AWS MFA) provides an extra level of security that customers can apply to their AWS environment. With MFA enabled, when a user signs in to an AWS website, they will be prompted for their user name and password (the first factor—what they know), as well as for an authentication code from their AWS MFA device (the second factor—what they have). Taken together, these multiple factors provide increased security for the AWS account resources. AWS supports several MFA device options including Virtual MFA devices, Universal 2nd Factor (U2F) security key, and Hardware MFA devices.

A company is running a large web application that needs to be available all the time. They want to ensure that all servers are working perfectly. One of the aspects to consider monitoring is CPU usage. The application tends to slow down when CPU usage is greater than 60%. How can they track down when CPU usage goes above 60% for any of the EC2 Instances?

Use CloudWatch Alarms - Amazon CloudWatch monitors your Amazon Web Services (AWS) resources and the applications you run on AWS in real time. You can use CloudWatch to collect and track metrics, which are variables you can measure for your resources and applications. CloudWatch alarms send notifications or automatically make changes to the resources you are monitoring based on rules that you define. For example, you can monitor the CPU usage and disk reads and writes of your Amazon EC2 instances and then use this data to determine whether you should launch additional instances to handle increased load. You can also use this data to stop under-used instances to save money. In addition to monitoring the built-in metrics that come with AWS, you can monitor your own custom metrics. With CloudWatch, you gain system-wide visibility into resource utilization, application performance, and operational health.
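
A boto3 sketch of such an alarm, using the scenario's 60% threshold; the instance ID and SNS topic ARN are hypothetical.

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # Alarm when average CPU stays above 60% for two consecutive 5-minute periods.
    cloudwatch.put_metric_alarm(
        AlarmName="high-cpu-example-instance",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=2,
        Threshold=60.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:111122223333:ops-alerts"],
    )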

A company wants to reduce their overall AWS costs but they don't know where the high costs come from. What should they do? (Choose two)

Use CloudWatch to create billing alerts that notify them when their usage of their services exceeds thresholds that they define AND Activate cost allocation tags to categorize and track their costs - A tag is a label that you or AWS assigns to an AWS resource. Each tag consists of a key and a value. A key can have more than one value. You can use tags to organize your resources, and cost allocation tags to track your AWS costs on a detailed level. After you activate cost allocation tags, AWS uses the cost allocation tags to organize your resource costs on your cost allocation report, to make it easier for you to categorize and track your AWS costs. Enabling billing alerts using CloudWatch will make it easier to track and manage your spending. The alarm triggers when your account billing exceeds the threshold you specify. Billing alerts can help prevent unexpected spend increases which may be due to unauthorized AWS account or Unknown EC2 instance usage, resources which have been provisioned in your account but are no longer in use or due to higher traffic load that can increase the utilization of all of your resources.

For compliance and regulatory purposes, a government agency requires that their applications must run on hardware that's dedicated to them only. How can you meet this requirement?

Use EC2 Dedicated Hosts - When you launch instances on a Dedicated Host, the instances run on a physical server that is dedicated for your use. While Dedicated instances also run on dedicated hardware, Dedicated Hosts provide further visibility and control by allowing you to place your instances on a specific, physical server. This enables you to deploy instances using configurations that help address corporate compliance and regulatory requirements. Note: Amazon EC2 purchasing options include: On-Demand, Savings Plans, Reserved Instances, Spot Instances, Dedicated Hosts and Dedicated instances. Dedicated Instances also provides Hardware isolation. Dedicated Instances are Amazon EC2 instances that run in a virtual private cloud (VPC) on hardware that's dedicated to a single customer. Your Dedicated instances are physically isolated at the host hardware level from instances that belong to other AWS accounts. However, Dedicated Instances may share hardware with other instances from the same AWS account that are not Dedicated Instances.

The AWS Cloud elasticity enables you to save more costs compared to traditional hosting providers. How can you apply this concept in your own work environment? (Choose two)

Use Serverless Computing whenever possible AND Set up Amazon EC2 Auto Scaling - Another way you can save money with AWS is by taking advantage of the platform's elasticity. Elasticity means the ability to scale up or down when needed. This concept is most closely associated with the AWS auto scaling which monitors your applications and automatically adjusts capacity (up or down) to maintain steady, predictable performance at the lowest possible cost. Serverless Computing provides the highest level of elasticity. Serverless enables you to build modern applications with increased agility and lower total cost of ownership. Serverless allows you to run applications and services without thinking about servers. It eliminates infrastructure management tasks such as server or cluster provisioning, patching, operating system maintenance, and capacity provisioning. With serverless computing, everything required to run and scale your application with high availability is handled for you.

What are the benefits of using on-demand EC2 instances? (Choose two)

You can increase or decrease your compute capacity depending on the demands of your application AND They remove the need to buy "safety net" capacity to handle periodic traffic spikes - With On-Demand instances, you pay for compute capacity by the hour with no long-term commitments. You can increase or decrease your compute capacity depending on the demands of your application and only pay the specified hourly rate for the instances you use. The use of On-Demand instances frees you from the costs and complexities of planning, purchasing, and maintaining hardware and transforms what are commonly large fixed costs into much smaller variable costs. On-Demand instances also remove the need to buy "safety net" capacity to handle periodic traffic spikes.

Which of the following approaches will help you eliminate human error and automate the process of creating and updating your AWS environment?

Use code to provision and operate your AWS infrastructure - In the cloud, you can apply the same engineering discipline that you use for application code to your entire environment. You can define your entire workload (applications, infrastructure) as code and update it with code. You can implement your operations procedures as code and automate their execution by triggering them in response to events. By performing operations as code, you limit human error and enable consistent responses to events. You can define your infrastructure as code using approaches such as AWS CloudFormation templates. The use of templates allows you to build and rebuild your infrastructure, without having to perform manual actions or write custom scripts. Codifying your infrastructure in a template allows you to treat your infrastructure as just code. You can author it with any code editor, check it into a version control system, and review the files with team members before deploying into production. This gives developers an easy way to build and update their entire AWS environment in a timely fashion.

Which of the following procedures will help reduce your Amazon S3 costs?

Use the right combination of storage classes based on different use cases. - Amazon S3 offers a range of storage classes designed for different use cases. These include S3 Standard for general-purpose storage of frequently accessed data; S3 Intelligent-Tiering for data with unknown or changing access patterns; S3 Standard-Infrequent Access (S3 Standard-IA) and S3 One Zone-Infrequent Access (S3 One Zone-IA) for long-lived, but less frequently accessed data; and Amazon S3 Glacier (S3 Glacier) and Amazon S3 Glacier Deep Archive (S3 Glacier Deep Archive) for long-term archive and digital preservation.

Why do many startup companies prefer AWS over traditional on-premises solutions? (Choose TWO)

Using AWS allows companies to replace large capital expenditure with low variable costs AND Using AWS, they can reduce time-to-market by focusing on business activities rather than on building and managing data centers - Instead of building and managing data centers, AWS provides startups, enterprises, and government agencies all the services they need to quickly build their business and grow faster. AWS has significantly more services, and more features within those services, than any other cloud provider - from infrastructure technologies like compute, storage, and databases - to emerging technologies, such as machine learning and artificial intelligence, data lakes and analytics, and Internet of Things. This makes it faster, easier, and more cost-effective to build nearly anything they can imagine. Capital expenditures (CapEx) are a company's major, long-term expenses. Examples of CapEx include physical assets such as buildings, equipment, and machinery. Instead of having to invest heavily in these capital expenditures (e.g., physical data centers and servers) before it is known they will be used, companies can pay only when consuming AWS resources, and pay only for how much they consume. In brief, AWS replaces their investments in large capital expenditures (CapEx) with low variable "pay-as-you-go" costs.

What is the easiest way to launch and manage a virtual private server in AWS?

Using Amazon Lightsail - Amazon Lightsail is designed to be the easiest way to launch and manage a virtual private server with AWS. Lightsail plans include everything you need to jumpstart your project - a virtual machine, SSD-based storage, data transfer, DNS management, and a static IP address - for a low, predictable price.

How does AWS notify customers about security and privacy events pertaining to AWS services?

Using Security Bulletins - AWS publishes security bulletins about the latest security and privacy events with AWS services on the Security Bulletins page.

Which of the following strategies help analyze costs in AWS?

Using tags to group resources - Tags are key-value pairs that allow you to organize your AWS resources into groups. You can use tags to: 1- Visualize information about tagged resources in one place, in conjunction with Resource Groups. 2- View billing information using Cost Explorer and the AWS Cost and Usage report. 3- Send notifications about spending limits using AWS Budgets. It is recommended to use logical groupings of your resources that make sense for your infrastructure or business. You could organize your resources by: Project, Cost center, Development environment, Application or Department. For example, if you tag resources with an application name, you can track the total cost of a single application that runs on those resources.

You have migrated your application to AWS recently. How can you view all the information you need about the AWS costs applied to your account?

Using the AWS Cost & Usage Report - The AWS Cost & Usage Report is your one-stop shop for accessing the most detailed information available about your AWS costs and usage. The AWS Cost & Usage Report lists AWS usage for each service category used by an account and its IAM users in hourly or daily line items, as well as any tags that you have activated for cost allocation purposes.

Which of the following can be used to protect data at rest on Amazon S3? (Choose two)

Versioning AND Permissions - Amazon S3 provides a number of security features for the protection of data at rest, which you can use or not depending on your threat profile: 1- Permissions: Use bucket-level or object-level permissions alongside IAM policies to protect resources from unauthorized access and to prevent information disclosure, data integrity compromise or deletion. 2- Versioning: Amazon S3 supports object versions. Versioning is disabled by default. Enable versioning to store a new version for every modified or deleted object from which you can restore compromised objects if necessary. 3- Replication: Amazon S3 replicates each object across all Availability Zones within the respective region. Replication can provide data and service availability in the case of system failure, but provides no protection against accidental deletion or data integrity compromise - it replicates changes across all Availability Zones where it stores copies. 4- Backup: You can use application-level technologies to manually back up data stored in Amazon S3 to other AWS regions or to on-premises backup systems. 5- Encryption - server side: Amazon S3 supports server-side encryption of user data. Server-side encryption is transparent to the end user. AWS generates a unique encryption key for each object, and then encrypts the object using AES-256. 6- Encryption - client side: With client-side encryption you create and manage your own encryption keys. Keys you create are not exported to AWS in clear text. Your applications encrypt data before submitting it to Amazon S3, and decrypt data after receiving it from Amazon S3. Data is stored in an encrypted form, with keys and algorithms only known to you.
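
A boto3 sketch of turning on two of these protections, versioning and default server-side encryption, for a hypothetical bucket.

    import boto3

    s3 = boto3.client("s3")

    # Keep a version of every modified or deleted object so it can be restored later.
    s3.put_bucket_versioning(
        Bucket="example-bucket",
        VersioningConfiguration={"Status": "Enabled"},
    )

    # Encrypt new objects at rest by default with S3-managed keys (SSE-S3, AES-256).
    s3.put_bucket_encryption(
        Bucket="example-bucket",
        ServerSideEncryptionConfiguration={
            "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
        },
    )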

What does VPC stand for?

Virtual Private Cloud

You are working on two projects that require completely different network configurations. Which AWS service will allow you to isolate resources and network configurations?

Virtual Private Cloud - Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including selection of the IP address range, creation of subnets, and configuration of route tables and network gateways.

What is the maximum amount of data that can be stored in S3 in a single AWS account?

Virtually unlimited storage - The total volume of data and number of objects you can store are unlimited. Individual Amazon S3 objects can range in size from a minimum of 0 bytes to a maximum of 5 terabytes.

Which of the following are examples of endpoints for an SNS topic? Please select the most appropriate answer.

Webserver, email address, Amazon SQS queue, SMS, and AWS Lambda function - A webserver, email address, Amazon SQS queue, SMS, and AWS Lambda function can all be endpoints to an SNS topic. This is the most correct answer out of the choices.

Which of these types of operating systems will we see in AWS?

Windows/Linux - Windows and Linux are both operating systems that we will see when using AWS.

Which statement best describes the AWS Pay-As-You-Go pricing model?

With AWS, you replace large capital expenses with low variable payments - AWS does not require minimum spend commitments or long-term contracts. You replace large fixed upfront expenses with low variable payments that only apply based on what you use. For example, when using On-Demand instances, you pay only for the hours/seconds they are running and nothing more.

Why are Serverless Architectures more economical than Server-based Architectures?

With the Server-based Architectures, servers continue to run all the time but with the serverless architectures the code runs only when needed - Serverless architectures can reduce costs because you don't have to manage or pay for underutilized servers, or provision redundant infrastructure to implement high availability. For example, you can upload your code to the AWS Lambda compute service, and the service can run the code on your behalf using AWS infrastructure. With AWS Lambda, you are charged for every 100ms your code executes and the number of times your code is triggered.

Which statement is true in relation to AWS pricing? (Choose two)

You only pay for the individual services that you need with no long term contracts AND With AWS, you don't have to pay any money upfront - AWS provides three pricing models: 1- Pay-as-you-go 2- Save when you reserve 3- Pay less by using more. With the AWS pay-as-you-go model, you only pay for what you consume; you don't have to pay any money upfront, and there are no long-term contracts.

Which statement is true in relation to the security of Amazon EC2?

You should regularly patch the operating system and applications on your EC2 instances - Amazon EC2 is not a managed service, you should regularly patch, update, and secure the operating system and applications on your instance.

By default, Security Groups ________________

allow all outbound traffic and deny all inbound traffic - By default, when Security Groups are created they do not have any default inbound rules, therefore denying all inbound traffic until inbound rules are added. They also include a default outbound rule that allows all outbound traffic. This rule can be removed and outbound rules can be added to allow specific outbound traffic.

NACLs _____________ traffic on the________________

allow/deny, subnet level - NACLs allow or deny traffic at the subnet level.

Community AMIs are _________________

free to use - Community AMIs are free to use.

What are the benefits of using an AWS-managed service? (Choose two)

Allows customers to deliver new solutions faster AND Lowers operational complexity - AWS managed services lower operational complexity by automating time-consuming administration tasks such as hardware provisioning, software setup, patching, and backups. This frees you to focus on your applications so you can give them the fast performance, security, and compatibility they need. Because these services are instantly available to developers, they reduce dependency on in-house specialized skills and allow organizations to deliver new solutions faster.

A company has created a solution that helps AWS customers improve their architectures on AWS. Which AWS program may support this company?

APN Consulting Partners - APN Consulting Partners are professional services firms that help customers design, architect, build, migrate, and manage their workloads and applications on AWS. Consulting Partners include System Integrators, Strategic Consultancies, Agencies, Managed Service Providers, and Value-Added Resellers. AWS supports the APN Consulting Partners by providing a wide range of resources and training to support their customers.

You have noticed that several critical Amazon EC2 instances have been terminated. Which of the following AWS services would help you determine who took this action?

AWS CloudTrail - AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting.

Which of the following helps a customer delve deep into the Amazon EC2 billing activity for the past month?

AWS Cost & Usage Reports - The AWS Cost & Usage Report is your one-stop shop for accessing the most detailed information available about your AWS costs and usage. The AWS Cost & Usage Report lists AWS usage for each service category used by an account and its IAM users in hourly or daily line items, as well as any tags that you have activated for cost allocation purposes.

Hundreds of thousands of DDoS attacks are recorded every month worldwide. What does AWS provide to protect from these attacks? (Choose two)

AWS Shield AND AWS WAF - AWS provides flexible infrastructure and services that help customers implement strong DDoS mitigations and create highly available application architectures that follow AWS Best Practices for DDoS Resiliency. These include services such as Amazon Route 53, Amazon CloudFront, Elastic Load Balancing, and AWS WAF to control and absorb traffic, and deflect unwanted requests. These services integrate with AWS Shield, a managed DDoS protection service that provides always-on detection and automatic inline mitigations to safeguard web applications running on AWS.

Which of the following AWS offerings is a MySQL-compatible relational database that can scale capacity automatically based on demand?

Amazon Aurora - Amazon Aurora is a MySQL and PostgreSQL compatible relational database built for the cloud, that combines the performance and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. Aurora is up to five times faster than standard MySQL databases and three times faster than standard PostgreSQL databases. It provides the security, availability, and reliability of commercial-grade databases at 1/10th the cost. Aurora is fully managed by Amazon Relational Database Service (RDS), which automates time-consuming administration tasks like hardware provisioning, database setup, patching, and backups. Amazon Aurora features "Amazon Aurora Serverless" which is an on-demand, auto-scaling configuration for Amazon Aurora (MySQL-compatible and PostgreSQL-compatible editions), where the database will automatically start up, shut down, and scale capacity up or down based on your application's needs.

What is the AWS service that provides you the highest level of control over the underlying virtual infrastructure?

Amazon EC2 - Amazon EC2 provides you the highest level of control over your virtual instances, including root access and the ability to interact with them as you would any machine.

Which of the following services allows you to run containerized applications on a cluster of EC2 instances?

Amazon ECS - Amazon Elastic Container Service (Amazon ECS) is a highly scalable, high-performance container orchestration service that supports Docker containers and allows you to easily run and scale containerized applications on AWS. Amazon ECS eliminates the need for you to install and operate your own container orchestration software, manage and scale a cluster of virtual machines, or schedule containers on those virtual machines.

There is a need to import a large amount of structured data into a database service. What is the AWS database service that best achieves this?

Amazon RDS - Since the data is structured, then it is best to use a relational database service such as Amazon RDS.

Which of the below is a best-practice when designing solutions on AWS?

Automate wherever possible to make architectural experimentation easier - The Well-Architected Framework identifies a set of general design principles to facilitate good design in the cloud: 1- Stop guessing your capacity needs 2- Test systems at production scale 3- Automate to make architectural experimentation easier 4- Allow for evolutionary architectures 5- Drive architectures using data 6- Improve through game days

Which of the following EC2 instance purchasing options supports the Bring Your Own License (BYOL) model for almost every BYOL scenario?

Dedicated Hosts - You have a variety of options for using new and existing Microsoft software licenses on the AWS Cloud. By purchasing Amazon Elastic Compute Cloud (Amazon EC2) or Amazon Relational Database Service (Amazon RDS) license-included instances, you get new, fully compliant Windows Server and SQL Server licenses from AWS. The BYOL model enables AWS customers to use their existing server-bound software licenses, including Windows Server, SQL Server, and SUSE Linux Enterprise Server. Your existing licenses may be used on AWS with Amazon EC2 Dedicated Hosts, Amazon EC2 Dedicated Instances or EC2 instances with default tenancy using Microsoft License Mobility through Software Assurance. Dedicated Hosts provide additional control over your instances and visibility into Host level resources and tooling that allows you to manage software that consumes licenses on a per-core or per-socket basis, such as Windows Server and SQL Server. This is why most BYOL scenarios are supported through the use of Dedicated Hosts, while only certain scenarios are supported by Dedicated Instances.

Which of the following is one of the benefits of AWS security?

Scales quickly with your AWS usage - Security scales with your AWS Cloud usage. No matter the size of your business, the AWS infrastructure is designed to keep your data safe.

A startup company is operating on limited funds and is extremely concerned about cost overruns. Which of the below options can be used to notify the company when their monthly AWS bill exceeds $2000?

Setup a CloudWatch billing alarm that triggers an SNS notification to their email address - In CloudWatch, you can set up a billing alarm that triggers if your costs exceed a threshold that you set. This CloudWatch alarm can also be configured to trigger an SNS notification to your email address.
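
A boto3 sketch of the billing alarm and SNS notification described above; it assumes billing alerts have already been enabled in the account's billing preferences, and the topic name and email address are hypothetical. Billing metrics are only published in the us-east-1 region.

    import boto3

    # Billing metrics (AWS/Billing) are published in us-east-1 only.
    sns = boto3.client("sns", region_name="us-east-1")
    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    topic_arn = sns.create_topic(Name="billing-alerts")["TopicArn"]
    sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="finance@example.com")

    # Alarm when the estimated month-to-date bill exceeds $2000.
    cloudwatch.put_metric_alarm(
        AlarmName="monthly-bill-over-2000-usd",
        Namespace="AWS/Billing",
        MetricName="EstimatedCharges",
        Dimensions=[{"Name": "Currency", "Value": "USD"}],
        Statistic="Maximum",
        Period=21600,
        EvaluationPeriods=1,
        Threshold=2000.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[topic_arn],
    )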

As part of the Enterprise support plan, who is the primary point of contact for ongoing support needs?

TAM - For Enterprise-level customers, a TAM (Technical Account Manager) provides technical expertise for the full range of AWS services and obtains a detailed understanding of your use case and technology architecture. TAMs work with AWS Solution Architects to help you launch new projects and give best practices recommendations throughout the implementation life cycle. Your TAM is the primary point of contact for ongoing support needs, and you have a direct telephone line to your TAM.

What does Amazon Elastic Beanstalk provide?

A PaaS solution to automate application deployment - AWS Elastic Beanstalk is an application container on top of Amazon Web Services. Elastic Beanstalk makes it easy for developers to quickly deploy and manage applications in the AWS Cloud. Developers simply upload their application code, and Elastic Beanstalk automatically handles the deployment details of capacity provisioning, load balancing, auto-scaling, and application health monitoring.

What are the default security credentials that are required to access the AWS management console for an IAM user account?

A user name and password - The AWS Management Console allows you to access and manage Amazon Web Services through a simple and intuitive web-based user interface. You can only access the AWS management console if you have a valid user name and password.

A company has deployed a new web application on multiple Amazon EC2 instances. Which of the following should they use to ensure that the incoming HTTP traffic is distributed evenly across the instances?

AWS Application Load Balancer - Elastic Load Balancing automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, and Lambda functions. Elastic Load Balancing offers three types of load balancers: 1- Application Load Balancer. 2- Network Load Balancer. 3- Classic Load Balancer. Application Load Balancer is best suited for load balancing of HTTP and HTTPS traffic. In our case, the application receives HTTP traffic. Hence, the Application Load Balancer is the correct answer here.

Which of the following services allows customers to manage their agreements with AWS?

AWS Artifact - AWS Artifact is a self-service audit artifact retrieval portal that provides customers with on-demand access to AWS' compliance documentation and AWS agreements. You can use AWS Artifact Agreements to review, accept, and track the status of AWS agreements such as the Business Associate Addendum (BAA). Additional information: You can also use AWS Artifact Reports to download AWS security and compliance documents, such as AWS ISO certifications, Payment Card Industry (PCI), and System and Organization Control (SOC) reports.

A company complains that they are wasting a lot of money on underutilized compute resources in AWS. Which AWS feature should they use to ensure that their applications are automatically adding/removing compute capacity to closely match the required demand?

AWS Auto Scaling - Auto Scaling is the feature that automates the process of adding/removing server capacity based on demand. Auto Scaling allows you to reduce your costs by automatically turning off resources that aren't in use. At the same time, Auto Scaling ensures that your application runs effectively by provisioning more server capacity when required.

What is the AWS service that enables AWS architects to manage infrastructure as code?

AWS CloudFormation - AWS CloudFormation allows you to use programming languages or a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all regions and accounts. You create a template that describes all the AWS resources that you want (like Amazon EC2 instances or Amazon RDS DB instances), and AWS CloudFormation takes care of provisioning and configuring those resources for you. You don't need to individually create and configure AWS resources and figure out what's dependent on what; AWS CloudFormation handles all that for you.
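
A minimal boto3 sketch of infrastructure as code with CloudFormation: the template below declares a single S3 bucket, and the stack is created from it. The stack, template, and bucket names are hypothetical.

    import json
    import boto3

    # A tiny CloudFormation template declaring one S3 bucket.
    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "AppBucket": {
                "Type": "AWS::S3::Bucket",
                "Properties": {"BucketName": "example-app-bucket-12345"},
            }
        },
    }

    cfn = boto3.client("cloudformation")
    cfn.create_stack(StackName="example-stack", TemplateBody=json.dumps(template))

    # Wait until CloudFormation has finished provisioning every resource in the stack.
    cfn.get_waiter("stack_create_complete").wait(StackName="example-stack")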

A company has decided to migrate its Oracle database to AWS. Which AWS service can help achieve this without negatively impacting the functionality of the source database?

AWS Database Migration Service - AWS Database Migration Service (DMS) helps you migrate databases to AWS easily and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. The AWS Database Migration Service can migrate your data to and from most widely used commercial and open-source databases. The service supports homogeneous migrations such as Oracle to Oracle, as well as heterogeneous migrations between different database platforms, such as Oracle to Amazon Aurora or Microsoft SQL Server to MySQL. It also allows you to stream data to Amazon Redshift from any of the supported sources including Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle, SAP ASE, and SQL Server, enabling consolidation and easy analysis of data in the petabyte-scale data warehouse. AWS Database Migration Service can also be used for continuous data replication with high availability.

What does Amazon CloudFront use to distribute content to global users with low latency?

AWS Edge Locations - To deliver content to global end users with lower latency, Amazon CloudFront uses a global network of Edge Locations and Regional Edge Caches in multiple cities around the world. Amazon CloudFront uses this network to cache copies of your content close to your end-users. Amazon CloudFront ensures that end-user requests are served by the closest edge location. As a result, end-user requests travel a short distance, improving performance for your end-users, while reducing the load on the origin servers.

What is the AWS serverless service that allows you to run your applications without any administrative burden?

AWS Lambda - AWS Lambda is an AWS-managed compute service. It lets you run code without provisioning or managing servers. With Lambda, you can run code for virtually any type of application or backend service - all with zero administration. Just upload your code, and Lambda takes care of everything required to run and scale your code with high availability. You pay only for the compute time you consume - there is no charge when your code is not running.
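
As a sketch of the "just upload your code" model, here is a minimal Python Lambda handler; the event shape shown is hypothetical and depends on what triggers the function.

    import json

    def lambda_handler(event, context):
        # 'event' carries the trigger payload (e.g., an API Gateway request or S3 notification).
        name = event.get("name", "world")
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"Hello, {name}!"}),
        }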

What is the AWS feature that provides an additional level of security above the default authentication mechanism of usernames and passwords?

AWS MFA - AWS Multi-Factor Authentication (MFA) is a simple best practice that adds an extra layer of protection on top of using just your user name and password to authenticate.

AWS allows users to manage their resources using a web based user interface. What is the name of this interface?

AWS Management Console - The AWS Management Console allows you to access and manage Amazon Web Services through a simple and intuitive web-based user interface. You can also use the AWS Console mobile app to quickly view resources on the go.

What does AWS provide to deploy popular technologies - such as IBM MQ - on AWS with the least amount of effort and time?

AWS Quick Start reference deployments - AWS Quick Start Reference Deployments outline the architectures for popular enterprise solutions on AWS and provide AWS CloudFormation templates to automate their deployment. Each Quick Start launches, configures, and runs the AWS compute, network, storage, and other services required to deploy a specific workload on AWS, using AWS best practices for security and availability. Quick Starts are built by AWS solutions architects and partners to help you deploy popular technologies on AWS, based on AWS best practices. These accelerators reduce hundreds of manual installation and configuration procedures into just a few steps, so you can build your production environment quickly and start using it immediately.

A company has an Enterprise Support subscription. They want quick and efficient guidance with their billing and account inquiries. Which of the following should the company use?

AWS Support Concierge - Included as part of the Enterprise Support plan, the Support Concierge Team is made up of AWS billing and account experts that specialize in working with enterprise accounts. The Concierge team will quickly and efficiently assist you with your billing and account inquiries, and work with you to help implement billing and account best practices so that you can focus on running your business. The Support Concierge service includes: ** 24x7 access to AWS billing and account inquiries. ** Guidance and best practices for billing allocation, reporting, consolidation of accounts, and root-level account security. ** Access to Enterprise account specialists for payment inquiries, training on specific cost reporting, assistance with service limits, and facilitating bulk purchases.

Your company is developing a critical web application in AWS and the security of the application is one of the top priorities. Which of the following AWS services will provide infrastructure security optimization recommendations?

AWS Trusted Advisor - AWS Trusted Advisor is an online tool that provides you real time guidance to help you provision your resources following AWS best practices. AWS Trusted Advisor offers a rich set of best practice checks and recommendations across five categories: cost optimization; security; fault tolerance; performance; and service limits. AWS Trusted Advisor improves the security of your application by closing gaps, enabling various AWS security features, and examining your permissions.

Which of the following services can help protect your web applications from SQL injection and other vulnerabilities in your application code?

AWS WAF - AWS WAF (Web Application Firewall) helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. You can use AWS WAF to create custom rules that block common attack patterns, such as SQL injection or cross-site scripting, and rules that are designed for your specific application.

There are performance issues with your under-development application, which of the following AWS services would help you analyze these issues?

AWS X-Ray - AWS X-Ray helps developers analyze and debug distributed applications in production or under development, such as those built using microservice architecture. With X-Ray, you can understand how your application and its underlying services are performing so you can identify and troubleshoot the root cause of performance issues and errors. X-Ray provides an end-to-end view of requests as they travel through your application, and shows a map of your application's underlying components. You can use X-Ray to analyze both applications in development and in production, from simple three-tier applications to complex microservices applications consisting of thousands of services.

Which of the following must an IAM user provide to interact with AWS services using the AWS Command Line Interface (AWS CLI)?

Access keys - Access keys consist of an access key ID and secret access key, which are used to sign programmatic requests to AWS using the CLI or the SDK.
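
As a rough illustration, the sketch below shows how an access key ID and secret access key (placeholder values, not real credentials) are used to sign programmatic requests; the AWS CLI uses the same key pair, typically stored via the aws configure command, and the Python SDK (boto3) signs requests with it in the same way.

    # Minimal sketch using hypothetical placeholder keys; never hardcode real keys.
    import boto3

    session = boto3.Session(
        aws_access_key_id="AKIAEXAMPLEKEYID",        # placeholder access key ID
        aws_secret_access_key="example/secret/key",  # placeholder secret access key
        region_name="us-east-1",
    )

    # Verify the credentials by asking STS which identity they belong to.
    identity = session.client("sts").get_caller_identity()
    print(identity["Arn"])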

Amazon Glacier is an Amazon S3 storage class that is suitable for storing ____________ & ______________. (Choose two)

Active archives & Long-term analytic data - Amazon S3 Glacier provides three retrieval options to fit your use case. Expedited retrievals typically return data in 1-5 minutes, and are best used for Active Archive use cases. Standard retrievals typically complete within 3-5 hours, and work well for less time-sensitive needs like backup data, media editing, or long-term analytics. Bulk retrievals are the lowest-cost retrieval option, returning large amounts of data within 5-12 hours.

Which of the following is an example of horizontal scaling in the AWS Cloud?

Adding more EC2 instances to handle an increase in traffic. - Horizontal Scaling: Scaling horizontally takes place through an increase in the number of resources (e.g., adding more hard drives to a storage array or adding more servers to support an application). This is a great way to build Internet-scale applications that leverage the elasticity of cloud computing. Vertical Scaling: Scaling vertically takes place through an increase in the specifications of an individual resource (e.g., upgrading a server with a larger hard drive, adding more memory, or provisioning a faster CPU). On Amazon EC2, this can easily be achieved by stopping an instance and resizing it to an instance type that has more RAM, CPU, I/O, or networking capabilities. This way of scaling can eventually hit a limit and it is not always a cost-efficient or highly available approach. However, it is very easy to implement and can be sufficient for many use cases, especially as a short-term solution.

You have set up consolidated billing for several AWS accounts. One of the accounts has purchased a number of reserved instances for 3 years. Which of the following is true regarding this scenario?

All accounts can receive the hourly cost benefit of the Reserved Instances. - For billing purposes, the consolidated billing feature of AWS Organizations treats all the accounts in the organization as one account. This means that all accounts in the organization can receive the hourly cost benefit of Reserved Instances that are purchased by any other account. For example, suppose that Fiona and John each have an account in an organization. Fiona has five Reserved Instances of the same type, and John has none. During one particular hour, Fiona uses three instances and John uses six, for a total of nine instances on the organization's consolidated bill. AWS bills five instances as Reserved Instances, and the remaining four instances as On-Demand instances.

A developer is planning to build a two-tier web application that has a MySQL database layer. Which of the following AWS database services would provide automated backups to his application?

Amazon Aurora - Amazon Aurora is a MySQL and PostgreSQL-compatible relational database built for the cloud. Amazon Aurora combines the performance and availability of traditional enterprise databases with the simplicity and cost-effectiveness of open source databases. It delivers up to five times the throughput of standard MySQL and up to three times the throughput of standard PostgreSQL. Amazon Aurora is designed to be compatible with MySQL and with PostgreSQL, so that existing applications and tools can run without requiring modification. It is available through Amazon Relational Database Service (RDS), freeing you from time-consuming administrative tasks such as provisioning, patching, backup, recovery, failure detection, and repair.

Which AWS service uses Edge Locations to cache content?

Amazon CloudFront - Amazon CloudFront is a content caching service provided by AWS that uses Edge Locations (which are AWS data centers located all around the world) to reduce network latency when delivering content to end users.

A company is planning to host an educational website on AWS. Their video courses will be streamed all around the world. Which of the following AWS services will help achieve high transfer speeds?

Amazon CloudFront - Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds, all within a developer-friendly environment.

Which of the following can be described as a global content delivery network (CDN) service?

Amazon CloudFront - Amazon CloudFront is a global content delivery network (CDN) service that gives businesses and web application developers an easy and cost effective way to distribute content (such as videos, data, applications, and APIs) with low latency and high data transfer speeds. Like other AWS services, Amazon CloudFront is a self-service, pay-per-use offering, requiring no long term commitments or minimum fees. With CloudFront, your files are delivered to end-users using a global network of edge locations. CloudFront is integrated with other AWS services such as AWS Shield for DDoS mitigation, Amazon S3, Elastic Load Balancing or Amazon EC2 as origins for your applications, and Lambda@Edge to run custom code close to your viewers.

You have deployed your application on multiple Amazon EC2 instances. Your customers complain that sometimes they can't reach your application. Which AWS service allows you to monitor the performance of your EC2 instances to assist in troubleshooting these issues?

Amazon CloudWatch - Amazon CloudWatch is a service that monitors AWS cloud resources and the applications you run on AWS. You can use Amazon CloudWatch to collect and track metrics, collect and monitor log files, set alarms, and automatically react to changes in your AWS resources. Amazon CloudWatch can monitor AWS resources such as Amazon EC2 instances, Amazon DynamoDB tables, and Amazon RDS DB instances, as well as custom metrics generated by your applications and services, and any log files your applications generate. You can use CloudWatch to detect anomalous behavior in your environments, take automated actions, troubleshoot issues, and discover insights to keep your applications running smoothly.
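
A minimal sketch of the kind of monitoring described above, using the Python SDK (boto3) and a hypothetical instance ID: the alarm fires when an EC2 instance's average CPU utilization stays above 80% for two consecutive 5-minute periods.

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    cloudwatch.put_metric_alarm(
        AlarmName="high-cpu-web-server",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # hypothetical instance
        Statistic="Average",
        Period=300,                # 5-minute evaluation periods
        EvaluationPeriods=2,
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
    )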

What is the AWS database service that allows you to upload data structured in key-value format?

Amazon DynamoDB - Amazon DynamoDB is a NoSQL database service. NoSQL databases are used for non-structured data that are typically stored in JSON-like, key-value documents.
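
As a minimal sketch (assuming an existing table named "Users" with "user_id" as its partition key, both hypothetical), this shows the key-value style of writing and reading an item in DynamoDB with boto3.

    import boto3

    table = boto3.resource("dynamodb").Table("Users")  # hypothetical table

    # Write a key-value item.
    table.put_item(Item={"user_id": "u-100", "name": "Alice", "plan": "free"})

    # Read it back by its key.
    response = table.get_item(Key={"user_id": "u-100"})
    print(response["Item"])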

Your company has a data store application that requires access to a NoSQL database. Which AWS database offering would best meet this requirement?

Amazon DynamoDB - Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale. It is a fully managed cloud database and supports both document and key-value store models. Its flexible data model, reliable performance, and automatic scaling of throughput capacity make it a great fit for mobile, web, gaming, ad tech, IoT, and many other applications.

Which of the following are examples of AWS-Managed Services, where AWS is responsible for the operational and maintenance burdens of running the service? (Choose TWO)

Amazon DynamoDB AND Amazon Elastic MapReduce - For managed services such as Amazon Elastic MapReduce (Amazon EMR) and DynamoDB, AWS is responsible for performing all the operations needed to keep the service running. Amazon EMR launches clusters in minutes. You don't need to worry about node provisioning, infrastructure setup, Hadoop configuration, or cluster tuning. Amazon EMR takes care of these tasks so you can focus on analysis. DynamoDB is serverless with no servers to provision, patch, or manage and no software to install, maintain, or operate. DynamoDB automatically scales tables up and down to adjust for capacity and maintain performance. Availability and fault tolerance are built in, eliminating the need to architect your applications for these capabilities.

Where can you store files in AWS? (Choose two)

Amazon EFS AND Amazon EBS - ** Amazon Elastic File System (Amazon EFS) provides simple, scalable, elastic file storage for use with AWS Cloud services and on-premises resources. It is easy to use and offers a simple interface that allows you to create and configure file systems quickly and easily. Amazon EFS is built to elastically scale on demand without disrupting applications, growing and shrinking automatically as you add and remove files, so your applications have the storage they need, when they need it. It is designed to provide massively parallel shared access to thousands of Amazon EC2 instances, enabling your applications to achieve high levels of aggregate throughput and IOPS that scale as a file system grows, with consistent low latencies. As a regional service, Amazon EFS is designed for high availability and durability storing data redundantly across multiple Availability Zones. ** Amazon Elastic Block Store (Amazon EBS) provides persistent block storage volumes for use with Amazon EC2 instances in the AWS Cloud. Each Amazon EBS volume is automatically replicated within its Availability Zone to protect you from component failure, offering high availability and durability.

An organization needs to analyze and process a large number of data sets. Which AWS service should they use?

Amazon EMR - Amazon EMR helps you analyze and process vast amounts of data by distributing the computational work across a cluster of virtual servers running in the AWS Cloud. The cluster is managed using an open-source framework called Hadoop. Amazon EMR lets you focus on crunching or analyzing your data without having to worry about time-consuming setup, management, and tuning of Hadoop clusters or the compute capacity they rely on.

A company is deploying a new two-tier web application in AWS. Where should the most frequently accessed data be stored so that the application's response time is optimal?

Amazon ElastiCache - Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory data store or cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory data stores, instead of relying entirely on slower disk-based databases. The primary purpose of an in-memory data store is to provide ultrafast (submillisecond latency) and inexpensive access to copies of data. Querying a database is always slower and more expensive than locating a copy of that data in a cache. Some database queries are especially expensive to perform. An example is queries that involve joins across multiple tables or queries with intensive calculations. By caching (storing) such query results, you pay the price of the query only once. Then you can quickly retrieve the data multiple times without having to re-execute the query.
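
A minimal cache-aside sketch of the pattern described above, assuming a hypothetical ElastiCache for Redis endpoint and a hypothetical (stubbed) database query: the application checks the in-memory cache first and only pays the cost of the database query on a miss.

    import json
    import redis

    cache = redis.Redis(host="my-cluster.xxxxxx.use1.cache.amazonaws.com", port=6379)  # hypothetical endpoint

    def query_database(product_id):
        # Placeholder for an expensive relational query (hypothetical).
        return {"id": product_id, "name": "example product"}

    def get_product(product_id, ttl_seconds=300):
        key = f"product:{product_id}"
        cached = cache.get(key)
        if cached is not None:
            return json.loads(cached)           # cache hit: skip the database entirely
        product = query_database(product_id)    # cache miss: run the expensive query once
        cache.setex(key, ttl_seconds, json.dumps(product))
        return product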

The identification process of an online financial services company requires that new users must complete an online interview with their security team. After verifying users' identities, the recorded interviews are only required in the event of a legal issue or a regulatory compliance breach. What is the most cost-effective service to store the recorded videos?

Amazon Glacier - Amazon Glacier is an extremely low-cost storage service that provides secure, durable, and flexible storage for long-term data backup and archival. With Amazon Glacier, customers can reliably store their data for as little as $0.004 per gigabyte per month. Amazon Glacier enables customers to offload the administrative burdens of operating and scaling storage to AWS, so that they don't have to worry about capacity planning, hardware provisioning, data replication, hardware failure detection and repair, or time-consuming hardware migrations.

A company has moved to AWS recently. Which of the following would help them ensure that the right security settings are put in place? (Choose two)

Amazon Inspector AND AWS Trusted Advisor - Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. Amazon Inspector automatically assesses applications for vulnerabilities or deviations from best practices. After performing an assessment, Amazon Inspector produces a detailed list of security findings prioritized by level of severity. These findings can be reviewed directly or as part of a detailed assessment report which is available via the Amazon Inspector console or API. To help get started quickly, Amazon Inspector includes a knowledge base of hundreds of rules mapped to common security best practices and vulnerability definitions. Examples of built-in rules include checking for remote root login being enabled, or vulnerable software versions installed. These rules are regularly updated by AWS security researchers. AWS Trusted Advisor offers a rich set of best practice checks and recommendations across five categories: cost optimization; security; fault tolerance; performance; and service limits. Like your customized cloud security expert, AWS Trusted Advisor analyzes your AWS environment and provides security recommendations to protect your AWS environment. The service improves the security of your applications by closing gaps, examining permissions, and enabling various AWS security features.

You work as an on-premises MySQL DBA. The work of database configuration, backups, patching, and DR can be time-consuming and repetitive. Your company has decided to migrate to the AWS Cloud. Which of the following can help save time on the regular database tasks so you can focus on giving users the fast performance and high availability that they need?

Amazon RDS - Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient, resizable capacity while automating time-consuming administration tasks such as hardware provisioning, operating system maintenance, database setup, patching and backups. It frees you to focus on your applications so you can give them the fast performance, high availability, security and compatibility they need.
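
As an illustration of how RDS takes on those administrative tasks, the sketch below (hypothetical identifiers and a placeholder password) launches a managed MySQL instance with automated backups enabled simply by setting a retention period.

    import boto3

    rds = boto3.client("rds")

    rds.create_db_instance(
        DBInstanceIdentifier="app-mysql",        # hypothetical identifier
        Engine="mysql",
        DBInstanceClass="db.t3.micro",
        AllocatedStorage=20,                      # GiB
        MasterUsername="admin",
        MasterUserPassword="change-me-please",    # placeholder only
        BackupRetentionPeriod=7,                  # days of automated backups managed by RDS
    )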

Which service provides object-level storage in AWS?

Amazon S3 - Amazon S3 is object-level storage built to store and retrieve any amount of data from anywhere - websites and mobile apps, corporate applications, and data from IoT sensors or devices. It is designed to deliver 99.999999999% durability, and stores data for millions of applications used by market leaders in every industry.

Which of the following S3 storage classes is ideal for data with unpredictable access patterns?

Amazon S3 Intelligent-Tiering - The S3 Intelligent-Tiering storage class is designed to optimize costs by automatically moving data to the most cost-effective access tier, without performance impact or operational overhead. It works by storing objects in two access tiers: one tier that is optimized for frequent access and another lower-cost tier that is optimized for infrequent access. For a small monthly monitoring and automation fee per object, Amazon S3 monitors access patterns of the objects in S3 Intelligent-Tiering, and moves the ones that have not been accessed for 30 consecutive days to the infrequent access tier. If an object in the infrequent access tier is accessed, it is automatically moved back to the frequent access tier. There are no retrieval fees when using the S3 Intelligent-Tiering storage class, and no additional tiering fees when objects are moved between access tiers. It is the ideal storage class for long-lived data with access patterns that are unknown or unpredictable.
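
A minimal sketch (hypothetical bucket and key) of uploading an object directly into the S3 Intelligent-Tiering storage class so that S3 can move it between access tiers automatically.

    import boto3

    s3 = boto3.client("s3")

    s3.put_object(
        Bucket="my-analytics-bucket",             # hypothetical bucket
        Key="datasets/events-2023.json",
        Body=b'{"example": "payload"}',
        StorageClass="INTELLIGENT_TIERING",
    )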

Which of the following is NOT a benefit of Amazon S3? (Choose TWO)

Amazon S3 can run any type of application or backend system AND Amazon S3 can be scaled manually to store and retrieve any amount of data from anywhere. - "Amazon S3 can run any type of application or backend system" is not a benefit of S3 and thus is a correct answer. Amazon S3 is a storage service, not a compute service. "Amazon S3 can be scaled manually to store and retrieve any amount of data from anywhere" is not a benefit of S3 and thus is a correct answer. Amazon S3 scales automatically to store and retrieve any amount of data from anywhere.

Which service is used to ensure that messages between software components are not lost if one or more components fail?

Amazon SQS - Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available. SQS lets you decouple application components so that they run independently, increasing the overall fault tolerance of the system. Multiple copies of every message are stored redundantly across multiple availability zones so that they are available whenever needed.
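
A minimal sketch (hypothetical queue URL) of the send/receive/delete cycle that lets a producer and a consumer exchange messages without depending on each other directly.

    import boto3

    sqs = boto3.client("sqs")
    queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"  # hypothetical queue

    # Producer: enqueue a message and return immediately.
    sqs.send_message(QueueUrl=queue_url, MessageBody='{"order_id": 42}')

    # Consumer: poll for work at its own pace, then delete what it has processed.
    messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1,
                                   WaitTimeSeconds=10)
    for msg in messages.get("Messages", []):
        print(msg["Body"])
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])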

What is the AWS service that provides a virtual network dedicated to your AWS account?

Amazon VPC - Amazon Virtual Private Cloud (Amazon VPC) allows you to carve out a portion of the AWS Cloud that is dedicated to your AWS account. Amazon VPC enables you to launch AWS resources into a virtual network that you've defined. This virtual network closely resembles a traditional network that you'd operate in your own data center, with the benefits of using the scalable infrastructure of AWS.
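
As a minimal sketch of defining such a virtual network programmatically, the example below creates a VPC and one subnet inside it; the CIDR ranges are illustrative assumptions.

    import boto3

    ec2 = boto3.client("ec2")

    # Create the isolated network with an illustrative address range.
    vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
    vpc_id = vpc["Vpc"]["VpcId"]

    # Carve out a subnet inside the VPC.
    ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")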

In order to implement best practices when dealing with a "Single Point of Failure," you should aim to build as much automation as possible in both detecting and reacting to failure. Which of the following AWS services would help? (Choose two)

Auto Scaling and ELB - You should aim to build as much automation as possible in both detecting and reacting to failure. You can use services like ELB and Amazon Route53 to configure health checks and mask failure by only routing traffic to healthy endpoints. In addition, Auto Scaling can be configured to automatically replace unhealthy nodes. You can also replace unhealthy nodes using the Amazon EC2 auto-recovery feature or services such as AWS OpsWorks and AWS Elastic Beanstalk. It won't be possible to predict every possible failure scenario on day one. Make sure you collect enough logs and metrics to understand normal system behavior. After you understand that, you will be able to set up alarms that trigger automated response or manual intervention.

Which of the below options are related to the reliability of AWS? (Choose two)

Automatically provisioning new resources to meet demand. AND Ability to recover quickly from failures. - The reliability term encompasses the ability of a system to recover from infrastructure or service disruptions, dynamically acquire computing resources to meet demand, and mitigate disruptions such as misconfigurations or transient network issues. The automatic provisioning of resources and the ability to recover from failures meet these criteria.

The principle "design for failure and nothing will fail" is very important when designing your AWS Cloud architecture. Which of the following would help adhere to this principle? (Choose two)

Availability Zones AND AWS Elastic Load Balancer - Each AWS Region is a separate geographic area. Each AWS Region has multiple, isolated locations known as Availability Zones. When designing your AWS Cloud architecture, you should make sure that your system will continue to run even if failures happen. You can achieve this by deploying your AWS resources in multiple Availability Zones. Availability Zones are isolated from each other; therefore, if one Availability Zone goes down, the other AZs will still be up and running, and hence your application will be more fault tolerant. In addition to Availability Zones, you can build a disaster recovery solution by deploying your AWS resources in other regions. If an entire region goes down, you will still have resources in another region able to continue to provide a solution. Finally, you can use the Elastic Load Balancer to regularly perform health checks and distribute traffic only to the healthy instances.

Using Amazon RDS falls under the shared responsibility model. Which of the following are customer responsibilities? (Choose two)

Building the relational database schema. AND Managing the database settings. - Amazon RDS manages the work involved in setting up a relational database, from provisioning the infrastructure capacity you request to installing the database software. Once your database is up and running, Amazon RDS automates common administrative tasks such as performing backups and patching the software that powers your database. With optional Multi-AZ deployments, Amazon RDS also manages synchronous data replication across Availability Zones with automatic failover. Since Amazon RDS provides native database access, you interact with the relational database software as you normally would. This means you're still responsible for managing the database settings that are specific to your application. You'll need to build the relational schema that best fits your use case and are responsible for any performance tuning to optimize your database for your application's workflow.

One of the most important AWS best-practices to follow is the cloud architecture principle of elasticity. How does following this principle improve your architecture's design?

By automatically provisioning the required AWS resources based on changes in demand - Before cloud computing, you had to overprovision infrastructure to ensure you had enough capacity to handle your business operations at the peak level of activity. Now, you can provision the amount of resources that you actually need, knowing you can instantly scale up or down with the needs of your business. This reduces costs and improves your ability to meet your users' demands. The concept of Elasticity involves the ability of a service to automatically scale its resources up or down based on changes in demand. For example, Amazon EC2 Autoscaling can help automate the process of adding or removing Amazon EC2 instances as demand increases or decreases.
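
A minimal sketch of elasticity in practice (hypothetical Auto Scaling group name): a target tracking policy that keeps average CPU at roughly 50%, adding instances when demand rises and removing them when it falls.

    import boto3

    autoscaling = boto3.client("autoscaling")

    autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-asg",           # hypothetical existing group
        PolicyName="keep-cpu-at-50",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": 50.0,                  # scale out/in to hold ~50% average CPU
        },
    )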

How can you view the distribution of AWS spending in one of your AWS accounts?

By using AWS Cost Explorer - AWS Cost Explorer is a free tool that you can use to view your costs and usage. You can view data up to the last 13 months, forecast how much you are likely to spend for the next three months, and get recommendations for what Reserved Instances to purchase. You can use AWS Cost Explorer to see patterns in how much you spend on AWS resources over time, identify areas that need further inquiry, and see trends that you can use to understand your costs. You can also specify time ranges for the data, and view time data by day or by month.
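
The same cost-and-usage data can also be pulled programmatically; a minimal sketch with an illustrative date range, grouping one month of spend by service.

    import boto3

    ce = boto3.client("ce")

    result = ce.get_cost_and_usage(
        TimePeriod={"Start": "2023-01-01", "End": "2023-02-01"},  # illustrative range
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )

    # Print spend per service for the month.
    for group in result["ResultsByTime"][0]["Groups"]:
        print(group["Keys"][0], group["Metrics"]["UnblendedCost"]["Amount"])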

Select TWO examples of the AWS shared controls.

Configuration Management AND Patch Management - Shared Controls are controls that apply to both the infrastructure layer and customer layers, but in completely separate contexts or perspectives. In a shared control, AWS provides the requirements for the infrastructure and the customer must provide their own control implementation within their use of AWS services. Examples include: ** Patch Management - AWS is responsible for patching the underlying hosts and fixing flaws within the infrastructure, but customers are responsible for patching their guest OS and applications. ** Configuration Management - AWS maintains the configuration of its infrastructure devices, but a customer is responsible for configuring their own guest operating systems, databases, and applications. ** Awareness & Training - AWS trains AWS employees, but a customer must train their own employees.

Under the shared responsibility model, which of the following is the responsibility of AWS?

Configuring infrastructure devices - Under the shared responsibility model, AWS is responsible for the hardware and software that run AWS services. This includes patching the infrastructure software and configuring infrastructure devices. As a customer, you are responsible for implementing best practices for data encryption, patching guest operating system and applications, identity and access management, and network & firewall configurations.

What are the security aspects that the AWS customer is responsible for? (Choose two)

Configuring network access rules AND Set password complexity rules - The customer is responsible for securing their network by configuring Security Groups, Network Access Control Lists (NACLs), and Routing Tables. The customer is also responsible for setting a password policy on their AWS account that specifies the complexity and mandatory rotation periods for their IAM users' passwords.
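
A minimal sketch of setting such an account password policy; the specific rules shown are illustrative choices, not AWS defaults.

    import boto3

    iam = boto3.client("iam")

    iam.update_account_password_policy(
        MinimumPasswordLength=14,
        RequireSymbols=True,
        RequireNumbers=True,
        RequireUppercaseCharacters=True,
        RequireLowercaseCharacters=True,
        MaxPasswordAge=90,            # force rotation every 90 days
        PasswordReusePrevention=5,    # remember the last 5 passwords
    )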

You have discovered that some AWS resources are being used in malicious activities that could compromise your data. What should you do?

Contact the AWS Abuse team - The AWS Abuse team can assist you when AWS resources are being used to engage in abusive behavior such as spam, port scanning, denial-of-service (DoS) attacks, hosting objectionable or copyrighted content, and distributing malware.

An organization has decided to reserve EC2 compute capacity for three years in order to get more discounts. Their application workloads are likely to change during this time period. What is the EC2 Reserved Instance (RI) type that will allow them to modify the reservation whenever they need to?

Convertible RIs - Convertible RIs provide a discount (up to 54% off On-Demand) and the capability to change the attributes of the RI as long as the exchange results in the creation of Reserved Instances of equal or greater value. These attributes include instance family, instance type, platform, scope, and tenancy.

Sarah has deployed an application in the Northern California (us-west-1) region. After examining the application's traffic, she notices that about 30% of the traffic is coming from Asia. What can she do to reduce latency for the users in Asia?

Create a CDN using CloudFront, so that content is cached at Edge Locations close to and in Asia - CloudFront is AWS's content delivery network (CDN) service. Amazon CloudFront employs a global network of edge locations and regional edge caches that cache copies of your content close to your end-users. Amazon CloudFront ensures that end-user requests are served by the closest edge location. As a result, end-user requests travel a short distance, reducing latency and improving the overall performance.

Based on the AWS Shared Responsibility Model, which of the following are the sole responsibility of AWS? (Choose two)

Creating hypervisors AND Hardware maintenance - AWS is responsible for items such as the physical security of its data centers, creating hypervisors, replacement of old disk drives, and patch management of the infrastructure. The customers are responsible for items such as building application schema, analyzing network performance, configuring security groups and network ACLs and encrypting their data.

Which of the below is a best-practice when building applications on AWS?

Decouple the components of the application so that they run independently - An application should be designed in a way that reduces interdependencies between its components. A change or a failure in one component should not cascade to other components. If the components of an application are tightly coupled (interconnected) and one component fails, the entire application will also fail. Amazon SQS and Amazon SNS are powerful tools that help you build loosely coupled applications. SQS and SNS can be integrated together to decouple application components so that they run independently, increasing the overall fault tolerance of the application. Understanding how SQS and SNS work is not required at the Cloud Practitioner level, but consider a simple example: say you have two components in your application, Component A and Component B. Component A sends messages (jobs) to Component B to process. What happens if Component A sends a large number of messages at the same time? Component B may be overwhelmed and fail, and the entire application will fail with it. SQS acts as a middleman: it receives and stores messages from Component A, and Component B pulls and processes messages at its own pace. This way, both components run independently from each other.

A company has developed an eCommerce web application in AWS. What should they do to ensure that the application has the highest level of availability?

Deploy the application across multiple Regions and Availability Zones - The AWS Global infrastructure is built around Regions and Availability Zones (AZs). Each AWS Region is a separate geographic area. Each AWS Region has multiple, isolated locations known as Availability Zones. Availability Zones in a region are connected with low latency, high throughput, and highly redundant networking. These Availability Zones offer AWS customers an easier and more effective way to design and operate applications and databases, making them more highly available, fault tolerant, and scalable than traditional single datacenter infrastructures or multi-datacenter infrastructures. In addition to replicating applications and data across multiple data centers in the same Region using Availability Zones, you can also choose to increase redundancy and fault tolerance further by replicating data between geographic Regions (especially if you are serving customers from all over the world). You can do so using both private, high speed networking and public internet connections to provide an additional layer of business continuity, or to provide low latency access across the globe.

How are AWS customers billed for Linux-based Amazon EC2 usage?

EC2 instances will be billed on one second increments, with a minimum of one minute. - Pricing is per instance-hour consumed for each instance, from the time an instance is launched until it is terminated or stopped. Each partial instance-hour consumed will be billed per-second (minimum of 1 minute) for Linux or Ubuntu Instances and as a full hour for all other instance types.

What do you gain from setting up consolidated billing for five different AWS accounts under another master account?

Each AWS account gets volume discounts. - AWS consolidated billing enables an organization to consolidate payments for multiple Amazon Web Services (AWS) accounts within a single organization by designating a single paying account. For billing purposes, AWS treats all the accounts on the consolidated bill as one account. Some services, such as Amazon EC2 and Amazon S3, have volume pricing tiers across certain usage dimensions that give the user lower prices the more they use the service. For example, if you use 50 TB in each of three accounts, you would normally be charged $23*50*3 (because they are three different accounts), but with consolidated billing you would be charged $23*50 + $22*50*2 (because they are treated as one account and the additional usage falls into a cheaper pricing tier), which means that you would save $100.

What are two advantages of using Cloud Computing over using traditional data centers? (Choose two)

Eliminating Single Points of Failure (SPOFs). AND Distributed infrastructure - These are things that traditional web hosting cannot provide: **High-availability (eliminating single points of failure): A system is highly available when it can withstand the failure of an individual component or multiple components, such as hard disks, servers, and network links. The best way to understand and avoid the single point of failure is to begin by making a list of all major points of your architecture. You need to break the points down and understand them further. Then, review each of these points and think what would happen if any of these failed. AWS gives you the opportunity to automate recovery and reduce disruption at every layer of your architecture. **Distributed infrastructure: The AWS Cloud operates in over 60 Availability Zones within over 20 geographic Regions around the world, with announced plans for more Availability Zones and Regions, allowing you to reduce latency to users from all around the world. **On-demand infrastructure for scaling applications or tasks: AWS allows you to provision the required resources for your application in minutes and also allows you to stop them when you don't need them. **Cost savings: You don't have to run your own data center for internal or private servers, so your IT department doesn't have to make bulk purchases of servers which may never get used, or may be inadequate. The "pay as you go" model from AWS allows you to pay only for what you use and the ability to scale down to avoid over-spending. With AWS you don't have to pay an entire IT department to maintain that hardware -- you don't even have to pay an accountant to figure out how much hardware you can afford or how much you need to purchase.

What should you do in order to keep the data on EBS volumes safe? (Choose two)

Ensure that EBS data is encrypted at rest AND Create EBS snapshots - Creating snapshots of EBS Volumes can help ensure that you have a backup of your EBS volumes just in case any issues arise. Amazon EBS encryption offers a straight-forward encryption solution for your EBS resources that doesn't require you to build, maintain, and secure your own key management infrastructure. Encryption operations occur on the servers that host EC2 instances, ensuring the security of both data-at-rest and data-in-transit between an instance and its attached EBS storage.
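
A minimal sketch (hypothetical volume ID and Availability Zone) of the two safeguards described above: taking a snapshot of an existing volume and creating a new volume that is encrypted at rest.

    import boto3

    ec2 = boto3.client("ec2")

    # Back up an existing volume with a snapshot.
    ec2.create_snapshot(
        VolumeId="vol-0123456789abcdef0",         # hypothetical volume
        Description="Nightly backup of data volume",
    )

    # Create a new volume that is encrypted at rest.
    ec2.create_volume(
        AvailabilityZone="us-east-1a",
        Size=100,                                  # GiB
        Encrypted=True,
    )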

An organization has a large number of technical employees who operate their AWS Cloud infrastructure. What does AWS provide to help organize them in teams and then assign the appropriate permissions for each team?

IAM Groups - An IAM group is a collection of IAM users that are managed as a unit. Groups let you specify permissions for multiple users, which can make it easier to manage the permissions for those users. For example, you could have a group called Admins and give that group the types of permissions that administrators typically need. Any user in that group automatically has the permissions that are assigned to the group. If a new user joins your organization and needs administrator privileges, you can assign the appropriate permissions by adding the user to that group. Similarly, if a person changes jobs in your organization, instead of editing that user's permissions, you can remove him or her from the old groups and add him or her to the appropriate new groups.
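
A minimal sketch of the group workflow described above: create a group, attach a managed policy to it, and add a user (the user name is hypothetical). Anyone added to the group inherits its permissions automatically.

    import boto3

    iam = boto3.client("iam")

    iam.create_group(GroupName="Admins")
    iam.attach_group_policy(
        GroupName="Admins",
        PolicyArn="arn:aws:iam::aws:policy/AdministratorAccess",
    )
    iam.add_user_to_group(GroupName="Admins", UserName="new-hire")  # hypothetical user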

Using Amazon EC2 falls under which of the following cloud computing models?

IaaS - Infrastructure as a Service (IaaS) contains the basic building blocks for Cloud IT and typically provides access to networking features, computers (virtual or on dedicated hardware), and data storage space. Infrastructure as a Service provides you with the highest level of flexibility and management control over your IT resources and is most similar to existing IT resources that many IT departments and developers are familiar with today. For example, a service such as Amazon Elastic Compute Cloud (Amazon EC2) is categorized as Infrastructure as a Service (IaaS) and requires the customer to perform all of the configuration and management tasks. Customers that deploy an Amazon EC2 instance are responsible for management of the guest operating system (including updates and security patches), any application software or utilities installed by the customer on the instances, and the configuration of the AWS-provided firewall (called a security group) on each instance.

Adjusting compute capacity dynamically to reduce cost is an implementation of which AWS cloud best practice?

Implement elasticity - In the traditional data center-based model of IT, once infrastructure is deployed, it typically runs whether it is needed or not, and all the capacity is paid for, regardless of how much it gets used. In the cloud, resources are elastic, meaning they can instantly grow (to maintain performance) or shrink (to reduce costs).

You want to create a backup of your data in another geographical location. Where should you create this backup?

In another Region - Since you want to store your backup in another geographical location, you should create it in another AWS Region. An AWS Region is a physical location around the world where AWS clusters data centers. AWS calls each group of logical data centers an Availability Zone. Each AWS Region consists of multiple, isolated, and physically separate Availability Zones within a geographic area.

What does Amazon ElastiCache provide?

In-memory caching for read-heavy applications. - ElastiCache is a web service that makes it easy to set up, manage, and scale a distributed in-memory data store or cache environment in the cloud. It provides a high-performance, scalable, and cost-effective caching solution, while removing the complexity associated with deploying and managing a distributed cache environment. The in-memory caching provided by Amazon ElastiCache can be used to significantly improve latency and throughput for many read-heavy application workloads (such as social networking, gaming, media sharing and Q&A portals) or compute-intensive workloads (such as a recommendation engine). In-memory caching improves application performance by storing critical pieces of data in memory for low-latency access. Cached information may include the results of common database queries or the results of computationally-intensive calculations.

What are the benefits of having infrastructure hosted in AWS? (Choose two)

Increase speed and agility AND All of the physical security and most of the data/network security are taken care of for you - All of the physical security is taken care of for you. Amazon data centers are surrounded by three physical layers of security. "Nothing can go in or out without setting off an alarm". It's important to keep bad guys out, but equally important to keep the data in, which is why Amazon monitors incoming gear, tracking every disk that enters the facility. And "if it breaks we don't return the disk for warranty. The only way a disk leaves our data center is when it's confetti." Most (not all) of the data and network security is taken care of for you. When we talk about data/network security, AWS has a "shared responsibility model" where AWS and the customer share the responsibility of securing them. For example, the customer is responsible for creating rules to secure their network traffic using security groups and is also responsible for protecting data with encryption. "Increase speed and agility" is also a correct answer because in a cloud computing environment, new IT resources are only a click away, which means it requires less time to make those resources available to developers - from weeks to just minutes. This results in a dramatic increase in agility for the organization, since the cost and time it takes to experiment and develop is significantly lower.

A company is introducing a new product to their customers, and is expecting a surge in traffic to their web application. As part of their Enterprise Support plan, which of the following provides the company with architectural and scaling guidance?

Infrastructure Event Management - AWS Infrastructure Event Management is a short-term engagement with AWS Support, included in the Enterprise-level Support product offering, and available for additional purchase for Business-level Support subscribers. AWS Infrastructure Event Management partners with your technical and project resources to gain a deep understanding of your use case and provide architectural and scaling guidance for an event. Common use-case examples for AWS Event Management include advertising launches, new product launches, and infrastructure migrations to AWS.

Availability Zones within a Region are connected over low-latency links. Which of the following is a benefit of these links?

Make synchronous replication of your data possible - Each AWS Region contains multiple distinct locations, or Availability Zones. Each Availability Zone is engineered to be independent from failures in other Availability Zones. An Availability Zone is a data center, and in some cases, an Availability Zone consists of multiple data centers. Availability Zones within a Region provide inexpensive, low-latency network connectivity to other zones in the same Region. This allows you to replicate data across data centers in a synchronous manner so that failover can be automated and appear transparent to your users.

What are the Amazon RDS features that can be used to improve the availability of your database? (Choose two)

Multi-AZ Deployment AND Read Replicas - In a Multi-AZ deployment, Amazon RDS automatically provisions and maintains a synchronous standby replica in a different Availability Zone. The primary DB instance is synchronously replicated across Availability Zones to a standby replica to provide data redundancy, eliminate I/O freezes, and minimize latency spikes during system backups. Running a DB instance with high availability can enhance availability during planned system maintenance, and help protect your databases against DB instance failure and Availability Zone disruption. Amazon RDS Read Replicas provide enhanced performance and durability for database (DB) instances. This feature makes it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads. You can create one or more replicas of a given source DB Instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput. Read replicas provide a complementary availability mechanism to Amazon RDS Multi-AZ Deployments. You can promote a read replica if the source DB instance fails. You can also replicate DB instances across AWS Regions as part of your disaster recovery strategy. This functionality complements the synchronous replication, automatic failure detection, and failover provided with Multi-AZ deployments.
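
A minimal sketch (hypothetical DB identifiers) of the two availability features above: converting an existing instance to Multi-AZ, and creating a read replica from it to offload read-heavy traffic.

    import boto3

    rds = boto3.client("rds")

    # Enable a synchronous standby in another Availability Zone.
    rds.modify_db_instance(
        DBInstanceIdentifier="app-mysql",          # hypothetical source instance
        MultiAZ=True,
        ApplyImmediately=True,
    )

    # Create a read replica to serve high-volume read traffic.
    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="app-mysql-replica-1",
        SourceDBInstanceIdentifier="app-mysql",
    )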

Which of the following does NOT belong to the AWS Cloud Computing models?

Networking as a Service (NaaS) - There are three Cloud Computing Models: 1) Infrastructure as a Service (IaaS) 2) Platform as a Service (PaaS) 3) Software as a Service (SaaS)

Which of the following reserved instance payment options result in you paying a discounted hourly rate throughout the duration of the term? (Choose two)

No Upfront option AND Partial Upfront option - You can choose between three payment options when you purchase a Standard or Convertible Reserved Instance: 1- No Upfront: you pay nothing upfront and are charged a discounted hourly rate for every hour within the term. 2- Partial Upfront: you make a low upfront payment and are then charged a discounted hourly rate for the duration of the term. 3- All Upfront: you pay for the entire term with one upfront payment, so no hourly charge applies. Hence, the correct answers are No Upfront and Partial Upfront, as both result in a discounted hourly rate throughout the term.

You want to run a questionnaire application for only one day (without interruption), which Amazon EC2 purchase option should you use?

On-demand instances - With On-Demand instances, you pay for compute capacity by the hour with no long-term commitments. You can increase or decrease your compute capacity depending on the demands of your application and only pay the specified hourly rate for the instances you use. The use of On-Demand instances frees you from the costs and complexities of planning, purchasing, and maintaining hardware and transforms what are commonly large fixed costs into much smaller variable costs. On-Demand instances also remove the need to buy "safety net" capacity to handle periodic traffic spikes.

According to the AWS Acceptable Use Policy, which of the following statements is true regarding penetration testing of EC2 instances?

Penetration testing can be performed by the customer on their own instances without prior authorization from AWS - AWS customers are welcome to carry out security assessments and penetration tests against their AWS infrastructure without prior approval for 8 services: 1- Amazon EC2 instances, NAT Gateways, and Elastic Load Balancers. 2- Amazon RDS. 3- Amazon CloudFront. 4- Amazon Aurora. 5- Amazon API Gateways. 6- AWS Lambda and Lambda Edge functions. 7- Amazon Lightsail resources. 8- Amazon Elastic Beanstalk environments.

What does the AWS Personal Health Dashboard provide? (Choose two)

Personalized view of AWS service health AND Detailed troubleshooting guidance to address AWS events impacting your resources - AWS Personal Health Dashboard provides alerts and remediation guidance when AWS is experiencing events that may impact you. While the Service Health Dashboard displays the general status of AWS services, Personal Health Dashboard gives you a personalized view into the performance and availability of the AWS services underlying your AWS resources.

What is the advantage of the AWS-recommended practice of decoupling applications?

Reduces inter-dependencies so that failures do not impact other components of the application. - As application complexity increases, a desirable attribute of an IT system is that it can be broken into smaller, loosely coupled components. This means that IT systems should be designed in a way that reduces interdependencies—a change or a failure in one component should not cascade to other components. On the other hand if the components of an application are tightly coupled and one component fails, the entire application will also fail. Therefore when designing your application, you should always decouple its components.

A company needs to host a database in Amazon RDS for at least three years. Which of the following options would be the most cost-effective solution?

Reserved instances (Partial Upfront) - Since the database server will be hosted for a period of at least three years, then it is better to use the RDS Reserved Instances as it provides you with a significant discount compared to the On-Demand Instance pricing for the DB instance. With the Partial Upfront option, you make a low upfront payment and are then charged a discounted hourly rate for the instance for the duration of the Reserved Instance term. The Partial Upfront option is more cost-effective than the No upfront option (The more you spend upfront the more you save).

Which statement is true regarding the AWS shared responsibility model?

Responsibilities vary depending on the services used. - Customers should be aware that their responsibilities may vary depending on the AWS services chosen. For example, when using Amazon EC2, you are responsible for applying operating system and application security patches regularly. However, such patches are applied automatically when using Amazon RDS.

Which service provides DNS in the AWS cloud?

Route 53 - Amazon Route 53 is a global service that provides highly available and scalable Domain Name System (DNS) services, domain name registration, and health-checking web services. It is designed to give developers and businesses an extremely reliable and cost effective way to route end users to Internet applications by translating names like example.com into the numeric IP addresses, such as 192.0.2.1, that computers use to connect to each other. Route 53 also simplifies the hybrid cloud by providing recursive DNS for your Amazon VPC and on-premises networks over AWS Direct Connect or AWS VPN.

What does AWS Snowball provide?

Secure transfer of large amounts of data into and out of the AWS Cloud - Snowball is a petabyte-scale data transport solution that uses devices designed to be secure to transfer large amounts of data into and out of the AWS Cloud. Using Snowball addresses common challenges with large-scale data transfers, including high network costs, long transfer times, and security concerns. Customers today use Snowball to migrate analytics data, genomics data, video libraries, image repositories, and backups, and to archive data as part of data center shutdowns, tape replacement, or application migration projects. Transferring data with Snowball is simple, fast, more secure, and can be as little as one-fifth the cost of transferring data via high-speed Internet.

Which of the following can help protect your EC2 instances from DDoS attacks? (Choose two)

Security Groups AND Network Access Control Lists - A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. A Network Access Control List (NACL) acts as a firewall for controlling traffic in and out of one or more subnets. Therefore, if they are configured properly, they can protect your instances from DDoS attacks.

Jessica is managing an e-commerce web application in AWS. The application is hosted on six EC2 instances. One day, three of the instances crashed; but none of her customers were affected. What has Jessica done correctly in this scenario?

She has properly built a fault tolerant system - Fault tolerance is the property that enables a system to continue operating properly in the event of the failure of some (one or more faults within) of its components. Visitors to a website expect the website to be available irrespective of when they visit. For example, when someone wants to visit Jessica's website to purchase a product, whether it is at 9:00 AM on a Monday or 3:00 PM on a holiday, they expect that the website will be available and ready to accept their purchase. Failing to meet these expectations can cause loss of business and contribute to the development of a negative reputation for the website owner, resulting in lost revenue.

You are working on a project that involves creating thumbnails of millions of images; however, consistent uptime is not really an issue, and continuous processing is not required. Which type of EC2 buying option would be the most cost-effective?

Spot Instances - Spot instances provide a discount (up to 90%) off the On-Demand price. The Spot price is determined by long-term trends in supply and demand for EC2 spare capacity. If the Spot price exceeds the maximum price you specify for a given instance or if capacity is no longer available, your instance will automatically be interrupted. Spot Instances are a cost-effective choice if you can be flexible about when your applications run and if you don't mind if your applications get interrupted. For example, Spot Instances are well-suited for data analysis, batch jobs, background processing, and optional tasks.

When using the AWS TCO tool, what information is required to calculate the potential savings of using AWS vs. on-premises?

The number of on-premises virtual machines - The AWS TCO (Total Cost of Ownership) Calculator provides directional guidance on possible realized savings when using AWS. This tool is built on an underlying calculation model that generates a fair assessment of value that a customer may achieve given the data provided by the user, which includes the number of servers migrated to AWS, the server type, the number of processors per server, and so on. The AWS TCO tool only asks you about server and storage configuration details, but if you are going to perform the TCO analysis yourself, you should consider other factors such as cooling and power consumption, data center space, IT labor cost, and so on. The AWS TCO tool does not ask you to provide information about your current power and cooling consumption, data center space, or IT labor costs. The AWS TCO tool estimates these costs based on specific assumptions for on-premises, co-location, and AWS environments. To understand the TCO tool better, just go to https://awstcocalculator.com/, enter some values for the fields presented, and then click "Calculate TCO" at the bottom.

Which statement is correct with regards to AWS service limits? (Choose two)

You can use the AWS Trusted Advisor to monitor your service limits. AND You can contact AWS support to increase the service limits. - Understanding your service limits (and how close you are to them) is an important part of managing your AWS deployments - continuous monitoring allows you to request limit increases or shut down resources before the limit is reached. One of the easiest ways to do this is via AWS Trusted Advisor's Service Limit Dashboard. AWS maintains service limits for each account to help guarantee the availability of AWS resources, as well as to minimize billing risks for new customers. Some service limits are raised automatically over time as you use AWS, though most AWS services require that you request limit increases manually. Most service limit increases can be requested through the AWS Support Center by choosing Create Case and then choosing Service Limit Increase.

Which of the following is NOT correct regarding Amazon EC2 On-demand instances?

You have to pay a start-up fee when launching a new instance for the first time - This statement is incorrect: there are no start-up or minimum fees for On-Demand instances; you pay only for the compute capacity you actually use.

What does the "Principle of Least Privilege" refer to?

You should grant your users only the permissions they need when they need them and nothing more. - The principle of least privilege is one of the most important security practices and it means granting users the required permissions to perform the tasks entrusted to them and nothing more. The security administrator determines what tasks users need to perform and then attaches the policies that allow them to perform only those tasks. You should start with a minimum set of permissions and grant additional permissions when necessary. Doing so is more secure than starting with permissions that are too lenient and then trying to tighten them down.
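
A minimal sketch of what least privilege looks like in an IAM policy: it allows reading objects from one specific (hypothetical) bucket and nothing else, rather than granting broad S3 or administrator access.

    import json
    import boto3

    iam = boto3.client("iam")

    policy_document = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject"],
                "Resource": "arn:aws:s3:::reports-bucket/*",   # hypothetical bucket
            }
        ],
    }

    iam.create_policy(
        PolicyName="read-reports-only",
        PolicyDocument=json.dumps(policy_document),
    )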

