AWS Cloud Computing

Amazon Quantum Ledger Database (Amazon QLDB)

Amazon Quantum Ledger Database (Amazon QLDB) is a ledger database service. You can use Amazon QLDB to review a complete history of all the changes that have been made to your application data.

What is Cloud Service Provider (CSP)?

A Cloud Service Provider (CSP) is a company that:
- Provides multiple cloud services, e.g., tens to hundreds of services.
- Those cloud services can be chained together to create cloud architectures.
- Those cloud services are accessible via a single unified API, e.g., the AWS API.
- Those cloud services use metered billing based on usage, e.g., per second or per hour.
- Those cloud services have rich monitoring built in, e.g., AWS CloudTrail.
- Those cloud services include an Infrastructure as a Service (IaaS) offering.
- Those cloud services offer automation via Infrastructure as Code (IaC).

If a company offers multiple cloud services under a single UI but does not meet most or all of these requirements, it is referred to as a Cloud Platform, e.g., Twilio, HashiCorp, Databricks.

Common Cloud Services

A cloud service provider can have hundreds of cloud services that are grouped into various types of services. The four most common types of services for IaaS are:
- Compute: Imagine having a virtual computer that can run applications, programs, and code.
- Networking: Imagine having a virtual network that defines internet connections, network isolation between services, or outbound connections to the Internet.
- Storage: A virtual hard drive that can store files.
- Databases: A virtual database for storing reporting data, or a database for a general-purpose web application.

Amazon RDS database engines

Amazon RDS is available on six database engines, which optimize for memory, performance, or input/output (I/O). Supported database engines include: Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle Database, and Microsoft SQL Server.

Using Amazon RDS falls under the shared responsibility model. Which of the following are customer responsibilities? (Choose TWO)

Amazon RDS manages the work involved in setting up a relational database, from provisioning the infrastructure capacity you request to installing the database software. Once your database is up and running, Amazon RDS automates common administrative tasks such as performing backups and patching the software that powers your database. With optional Multi-AZ deployments, Amazon RDS also manages synchronous data replication across Availability Zones with automatic failover. Since Amazon RDS provides native database access, you interact with the relational database software as you normally would. This means you're still responsible for managing the database settings that are specific to your application. You'll need to build the relational schema that best fits your use case and are responsible for any performance tuning to optimize your database for your application's workflow.

Amazon Redshift

Amazon Redshift is a data warehousing service that you can use for big data analytics. It offers the ability to collect data from many sources and helps you to understand relationships and trends across your data.

Which AWS Service can perform health checks on Amazon EC2 instances?

Amazon Route 53 provides highly available and scalable Domain Name System (DNS), domain name registration, and health-checking web services. It is designed to give developers and businesses an extremely reliable and cost-effective way to route end users to Internet applications by translating names like example.com into the numeric IP addresses, such as 192.0.2.1, that computers use to connect to each other. Route 53 also offers health checks to monitor the health and performance of your application as well as your web servers and other resources. Route 53 can be configured to route traffic only to the healthy endpoints to achieve greater levels of fault tolerance in your applications.

Amazon S3 Outposts

Amazon S3 Outposts delivers object storage to your on-premises AWS Outposts environment. Amazon S3 Outposts is designed to store data durably and redundantly across multiple devices and servers on your Outposts. It works well for workloads with local data residency requirements that must satisfy demanding performance needs by keeping data close to on-premises applications.

Which of the following can be used to protect data at rest on Amazon S3?

Amazon S3 provides a number of security features for the protection of data at rest, which you can use or not depending on your threat profile:
1- Permissions: Use bucket-level or object-level permissions alongside IAM policies to protect resources from unauthorized access and to prevent information disclosure, data integrity compromise, or deletion.
2- Versioning: Amazon S3 supports object versions. Versioning is disabled by default. Enable versioning to store a new version for every modified or deleted object from which you can restore compromised objects if necessary.
3- Replication: Although Amazon S3 stores your data across multiple geographically diverse Availability Zones by default, compliance requirements might dictate that you store data at even greater distances. Cross-region replication (CRR) allows you to replicate data between distant AWS Regions to help satisfy these requirements. CRR enables automatic, asynchronous copying of objects across buckets in different AWS Regions.
4- Encryption - server side: Amazon S3 supports server-side encryption of user data. Server-side encryption is transparent to the end user. AWS generates a unique encryption key for each object, and then encrypts the object using AES-256 (see the sketch after this list).
5- Encryption - client side: With client-side encryption you create and manage your own encryption keys. Keys you create are not exported to AWS in clear text. Your applications encrypt data before submitting it to Amazon S3, and decrypt data after receiving it from Amazon S3. Data is stored in an encrypted form, with keys and algorithms only known to you.
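
As a concrete illustration of points 2 and 4, here is a minimal boto3 sketch that enables versioning on a bucket and uploads an object with AES-256 server-side encryption. The bucket name and object key are placeholders.

```python
import boto3

s3 = boto3.client("s3")
bucket = "example-bucket"  # placeholder bucket name

# 2- Versioning: disabled by default, so enable it explicitly.
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# 4- Server-side encryption: ask S3 to encrypt the object with AES-256.
s3.put_object(
    Bucket=bucket,
    Key="reports/q1.csv",
    Body=b"confidential,data\n",
    ServerSideEncryption="AES256",
)
```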

What is the AWS service that provides a virtual network dedicated to your AWS account?

Amazon Virtual Private Cloud (Amazon VPC) allows you to carve out a portion of the AWS Cloud that is dedicated to your AWS account. Amazon VPC enables you to launch AWS resources into a virtual network that you've defined. This virtual network closely resembles a traditional network that you'd operate in your own data center, with the benefits of using the scalable infrastructure of AWS.

What is the primary storage service used by Amazon RDS database instances?

DB instances for Amazon RDS for MySQL, MariaDB, PostgreSQL, Oracle, and Microsoft SQL Server use Amazon Elastic Block Store (Amazon EBS) volumes for database and log storage. EBS volumes are performant for your most demanding workloads, including mission-critical applications such as SAP, Oracle, and Microsoft products. Amazon EBS scales with your performance needs, whether you are supporting millions of gaming customers or billions of e-commerce transactions. A broad range of workloads, such as relational databases (including Amazon RDS databases) and non-relational databases (including Cassandra and MongoDB), enterprise applications, containerized applications, big data analytics engines, file systems, and media workflows are widely deployed on Amazon EBS.

Payload

Data contained within a message.

AWS CodeCommit vs. AWS CodeBuild vs. AWS CodeDeploy vs. AWS CodePipeline

- AWS CodeCommit is used to store and version source code.
- AWS CodeBuild is used to compile and test source code, helping you find and fix bugs early in the development process when they are easy to fix.
- AWS CodeDeploy is used to deploy application code to a variety of compute services such as Amazon EC2, AWS Fargate, AWS Lambda, and your on-premises servers.
- AWS CodePipeline is the glue that ties these steps together. AWS CodePipeline enables you to automate all phases of your release process, from committing the code into AWS CodeCommit all the way to deploying it with AWS CodeDeploy. You can also integrate your own custom tools into any stage of the release process to form an end-to-end continuous delivery solution. This enables you to deliver new features and updates rapidly and reliably.

AWS Marketplace categories

- Business Applications
- Data & Analytics
- DevOps
- Infrastructure Software
- Internet of Things (IoT)
- Machine Learning
- Migration
- Security

Hybrid Deployment

- Connect cloud-based resources to on-premises infrastructure.
- Integrate cloud-based resources with legacy IT applications.
- With a hybrid deployment, the company would be able to keep the legacy applications on premises while benefiting from the data and analytics services that run in the cloud.

Amazon S3 Standard

- Designed for frequently accessed data
- Stores data in a minimum of three Availability Zones
- Amazon S3 Standard provides high availability for objects. This makes it a good choice for a wide range of use cases, such as websites, content distribution, and data analytics. Amazon S3 Standard has a higher cost than other storage classes intended for infrequently accessed data and archival storage.

Amazon S3 Intelligent-Tiering (S3 Intelligent-Tiering)

- Ideal for data with unknown or changing access patterns
- Requires a small monthly monitoring and automation fee per object
- In the Amazon S3 Intelligent-Tiering storage class, Amazon S3 monitors objects' access patterns. If you haven't accessed an object for 30 consecutive days, Amazon S3 automatically moves it to the infrequent access tier, Amazon S3 Standard-IA.
- If you access an object in the infrequent access tier, Amazon S3 automatically moves it to the frequent access tier, Amazon S3 Standard.

Amazon S3 Standard-Infrequent Access (S3 Standard-IA)

- Ideal for infrequently accessed data
- Similar to Amazon S3 Standard but has a lower storage price and higher retrieval price
- Amazon S3 Standard-IA is ideal for data that is accessed infrequently but requires high availability when needed. Both Amazon S3 Standard and Amazon S3 Standard-IA store data in a minimum of three Availability Zones. Amazon S3 Standard-IA provides the same level of availability as Amazon S3 Standard but with a lower storage price and a higher retrieval price.

Amazon Elastic Compute Cloud (Amazon EC2)

- Provides secure, resizable compute capacity in the cloud as Amazon EC2 instances.
1. Launch: First, you launch an instance. Begin by selecting a template with basic configurations for your instance. These configurations include the operating system, application server, or applications. You also select the instance type, which is the specific hardware configuration of your instance (see the sketch after this list).
2. Connect: You can connect to the instance in several ways. Your programs and applications have multiple different methods to connect directly to the instance and exchange data. Users can also connect to the instance by logging in and accessing the computer desktop.
3. Use: You can run commands to install software, add storage, copy and organize files, and more.
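
A minimal sketch of the launch step using the Python SDK (boto3); the AMI ID and key pair name are placeholders you would replace with real values.

```python
import boto3

ec2 = boto3.client("ec2")

# Launch one instance from a template: an AMI (OS + software) plus an
# instance type (the hardware configuration).
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t3.micro",
    KeyName="my-key-pair",            # placeholder key pair for access
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```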

Benefits of cloud computing

- Trade upfront expense for variable expense
- Stop spending money to run and maintain data centers
- Stop guessing capacity
- Benefit from massive economies of scale
- Increase speed and agility
- Go global in minutes: Applications can be deployed in multiple Regions around the world with a few clicks. This means that you can provide lower latency and a better experience for your customers at a minimal cost.

Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)

- Stores data in a single Availability Zone
- Has a lower storage price than Amazon S3 Standard-IA
- Compared to Amazon S3 Standard and Amazon S3 Standard-IA, which store data in a minimum of three Availability Zones, Amazon S3 One Zone-IA stores data in a single Availability Zone. This makes it a good storage class to consider if the following conditions apply:
  + You want to save costs on storage.
  + You can easily reproduce your data in the event of an Availability Zone failure.

On-premises Deployment

- Also known as a private cloud deployment.
- Deploy resources by using virtualization and resource management tools.
- Increase resource utilization by using application management and virtualization technologies.

Selecting a Region

- Compliance with data governance and legal requirements
- Proximity to your customers
- Available services within a Region
- Pricing

The evolution of Cloud Hosting

1) Dedicated server
2) Virtual private server (VPS)
3) Shared Hosting
4) Cloud Hosting

S3 pricing is based on four factors

1) Total amount of data (in GB) stored on S3
2) Storage class (S3 Standard, S3 Intelligent-Tiering, S3 Standard-Infrequent Access, S3 One Zone-IA, S3 Glacier, or S3 Glacier Deep Archive)
3) Amount of data transferred out of AWS from S3
4) Number of requests to S3

According to the AWS Well-Architected Framework, which of the following are design principles for operational excellence in the AWS cloud?

1- Perform operations as code: In the cloud, you can apply the same engineering discipline that you use for application code to your entire environment. You can define your entire workload (applications, infrastructure) as code and update it with code. You can implement your operations procedures as code and automate their execution by triggering them in response to events. By performing operations as code, you limit human error and enable consistent responses to events (see the sketch after this list).
2- Make frequent, small, reversible changes: Design workloads to allow components to be updated regularly. Make changes in small increments that can be reversed if they fail (without affecting customers when possible).
3- Refine operations procedures frequently: As you use operations procedures, look for opportunities to improve them. As you evolve your workload, evolve your procedures appropriately. Set up regular game days to review and validate that all procedures are effective and that teams are familiar with them.
4- Anticipate failure: Perform "pre-mortem" exercises to identify potential sources of failure so that they can be removed or mitigated. Test your failure scenarios and validate your understanding of their impact. Test your response procedures to ensure that they are effective, and that teams are familiar with their execution. Set up regular game days to test workloads and team responses to simulated events.
5- Learn from all operational failures: Drive improvement through lessons learned from all operational events and failures. Share what is learned across teams and through the entire organization.
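
To make the first principle concrete, here is a hedged sketch of an operations procedure expressed as code: snapshotting every EBS volume carrying a (hypothetical) Backup=true tag. The tag convention is an assumption for illustration; a script like this could be triggered automatically in response to a scheduled event rather than run by hand.

```python
import boto3

ec2 = boto3.client("ec2")

# Find volumes opted in to backups via an assumed Backup=true tag.
volumes = ec2.describe_volumes(
    Filters=[{"Name": "tag:Backup", "Values": ["true"]}]
)["Volumes"]

# Snapshot each one -- the same procedure every time, with no manual steps.
for vol in volumes:
    snap = ec2.create_snapshot(
        VolumeId=vol["VolumeId"],
        Description=f"Automated backup of {vol['VolumeId']}",
    )
    print(snap["SnapshotId"])
```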

What are the capabilities of AWS X-Ray?

1- Review request behavior: AWS X-Ray traces user requests as they travel through your entire application. It aggregates the data generated by the individual services and resources that make up your application, providing you an end-to-end view of how your application is performing.
2- Discover application issues: With AWS X-Ray, you can glean insights into how your application is performing and discover root causes. With X-Ray's tracing features, you can follow request paths to pinpoint where in your application and what is causing performance issues.
3- Improve application performance: AWS X-Ray helps you identify performance bottlenecks. X-Ray's service maps let you see relationships between services and resources in your application in real time. You can easily detect where high latencies are occurring, visualize node and edge latency distribution for services, and then drill down into the specific services and paths impacting application performance.

Amazon EC2 Instance Types

1. General Purpose Instances
2. Compute Optimized Instances
3. Memory Optimized Instances
4. Accelerated Computing Instances
5. Storage Optimized Instances

How AWS pricing works

1. Pay for what you use: For each service, you pay for exactly the amount of resources that you actually use, without requiring long-term contracts or complex licensing.
2. Pay less when you reserve: Some services offer reservation options that provide a significant discount compared to On-Demand Instance pricing. For example, suppose that your company is using Amazon EC2 instances for a workload that needs to run continuously. You might choose to run this workload on Amazon EC2 Instance Savings Plans, because the plan allows you to save up to 72% over the equivalent On-Demand Instance capacity.
3. Pay less with volume-based discounts when you use more: Some services offer tiered pricing, so the per-unit cost is incrementally lower with increased usage. For example, the more Amazon S3 storage space you use, the less you pay for it per GB.
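
The tiered-pricing idea can be made concrete with a small calculation. The per-GB rates below are illustrative placeholders, not current AWS prices; the point is only that each additional tier is billed at a lower unit rate.

```python
# Illustrative S3-style tiers (placeholder rates, not real AWS prices):
# first 50 TB per month at one rate, the next 450 TB cheaper, the rest cheapest.
TIERS = [(50_000, 0.023), (450_000, 0.022), (float("inf"), 0.021)]

def monthly_storage_cost(gb: float) -> float:
    cost, remaining = 0.0, gb
    for tier_size, rate_per_gb in TIERS:
        billed = min(remaining, tier_size)
        cost += billed * rate_per_gb
        remaining -= billed
        if remaining <= 0:
            break
    return cost

# 60,000 GB: the first 50,000 GB bill at the top rate, the rest at a lower rate.
print(monthly_storage_cost(60_000))
```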

Denial-of-service attacks

A denial-of-service (DoS) attack is a deliberate attempt to make a website or application unavailable to users. For example, an attacker might flood a website or application with excessive network traffic until the targeted website or application becomes overloaded and is no longer able to respond. If the website or application becomes unavailable, this denies service to users who are trying to make legitimate requests.

UDP flood attack

A denial-of-service attack based on sending a huge number of UDP packets.

AWS Glue

A fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics.

Subnet

A logical subset of a larger network, created by an administrator to improve network performance or to provide security. A subnet is a section of a VPC in which you can group resources based on security or operational needs. Subnets can be public or private.
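
A hedged boto3 sketch of carving a VPC into a public and a private subnet; the CIDR blocks and Availability Zone are illustrative choices, not required values.

```python
import boto3

ec2 = boto3.client("ec2")

# Create a VPC, then divide it into two subnets (illustrative CIDR blocks).
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

public = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.0.1.0/24", AvailabilityZone="us-east-1a"
)["Subnet"]["SubnetId"]
private = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.0.2.0/24", AvailabilityZone="us-east-1a"
)["Subnet"]["SubnetId"]

# Public subnets typically auto-assign public IPs to launched instances.
ec2.modify_subnet_attribute(
    SubnetId=public, MapPublicIpOnLaunch={"Value": True}
)
```

Note that a subnet is only truly public once its route table points at an internet gateway; that step is omitted here for brevity.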

Server

In AWS, a server can be a service such as Amazon Elastic Compute Cloud (Amazon EC2), which provides virtual servers.

A company has created a solution that helps AWS customers improve their architectures on AWS. Which AWS program may support this company?

APN Consulting Partners are professional services firms that help customers design, architect, build, migrate, and manage their workloads and applications on AWS. Consulting Partners include System Integrators, Strategic Consultancies, Agencies, Managed Service Providers, and Value-Added Resellers. AWS supports the APN Consulting Partners by providing a wide range of resources and training to support their customers.

AWS Artifact Reports

AWS Artifact Reports provide compliance reports from third-party auditors. These auditors have tested and verified that AWS is compliant with a variety of global, regional, and industry-specific security standards and regulations. AWS Artifact Reports remain up to date with the latest reports released. You can provide the AWS audit artifacts to your auditors or regulators as evidence of AWS security controls.

Which of the following services will help businesses ensure compliance in AWS?

AWS CloudTrail is designed to log all actions taken in your AWS account. This provides a great resource for governance, compliance, and risk auditing.

What is AWS Cloud?

AWS provides on-demand delivery of technology services through the Internet with pay-as-you-go pricing. This is known as cloud computing. The AWS Cloud encompasses a broad set of global cloud-based products that includes compute, storage, databases, analytics, networking, mobile, developer tools, management tools, IoT, security, and enterprise applications.

How does AWS notify customers about security and privacy events pertaining to AWS services?

AWS publishes security bulletins about the latest security and privacy events with AWS services on the Security Bulletins page.

What are the change management tools that helps AWS customers audit and monitor all resource changes in their AWS environment? (Choose TWO)

AWS Config and AWS CloudTrail are change management tools that help AWS customers audit and monitor all resource and configuration changes in their AWS environment. Customers can use AWS Config to answer "What did my AWS resource look like?" at a point in time. Customers can use AWS CloudTrail to answer "Who made an API call to modify this resource?" For example, a customer can use the AWS Management Console for AWS Config to detect that the security group "Production-DB" was incorrectly configured in the past. Using the integrated AWS CloudTrail information, they can pinpoint which user misconfigured the "Production-DB" security group. In brief, AWS Config provides information about the changes made to a resource, and AWS CloudTrail provides information about who made those changes. These capabilities enable customers to discover any misconfigurations, fix them, and protect their workloads from failures.
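
For the "who made the change" half of that story, a minimal boto3 sketch that queries CloudTrail for recent events referencing the Production-DB resource from the example above:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Look up recent management events that touched the Production-DB resource.
events = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "ResourceName", "AttributeValue": "Production-DB"}
    ],
    MaxResults=10,
)

for event in events["Events"]:
    # Each event records who acted, which API they called, and when.
    print(event["Username"], event["EventName"], event["EventTime"])
```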

A company is planning to use Amazon S3 and Amazon CloudFront to distribute its video courses globally. What tool can the company use to estimate the costs of these services?

AWS Cost Explorer is used to explore and analyze your historical spend and usage. AWS Cost Explorer allows you to have visibility into your consumption patterns, such as mapping the most commonly used services and identifying unexpected anomalies or expenses. AWS Cost Explorer can also be used to estimate AWS services costs, but it calculates these estimates based on your previous AWS consumption (meaning AWS Cost Explorer is suitable for existing projects only). In the above scenario, AWS Pricing Calculator is the right choice because it can be used to estimate the costs of both existing and new projects (in our case, it is a new project). AWS Pricing Calculator enables you to estimate the monthly cost of AWS services for your use case based on your expected usage (not based on previous consumption as is the case with AWS Cost Explorer). For example, if you expect to use 500 GB of S3 Standard storage, you can simply enter this value in the appropriate field and the calculator provides an estimate of your monthly bill.

AWS has created a large number of Edge Locations as part of its Global Infrastructure. Which of the following is NOT a benefit of using Edge Locations?

AWS Edge Locations are not used to distribute traffic. Edge Locations are used in conjunction with the CloudFront service to cache common responses and deliver content to end-users with low latency. With Amazon CloudFront, your users can also benefit from accelerated content uploads. As the data arrives at an edge location, data is routed to AWS storage services over an optimized network path. The AWS service that is used to distribute load is the AWS Elastic Load Balancing (ELB) service.

AWS Elastic Disaster Recovery

AWS Elastic Disaster Recovery (AWS DRS) minimizes downtime and data loss with fast, reliable recovery of on-premises and cloud-based applications using affordable storage, minimal compute, and point-in-time recovery.

A company is introducing a new product to their customers, and is expecting a surge in traffic to their web application. As part of their Enterprise Support plan, which of the following provides the company with architectural and scaling guidance?

AWS Infrastructure Event Management is a short-term engagement with AWS Support, included in the Enterprise-level Support product offering, and available for additional purchase for Business-level Support subscribers. AWS Infrastructure Event Management partners with your technical and project resources to gain a deep understanding of your use case and provide architectural and scaling guidance for an event. Common use-case examples for AWS Event Management include advertising launches, new product launches, and infrastructure migrations to AWS.

Which of the following are true regarding the languages that are supported on AWS Lambda?

AWS Lambda natively supports Java, Go, PowerShell, Node.js, C#, Python, and Ruby code, and provides a Runtime API which allows customers to use any additional programming languages to author their functions.
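
Since Python is among the natively supported runtimes, a complete Lambda function can be as small as the sketch below: a handler that receives an event payload and returns a response. The "name" field is a hypothetical input used for illustration.

```python
import json

def lambda_handler(event, context):
    # 'event' carries the invocation payload; 'context' carries runtime info.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```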

Ways to interact with AWS services

- AWS Management Console
- AWS Command Line Interface
- Software Development Kits

Which AWS Service provides integration with Chef to automate the configuration of EC2 instances?

AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers. OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed, and managed across your Amazon EC2 instances or on-premises compute environments.

Hundreds of thousands of DDoS attacks are recorded every month worldwide. What service does AWS provide to help protect AWS Customers from these attacks?

AWS Shield and AWS WAF

AWS recommends some practices to help organizations avoid unexpected charges on their bill.

AWS will charge the user once the AWS resource is allocated (even if it is not used). Thus, it is advised that once the user's work is completed they should:
1- Delete all Elastic Load Balancers.
2- Terminate all unused EC2 instances.
3- Delete the attached EBS volumes that they don't need.
4- Release any unused Elastic IPs.
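
As a sketch of step 4, the following boto3 snippet finds Elastic IP addresses that are not associated with any instance and releases them. Treat it as illustrative; you would want to confirm each address really is unused before releasing it.

```python
import boto3

ec2 = boto3.client("ec2")

# An Elastic IP with no AssociationId is allocated but not attached to
# anything -- and therefore still billed.
for address in ec2.describe_addresses()["Addresses"]:
    if "AssociationId" not in address:
        print("Releasing unused Elastic IP:", address["PublicIp"])
        ec2.release_address(AllocationId=address["AllocationId"])
```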

What does AWS Snowball provide?

AWS Snowball is a petabyte-scale data transport solution that uses secure appliances to transfer large amounts of data into and out of the AWS cloud. Using Snowball addresses common challenges with large-scale data transfers, including high network costs, long transfer times, and security concerns. AWS Customers use Snowball to migrate analytics data, genomics data, video libraries, image repositories, and backups. Transferring data with Snowball is simple, fast, secure, and can cost as little as one-fifth the cost of using high-speed internet.

Additionally, with AWS Snowball, you can access the compute power of the AWS Cloud locally and cost-effectively in places where connecting to the internet might not be an option. AWS Snowball is a perfect choice if you need to run computing in rugged, austere, mobile, or disconnected (or intermittently connected) environments.

With AWS Snowball, you have the choice of two devices: Snowball Edge Compute Optimized, with more computing capabilities, suited for higher performance workloads; or Snowball Edge Storage Optimized, with more storage, which is suited for large-scale data migrations and capacity-oriented workloads. Snowball Edge Storage Optimized is the optimal choice if you need to securely and quickly transfer dozens of terabytes to petabytes of data to AWS. It is also a good fit for running general purpose analysis such as IoT data aggregation and transformation. Snowball Edge Compute Optimized is the optimal choice if you need powerful compute and high-speed storage for data processing. Examples include high-resolution video processing, advanced IoT data analytics, and real-time optimization of machine learning models.

AWS WAF

AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources.

A company is developing a new application using a microservices framework. The new application is having performance and latency issues. Which AWS Service should be used to troubleshoot these issues?

AWS X-Ray helps developers analyze and debug distributed applications in production or under development, such as those built using microservice architecture. With X-Ray, you can understand how your application and its underlying services are performing so you can identify and troubleshoot the root cause of performance issues and errors. X-Ray provides an end-to-end view of requests as they travel through your application, and shows a map of your application's underlying components. You can use X-Ray to analyze both applications in development and in production, from simple three-tier applications to complex microservices applications consisting of thousands of services.

According to the AWS Acceptable Use Policy, which of the following statements is true regarding penetration testing of EC2 instances?

AWS customers are welcome to carry out security assessments and penetration tests against their AWS infrastructure without prior approval for 8 services:
1- Amazon EC2 instances, NAT Gateways, and Elastic Load Balancers.
2- Amazon RDS.
3- Amazon CloudFront.
4- Amazon Aurora.
5- Amazon API Gateways.
6- AWS Lambda and Lambda Edge functions.
7- Amazon Lightsail resources.
8- Amazon Elastic Beanstalk environments.

AWS: Security of the Cloud

AWS is responsible for security of the cloud. AWS operates, manages, and controls the components at all layers of infrastructure. This includes areas such as the host operating system, the virtualization layer, and even the physical security of the data centers from which services operate. AWS is responsible for protecting the global infrastructure that runs all of the services offered in the AWS Cloud. This infrastructure includes AWS Regions, Availability Zones, and edge locations. AWS manages the security of the cloud, specifically the physical infrastructure that hosts your resources, which includes:
- Physical security of data centers
- Hardware and software infrastructure
- Network infrastructure
- Virtualization infrastructure

When running a workload in AWS, the customer is NOT responsible for

AWS is responsible for the infrastructure security and all data center operations such as racking, stacking, and powering servers, so customers can focus on revenue generating activities rather than on IT infrastructure.

artificial intelligence (AI)

AWS offers a variety of services powered by artificial intelligence (AI). For example, you can perform the following tasks:
- Convert speech to text with Amazon Transcribe.
- Discover patterns in text with Amazon Comprehend.
- Identify potentially fraudulent online activities with Amazon Fraud Detector.
- Build voice and text chatbots with Amazon Lex.

AWS Support

AWS offers five different Support plans to help you troubleshoot issues, lower costs, and efficiently use AWS services. You can choose from the following Support plans to meet your company's needs:
- Basic
- Developer
- Business
- Enterprise On-Ramp
- Enterprise

IDE and IDE Toolkits

AWS offers support for popular Integrated Development Environments (IDEs) and IDE toolkits so you can author, debug, and deploy your code on AWS from within your preferred environment.

Software Development Kits (SDKs)

Allow you to interact with the AWS API programmatically. SDKs come in handy when you want to integrate your application source code with AWS services. For example, you might use the Python SDK to write code to store files in Amazon Simple Storage Service (Amazon S3) instead of on your local hard drive.
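
For example, storing a local file in S3 with the Python SDK (boto3) takes only a few lines. This is a minimal sketch; the file name, bucket, and key are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Upload a local file to S3 instead of keeping it on the local hard drive.
s3.upload_file(
    Filename="report.pdf",        # local path (placeholder)
    Bucket="example-bucket",      # target bucket (placeholder)
    Key="documents/report.pdf",   # object key inside the bucket
)
```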

Amazon DocumentDB

Amazon DocumentDB is a document database service that supports MongoDB workloads. (MongoDB is a document database program.)

Amazon DynamoDB Accelerator (DAX)

Amazon DynamoDB Accelerator (DAX) is an in-memory cache for DynamoDB. It helps improve response times from single-digit milliseconds to microseconds.

Amazon EC2 Auto Scaling

Amazon EC2 Auto Scaling enables you to automatically add or remove Amazon EC2 instances in response to changing application demand. By automatically scaling your instances in and out as needed, you are able to maintain a greater sense of application availability. Within Amazon EC2 Auto Scaling, you can use two approaches: dynamic scaling and predictive scaling. Dynamic scaling responds to changing demand. Predictive scaling automatically schedules the right number of Amazon EC2 instances based on predicted demand.
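
A hedged example of dynamic scaling: a target-tracking policy that asks Amazon EC2 Auto Scaling to add or remove instances so that average CPU utilization stays near 50%. The Auto Scaling group name is a placeholder and is assumed to already exist.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Target tracking is a form of dynamic scaling: Auto Scaling adds or
# removes instances to keep the chosen metric near the target value.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",  # placeholder, assumed to exist
    PolicyName="keep-cpu-at-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```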

Which of the following activities may help reduce your AWS monthly costs? (Choose TWO)

Amazon EC2 Auto Scaling monitors your applications and automatically adjusts capacity (up or down) to maintain steady, predictable performance at the lowest possible cost. When demand drops, Amazon EC2 Auto Scaling will automatically remove any excess capacity so you avoid overspending. When demand increases, Amazon EC2 Auto Scaling will automatically add capacity to maintain performance.

For Amazon S3 and Amazon EFS, you can create a lifecycle policy to automatically move infrequently accessed data to less expensive storage tiers. In order to reduce your Amazon S3 costs, you should create a lifecycle policy to automatically move old (or infrequently accessed) files to less expensive storage tiers such as Amazon Glacier, or to automatically delete them after a specified duration. Similarly, you can create an Amazon EFS lifecycle policy to automatically move less frequently accessed data to less expensive storage tiers such as Amazon EFS Standard-Infrequent Access (EFS Standard-IA) and Amazon EFS One Zone-Infrequent Access (EFS One Zone-IA). Amazon EFS Infrequent Access storage classes provide price/performance that is cost-optimized for files not accessed every day, with storage prices up to 92% lower compared to Amazon EFS Standard (EFS Standard) and Amazon EFS One Zone (EFS One Zone) storage classes respectively.
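
A sketch of such an S3 lifecycle policy in boto3; the bucket name, prefix, and day thresholds are illustrative choices.

```python
import boto3

s3 = boto3.client("s3")

# Move objects under logs/ to cheaper tiers as they age, then expire them.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-bucket",  # placeholder
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-old-logs",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```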

Amazon ElastiCache

Amazon ElastiCache is a service that adds caching layers on top of your databases to help improve the read times of common requests. It supports two types of data stores: Redis and Memcached.

Which of the following services allows you to run containerized applications on a cluster of EC2 instances? (Choose TWO)

Amazon Elastic Container Service (Amazon ECS) is a highly scalable, high-performance container orchestration service that supports Docker containers and allows you to easily run and scale containerized applications on AWS. Amazon ECS eliminates the need for you to install and operate your own container orchestration software, manage and scale a cluster of virtual machines, or schedule containers on those virtual machines. Amazon Elastic Kubernetes Service (Amazon EKS) is a managed service that allows you to use Kubernetes to run and scale containerized applications in the cloud or on-premises. Kubernetes is an open-source container orchestration system that allows you to deploy and manage containerized applications at scale. AWS handles provisioning, scaling, and managing the Kubernetes instances in a highly available and secure configuration. This removes a significant operational burden and allows you to focus on building applications instead of managing AWS infrastructure.

A company has moved to AWS recently. Which of the following AWS Services will help ensure that they have the proper security settings? (Choose TWO)

Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. Amazon Inspector automatically assesses applications for vulnerabilities or deviations from best practices. After performing an assessment, Amazon Inspector produces a detailed list of security findings prioritized by level of severity. These findings can be reviewed directly or as part of a detailed assessment report which is available via the Amazon Inspector console or API. To help get started quickly, Amazon Inspector includes a knowledge base of hundreds of rules mapped to common security best practices and vulnerability definitions. Examples of built-in rules include checking for remote root login being enabled, or vulnerable software versions installed. These rules are regularly updated by AWS security researchers. AWS Trusted Advisor offers a rich set of best practice checks and recommendations across five categories: cost optimization; security; fault tolerance; performance; and service limits. Like your customized cloud security expert, AWS Trusted Advisor analyzes your AWS environment and provides security recommendations to protect your AWS environment. The service improves the security of your applications by closing gaps, examining permissions, and enabling various AWS security features.

Amazon Managed Blockchain

Amazon Managed Blockchain is a service that you can use to create and manage blockchain networks with open-source frameworks. Blockchain is a distributed ledger system that lets multiple parties run transactions and share data without a central authority.

Amazon Neptune

Amazon Neptune is a graph database service. You can use Amazon Neptune to build and run applications that work with highly connected datasets, such as recommendation engines, fraud detection, and knowledge graphs.

A customer is planning to migrate their Microsoft SQL Server databases to AWS. Which AWS Services can the customer use to run their Microsoft SQL Server database on AWS?

Amazon Web Services offers the flexibility to run Microsoft SQL Server as either a self-managed component inside of EC2, or as a managed service via Amazon RDS. Using SQL Server on Amazon EC2 gives customers complete control over the database, just like when it's installed on-premises. Amazon RDS is a fully managed service where AWS manages the maintenance, backups, and patching.

Amazon EBS Snapshots

An EBS snapshot is an incremental backup. This means that the first backup taken of a volume copies all the data. For subsequent backups, only the blocks of data that have changed since the most recent snapshot are saved. Incremental backups are different from full backups, in which all the data in a storage volume copies each time a backup occurs. The full backup includes data that has not changed since the most recent backup.

IAM groups

An IAM group is a collection of IAM users. When you assign an IAM policy to a group, all users in the group are granted permissions specified by the policy.

IAM policies

An IAM policy is a document that allows or denies permissions to AWS services and resources. IAM policies enable you to customize users' levels of access to resources. For example, you can allow users to access all of the Amazon S3 buckets within your AWS account, or only a specific bucket.
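
A minimal sketch of that bucket-specific case: an IAM policy document allowing read-only access to a single hypothetical bucket, created via boto3.

```python
import json
import boto3

iam = boto3.client("iam")

# Allow listing and reading objects in one specific bucket only.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket", "s3:GetObject"],
            "Resource": [
                "arn:aws:s3:::example-bucket",    # placeholder bucket
                "arn:aws:s3:::example-bucket/*",
            ],
        }
    ],
}

iam.create_policy(
    PolicyName="ReadOnlyExampleBucket",
    PolicyDocument=json.dumps(policy_document),
)
```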

What is the difference between an AWS Organizations service control policy (SCP) and an IAM policy?

An IAM policy provides granular control over what users and roles in individual accounts can do. AWS Organizations expands that control to the account level by giving you control over what users and roles in an account or a group of accounts can do. The resulting permissions are the logical intersection of what is allowed by AWS Organizations at the account level and the permissions that are explicitly granted by IAM at the user or role level within that account. In other words, the user can access only what is allowed by both the AWS Organizations policies and IAM policies. If either blocks an operation, the user can't access that operation. For example, if an SCP applied to an account states that the only actions allowed are Amazon EC2 actions, and the permissions on a principal (IAM user or role) in the same AWS account allow both EC2 actions and Amazon S3 actions, the principal is able to access only the EC2 actions.

IAM roles

An IAM role is an identity that you can assume to gain temporary access to permissions. Before an IAM user, application, or service can assume an IAM role, they must be granted permissions to switch to the role. When someone assumes an IAM role, they abandon all previous permissions that they had under a previous role and assume the permissions of the new role. IAM roles are ideal for situations in which access to services or resources needs to be granted temporarily, instead of long-term.
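
Programmatically, assuming a role is an AWS STS call that returns temporary credentials. A hedged sketch follows; the role ARN is a placeholder, and the caller must already have permission to assume the role.

```python
import boto3

sts = boto3.client("sts")

# Exchange the caller's identity for temporary credentials tied to the role.
assumed = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/ExampleRole",  # placeholder
    RoleSessionName="temporary-access",
)
creds = assumed["Credentials"]  # these expire after a set duration

# Use the temporary credentials to act with the role's permissions.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```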

IAM users

An IAM user is an identity that you create in AWS. It represents the person or application that interacts with AWS services and resources. It consists of a name and credentials. By default, when you create a new IAM user in AWS, it has no permissions associated with it. To allow the IAM user to perform specific actions in AWS, such as launching an Amazon EC2 instance or creating an Amazon S3 bucket, you must grant the IAM user the necessary permissions.

AWS CLI

An open source tool that enables you to create and configure AWS services using commands in your command-line shell. One benefit of the CLI is that you can create single commands to create multiple AWS resources, which could help reduce the chance of human error when selecting and configuring resources. With the CLI, you need to learn the proper syntax for forming commands, but as you script these commands, you make them repeatable. This should save you time in the long run.

monolithic application

Applications are made of multiple components. The components communicate with each other to transmit data, fulfill requests, and keep the application running. Suppose that you have an application with tightly coupled components. These components might include databases, servers, the user interface, business logic, and so on. This type of architecture can be considered a monolithic application. In this approach to application architecture, if a single component fails, other components fail, and possibly the entire application fails.

Amazon ElastiCache for Redis

A blazing-fast in-memory data store that provides sub-millisecond latency to power internet-scale, real-time applications. It powers the most demanding real-time applications in Gaming, Ad-Tech, E-Commerce, Healthcare, Financial Services, and IoT.

Instance stores

Block-level storage volumes behave like physical hard drives. An instance store provides temporary block-level storage for an Amazon EC2 instance. An instance store is disk storage that is physically attached to the host computer for an EC2 instance, and therefore has the same lifespan as the instance. When the instance is terminated, you lose any data in the instance store. Therefore, AWS recommends instance stores for use cases that involve temporary data that you do not need in the long term.

AWS Amplify

Build and deploy mobile and web applications

Benefit from massive economies of scale

By using cloud computing, you can achieve a lower variable cost than you can get on your own. Because usage from hundreds of thousands of customers is aggregated in the cloud, providers such as AWS can achieve higher economies of scale, which translates into lower pay-as-you-go prices.

Technology Overview

CSPs that offer IaaS will always have four core cloud service offerings: Compute, Storage, Database, and Networking and Content Delivery.

Stop spending money running and maintaining data centers

Running your own data centers often requires you to spend more money and time managing infrastructure and servers. A benefit of cloud computing is the ability to focus less on these tasks and more on your applications and customers.

Deployment models for cloud computing

- Cloud-based deployment
- On-premises deployment
- Hybrid deployment

Customers: Security in the cloud

Customers are responsible for the security of everything that they create and put in the AWS Cloud. When using AWS services, you, the customer, maintain complete control over your content. You are responsible for managing security requirements for your content, including which content you choose to store on AWS, which AWS services you use, and who has access to that content. You also control how access rights are granted, managed, and revoked. The security steps that you take will depend on factors such as the services that you use, the complexity of your systems, and your company's specific operational and security needs. Steps include selecting, configuring, and patching the operating systems that will run on Amazon EC2 instances, configuring security groups, and managing user accounts.

Developer Support

Customers in the Developer Support plan have access to features such as:
- Best practice guidance
- Client-side diagnostic tools
- Building-block architecture support, which consists of guidance for how to use AWS offerings, features, and services together

A user has opened a "Production System Down" support case to get help from AWS Support after a production system disruption. What is the expected response time for this type of support case?

Customers with AWS Business, Enterprise On-Ramp, or Enterprise support plans can open a "Production System Down" support case. The response time for this type of support case is one hour. Similarly, the response time for the "Business-critical system down" support case is 15 minutes. But, AWS customers must have an Enterprise support plan to be able to open this support case.

Business Support

Customers with a Business Support plan have access to additional features, including:
- Use-case guidance to identify AWS offerings, features, and services that can best support your specific needs
- All AWS Trusted Advisor checks
- Limited support for third-party software, such as common operating systems and application stack components

Memory optimized instances

Designed to deliver fast performance for workloads that process large datasets in memory. In computing, memory is a temporary storage area. It holds all the data and instructions that a central processing unit (CPU) needs to be able to complete actions. Before a computer program or application is able to run, it is loaded from storage into memory. This preloading process gives the CPU direct access to the computer program. Suppose that you have a workload that requires large amounts of data to be preloaded before running an application. This scenario might be a high-performance database or a workload that involves performing real-time processing of a large amount of unstructured data. In these types of use cases, consider using a memory optimized instance. Memory optimized instances enable you to run workloads with high memory needs and receive great performance.

Amazon Athena

An interactive query service that makes it easy to analyze data in S3 using standard SQL. It is serverless, so there is no infrastructure to manage, and you pay only for the queries you run.

Which of the following will impact the price paid for an EC2 instance? (Choose TWO)

EC2 instance pricing varies depending on many variables:
- The buying option (On-demand, Savings Plans, Reserved, Spot, Dedicated)
- Selected instance type
- Selected Region
- Number of instances
- Load balancing
- Allocated Elastic IP Addresses

Load balancing: The number of hours the Elastic Load Balancer runs and the amount of data it processes contribute to the EC2 monthly cost.

Instance type: Amazon EC2 provides a wide selection of instance types optimized to fit different use cases. Instance types comprise varying combinations of CPU, memory, storage, and networking capacity.

The principle "design for failure and nothing will fail" is very important when designing your AWS Cloud architecture. Which of the following would help adhere to this principle? (Choose TWO)

Each AWS Region is a separate geographic area. Each AWS Region has multiple, isolated locations known as Availability Zones. When designing your AWS Cloud architecture, you should make sure that your system will continue to run even if failures happen. You can achieve this by deploying your AWS resources in multiple Availability zones. Availability zones are isolated from each other; therefore, if one availability zone goes down, the other Availability Zones will still be up and running, and hence your application will be more fault-tolerant. In addition to availability zones, you can build a disaster recovery solution by deploying your AWS resources in other regions. If an entire region goes down, you will still have resources in another region able to continue to provide a solution. Finally, you can use the Elastic Load Balancing service to regularly perform health checks and distribute traffic only to healthy instances.

Which of the following are examples of AWS-Managed Services, where AWS is responsible for the operational and maintenance burdens of running the service?

For managed services such as Amazon Elastic MapReduce (Amazon EMR) and DynamoDB, AWS is responsible for performing all the operations needed to keep the service running. Amazon EMR launches clusters in minutes. You don't need to worry about node provisioning, infrastructure setup, Hadoop configuration, or cluster tuning. Amazon EMR takes care of these tasks so you can focus on analysis. DynamoDB is serverless with no servers to provision, patch, or manage and no software to install, maintain, or operate. DynamoDB automatically scales tables up and down to adjust for capacity and maintain performance. Availability and fault tolerance are built in, eliminating the need to architect your applications for these capabilities. Other managed services include: AWS Lambda, Amazon RDS, Amazon Redshift, Amazon CloudFront, Amazon S3 and several other services. For these managed services, AWS is responsible for most of the configuration and management tasks, but customers are still responsible for managing their data (including encryption options), classifying their assets, and using IAM tools to apply the appropriate permissions.

What should you do if you see resources, which you don't remember creating, in the AWS Management Console?

If you suspect that your account has been compromised, or if you have received a notification from AWS that the account has been compromised, perform the following tasks:
1- Change your AWS root account password and the passwords of all IAM users.
2- Delete or rotate all root and AWS Identity and Access Management (IAM) access keys.
3- Delete any potentially compromised IAM users.
4- Delete any resources on your account you didn't create, such as EC2 instances and AMIs, EBS volumes and snapshots, and IAM users.
5- Respond to any notifications you received from AWS Support through the AWS Support Center.

Resource Groups

If you work with multiple resources in multiple environments, you might find it useful to manage all the resources in each environment as a group rather than move from one AWS service to another for each task. Resource Groups help you do just that. By default, the AWS Management Console is organized by AWS service. But with the Resource Groups tool, you can create a custom console that organizes and consolidates information based on your project and the resources that you use.

AWS Budgets

In AWS Budgets, you can create budgets to plan your service usage, service costs, and instance reservations. The information in AWS Budgets updates three times a day. This helps you to accurately determine how close your usage is to your budgeted amounts or to the AWS Free Tier limits. In AWS Budgets, you can also set custom alerts when your usage exceeds (or is forecasted to exceed) the budgeted amount.

Service Control Policies (SCPs)

In AWS Organizations, you can apply service control policies (SCPs) to the organization root, an individual member account, or an OU. An SCP affects all IAM users, groups, and roles within an account, including the AWS account root user.

Organizational units

In AWS Organizations, you can group accounts into organizational units (OUs) to make it easier to manage accounts with similar business or security requirements. When you apply a policy to an OU, all the accounts in the OU automatically inherit the permissions specified in the policy.

A startup company is operating on limited funds and is extremely concerned about cost overruns. Which of the below options can be used to notify the company when their monthly AWS bill exceeds $2000?

In CloudWatch, you can set up a billing alarm that triggers if your costs exceed a threshold that you set. This CloudWatch alarm can also be configured to trigger an SNS notification to your email address. AWS Budgets is another AWS service that can be used in this scenario. AWS Budgets gives you the ability to set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount. The difference between AWS Budgets and Amazon CloudWatch billing alarms is that Amazon CloudWatch billing alarms alert you only when your actual cost exceeds a certain threshold, while AWS Budgets can be configured to alert you when the actual or forecasted cost exceeds a certain threshold.
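
A sketch of that CloudWatch billing alarm for the $2,000 threshold. The SNS topic ARN is a placeholder; note that billing metrics are published only in the us-east-1 Region and require billing alerts to be enabled on the account first.

```python
import boto3

# Billing metrics live in us-east-1.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="monthly-bill-over-2000-usd",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,            # the billing metric updates a few times a day
    EvaluationPeriods=1,
    Threshold=2000.0,
    ComparisonOperator="GreaterThanThreshold",
    # Placeholder SNS topic that notifies the team when the alarm fires.
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],
)
```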

What are the Amazon RDS features that can be used to improve the availability of your database? (Choose TWO)

In a Multi-AZ deployment, Amazon RDS automatically provisions and maintains a synchronous standby replica in a different Availability Zone. The primary DB instance is synchronously replicated across Availability Zones to a standby replica to provide data redundancy, eliminate I/O freezes, and minimize latency spikes during system backups. Running a DB instance with high availability can enhance availability during planned system maintenance, and help protect your databases against DB instance failure and Availability Zone disruption. Amazon RDS Read Replicas provide enhanced performance and durability for database (DB) instances. This feature makes it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads. You can create one or more replicas of a given source DB Instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput. Read replicas provide a complementary availability mechanism to Amazon RDS Multi-AZ Deployments. You can promote a read replica if the source DB instance fails. You can also replicate DB instances across AWS Regions as part of your disaster recovery strategy. This functionality complements the synchronous replication, automatic failure detection, and failover provided with Multi-AZ deployments.

Increase speed and agility

In a cloud computing environment, new IT resources are only a click away, which means that you reduce the time to make those resources available to your developers from weeks to just minutes. This results in a dramatic increase in agility for the organization, since the cost and time it takes to experiment and develop is significantly lower.

Distributed denial-of-service attacks

In a distributed denial-of-service (DDoS) attack, multiple sources are used to start an attack that aims to make a website or application unavailable. This can come from a group of attackers, or even a single attacker. The single attacker can use multiple infected computers (also known as "bots") to send excessive traffic to a website or application.

microservices

In a microservices approach, application components are loosely coupled. In this case, if a single component fails, the other components continue to work because they are communicating with each other. The loose coupling prevents the entire application from failing. When designing applications on AWS, you can take a microservices approach with services and components that fulfill different functions. Two services facilitate application integration: Amazon Simple Notification Service (Amazon SNS) and Amazon Simple Queue Service (Amazon SQS).

Nonrelational databases

In a nonrelational database, you create tables. A table is a place where you can store and query data. Nonrelational databases are sometimes referred to as "NoSQL databases" because they use structures other than rows and columns to organize data. One type of structural approach for nonrelational databases is key-value pairs. With key-value pairs, data is organized into items (keys), and items have attributes (values). You can think of attributes as being different features of your data. In a key-value database, you can add or remove attributes from items in the table at any time. Additionally, not every item in the table has to have the same attributes. A nonrelational database is a highly scalable and highly available database type that is designed to store unstructured data; it is also called a NoSQL database.
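
A short DynamoDB sketch of that flexibility: two items in the same hypothetical table with different attribute sets.

```python
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Products")  # hypothetical table keyed on product_id

# Items in the same table do not need identical attributes.
table.put_item(Item={"product_id": "1", "name": "Book", "author": "Jane Doe"})
table.put_item(Item={"product_id": "2", "name": "Lamp", "wattage": 60})
```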

relational database

In a relational database, data is stored in a way that relates it to other pieces of data. Relational databases use structured query language (SQL) to store and query data. This approach allows data to be stored in an easily understandable, consistent, and scalable way.

Enterprise Support

In addition to all features included in the Basic, Developer, Business, and Enterprise On-Ramp support plans, customers with Enterprise Support have access to:
- A designated Technical Account Manager to provide proactive guidance and coordinate access to programs and AWS experts
- A Concierge support team for billing and account assistance
- Operations Reviews and tools to monitor health
- Training and Game Days to drive innovation
- Tools to monitor costs and performance through Trusted Advisor and Health API/Dashboard

The Enterprise plan also provides full access to proactive services, which are provided by a designated Technical Account Manager:
- Consultative review and architecture guidance
- Infrastructure Event Management support
- Cost Optimization Workshop and tools
- Support automation workflows
- 15 minutes or less response time for business-critical issues

AWS Enterprise On-Ramp Support

In addition to all the features included in the Basic, Developer, and Business Support plans, customers with an Enterprise On-Ramp Support plan have access to: - A pool of Technical Account Managers to provide proactive guidance and coordinate access to programs and AWS experts - A Cost Optimization workshop (one per year) - A Concierge support team for billing and account assistance - Tools to monitor costs and performance through Trusted Advisor and Health API/Dashboard. The Enterprise On-Ramp Support plan also provides access to a specific set of proactive support services, which are provided by a pool of Technical Account Managers: - Consultative review and architecture guidance (one per year) - Infrastructure Event Management support (one per year) - Support automation workflows - 30 minutes or less response time for business-critical issues

Client

In computing, a client can be a web browser or desktop application that a person interacts with to make requests to computer servers.

File Storage

In file storage, multiple clients (such as users, applications, servers, and so on) can access data that is stored in shared file folders. In this approach, a storage server uses block storage with a local file system to organize files. Clients access data through file paths. Compared to block storage and object storage, file storage is ideal for use cases in which a large number of services and resources need to access the same data at the same time.

Six core perspectives of the Cloud Adoption Framework

In general, the Business, People, and Governance Perspectives focus on business capabilities, whereas the Platform, Security, and Operations Perspectives focus on technical capabilities.

Object Storage

In object storage, each object consists of data, metadata, and a key. The data might be an image, video, text document, or any other type of file. Metadata contains information about what the data is, how it is used, the object size, and so on. An object's key is its unique identifier. Recall that when you modify a file in block storage, only the pieces that are changed are updated. When a file in object storage is modified, the entire object is updated.
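
A short Python (boto3) sketch of the object model, assuming a hypothetical bucket name and key; the Metadata argument carries the user-defined metadata stored alongside the data.

```python
import boto3

s3 = boto3.client("s3")

# An object is data plus metadata plus a key. Bucket and key are placeholders.
with open("team.png", "rb") as f:
    s3.put_object(
        Bucket="example-bucket",
        Key="photos/2024/team.png",           # the object's unique identifier
        Body=f,                               # the data itself
        Metadata={"department": "marketing"}, # user-defined metadata
    )

# Retrieving the object returns the data along with its metadata.
obj = s3.get_object(Bucket="example-bucket", Key="photos/2024/team.png")
print(obj["Metadata"])  # {'department': 'marketing'}
```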

A company has an AWS Enterprise Support plan. They want quick and efficient guidance with their billing and account inquiries. Which of the following should the company use?

Included as part of the Enterprise Support plan, the Support Concierge Team consists of AWS billing and account experts who specialize in working with enterprise accounts. The Concierge team will quickly and efficiently assist you with your billing and account inquiries, and work with you to help implement billing and account best practices so that you can focus on running your business. The Support Concierge service includes: ** 24x7 access to AWS billing and account inquiries. ** Guidance and best practices for billing allocation, reporting, consolidation of accounts, and root-level account security. ** Access to Enterprise account specialists for payment inquiries, training on specific cost reporting, assistance with service limits, and facilitating bulk purchases.

Amazon S3 Glacier Flexible Retrieval

Low-cost storage designed for data archiving. Able to retrieve objects within a few minutes to hours. Amazon S3 Glacier Flexible Retrieval is a low-cost storage class that is ideal for data archiving. For example, you might use this storage class to store archived customer records or older photos and video files.

Amazon S3 Glacier Deep Archive

Lowest-cost object storage class, ideal for archiving. Able to retrieve objects within 12 hours. Amazon S3 Glacier Deep Archive supports long-term retention and digital preservation for data that might be accessed once or twice in a year. This storage class is the lowest-cost storage in the AWS Cloud, with data retrieval times from 12 to 48 hours. All objects from this storage class are replicated and stored across at least three geographically dispersed Availability Zones.

Stateless packet filtering

Network ACLs perform stateless packet filtering. They remember nothing and check packets that cross the subnet border in each direction: inbound and outbound. Consider an instance in a subnet sending a request out to the internet. When the packet response for that request comes back to the subnet, the network ACL does not remember the original request; it checks the packet response against its list of rules to determine whether to allow or deny it.
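
Because the filtering is stateless, response traffic must be allowed by its own rule. A minimal Python (boto3) sketch, with a placeholder ACL ID and illustrative rule numbers and ports:

```python
import boto3

ec2 = boto3.client("ec2")

# Inbound rule: allow HTTPS requests into the subnet.
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",  # placeholder ACL ID
    RuleNumber=100,
    Protocol="6",          # TCP
    RuleAction="allow",
    Egress=False,          # inbound rule
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 443, "To": 443},
)

# Separate outbound rule: allow the responses back out on ephemeral ports.
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",
    RuleNumber=100,
    Protocol="6",
    RuleAction="allow",
    Egress=True,           # outbound rule
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 1024, "To": 65535},
)
```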

According to best practices, which of the below options is best suited for processing a large number of binary files?

One of the core principles of the AWS Well-Architected Framework is that of scaling horizontally. Horizontal scaling means adding several smaller instances when workloads increase, instead of adding additional CPU, memory, or disk capacity to a single instance. In the context of this question, running several EC2 instances in parallel achieves horizontal scalability and is the correct answer. AWS recommends that customers scale resources horizontally to increase aggregate system availability. Replacing a large resource with multiple small resources in parallel will reduce the impact of a single failure on the overall system. For example, if a customer wants to convert a large number of binary files to text files or transcode a large number of video files to another format, it is recommended that they use multiple EC2 instances in parallel instead of using one large instance.
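
A minimal Python (boto3) sketch of the horizontal approach, launching several small identical instances in one call; the AMI ID and instance type are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Scale horizontally: several small identical instances working in parallel,
# rather than one large instance.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t3.small",
    MinCount=4,
    MaxCount=4,
)
```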

Reserved Instances

Provide you with a capacity reservation and offer a discount on the hourly charge for an instance. Available in 1-year or 3-year terms.

AWS PrivateLink

Provides private connectivity between VPCs, AWS services, and on-premises applications, securely on the Amazon network

Refactoring/re-architecting

Refactoring (also known as re-architecting) involves reimagining how an application is architected and developed by using cloud-native features. Refactoring is driven by a strong business need to add features, scale, or performance that would otherwise be difficult to achieve in the application's existing environment.

Rehosting Migration Strategy

Rehosting, also known as "lift-and-shift," involves moving applications without changes. In the scenario of a large legacy migration, in which the company is looking to implement its migration and scale quickly to meet a business case, the majority of applications are rehosted.

Replatforming "lift, tinker and shift"

Replatforming, also known as "lift, tinker, and shift," involves making a few cloud optimizations to realize a tangible benefit. Optimization is achieved without changing the core architecture of the application.

Repurchasing

Repurchasing involves moving from a traditional license to a software-as-a-service model. For example, a business might choose to implement the repurchasing strategy by migrating from a customer relationship management (CRM) system to Salesforce.com.

Retaining

Retaining consists of keeping applications that are critical for the business in the source environment. This might include applications that require major refactoring before they can be migrated, or work that can be postponed until a later time.

Retiring

Retiring is the process of removing applications that are no longer needed.

A company is migrating its on-premises database to Amazon RDS. What should the company do to ensure Amazon RDS costs are kept to a minimum?

Right-sizing is the process of matching instance types and sizes to your workload performance and capacity requirements at the lowest possible cost. By right-sizing before migration, you can significantly reduce your infrastructure costs. If you skip right-sizing to save time, your migration speed might be faster, but you will end up with higher cloud infrastructure spend for a potentially long time. Because your resource needs are always changing, right-sizing must become an ongoing process to continually achieve cost optimization. It's important to right-size when you first consider moving to the cloud and calculate the total cost of ownership. However, it's equally important to right-size periodically once you're in the cloud to ensure ongoing cost-performance optimization. Picking an Amazon RDS instance for a given workload means finding the instance family that most closely matches the CPU, disk I/O, and memory needs of your workload. Amazon RDS provides a wide selection of instances, which gives you lots of flexibility to right-size your resources to match capacity needs at the lowest cost.

AWS Outposts

Run AWS services on-premises

Cloud-based deployment

Run all parts of the application in the cloud. Migrate existing applications to the cloud. Design and build new applications in the cloud.

Stateful packet filtering

Security groups perform stateful packet filtering. They remember previous decisions made for incoming packets. Consider the same example of sending a request out from an Amazon EC2 instance to the internet. When a packet response for that request returns to the instance, the security group remembers your previous request. The security group allows the response to proceed, regardless of inbound security group rules.
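
By contrast with a network ACL, one inbound rule is enough here; the return traffic is allowed automatically. A minimal Python (boto3) sketch with a placeholder security group ID:

```python
import boto3

ec2 = boto3.client("ec2")

# Because security groups are stateful, allowing inbound HTTPS is enough;
# the corresponding response traffic is allowed automatically.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder group ID
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)
```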

AWS CodeDeploy

A service that automates code deployments to EC2 instances, allowing you to deploy reliably and rapidly, release new features quickly, and avoid downtime during deployment.

Select TWO examples of the AWS shared controls.

Shared Controls are controls which apply to both the infrastructure layer and customer layers, but in completely separate contexts or perspectives. In a shared control, AWS provides the requirements for the infrastructure and the customer must provide their own control implementation within their use of AWS services. Examples include: ** Patch Management - AWS is responsible for patching the underlying hosts and fixing flaws within the infrastructure, but customers are responsible for patching their guest OS and applications. ** Configuration Management - AWS maintains the configuration of its infrastructure devices, but a customer is responsible for configuring their own guest operating systems, databases, and applications. ** Awareness & Training - AWS trains AWS employees, but a customer must train their own employees. Additional information: A computer on which AWS runs one or more virtual machines is called a host machine, and each virtual machine is called a guest machine. AWS drives the concept of virtualization by allowing the physical host machine to operate multiple virtual machines as guests (for multiple customers) to help maximize the effective use of computing resources such as memory, network bandwidth and CPU cycles.

​ What are AWS shared controls?

Shared Controls are controls which apply to both the infrastructure layer and customer layers, but in completely separate contexts or perspectives. In a shared control, AWS provides the requirements for the infrastructure and the customer must provide their own control implementation within their use of AWS services. Examples include: - Patch Management - AWS is responsible for patching the underlying hosts and fixing flaws within the infrastructure, but customers are responsible for patching their guest OS and applications. - Configuration Management - AWS maintains the configuration of its infrastructure devices, but a customer is responsible for configuring their own guest operating systems, databases, and applications. - Awareness & Training - AWS trains AWS employees, but a customer must train their own employees.

Trials

Short-term free trial offers start from the date you activate a particular service. The length of each trial might vary by number of days or the amount of usage in the service. For example, Amazon Inspector offers a 90-day free trial. Amazon Lightsail (a service that enables you to run virtual private servers) offers 750 free hours of usage over a 30-day period.

Domain Name System (DNS)

Suppose that AnyCompany has a website hosted in the AWS Cloud. Customers enter the web address into their browser, and they are able to access the website. This happens because of Domain Name System (DNS) resolution. DNS resolution involves a customer DNS resolver communicating with a company DNS server. You can think of DNS as being the phone book of the internet. DNS resolution is the process of translating a domain name to an IP address.
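
A one-line illustration of DNS resolution using Python's standard library; example.com is just a sample domain.

```python
import socket

# Resolve a domain name to an IP address, the "phone book" lookup.
print(socket.gethostbyname("example.com"))  # prints an address such as 93.184.216.34
```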

How Amazon Route 53 and Amazon CloudFront deliver content

Suppose that AnyCompany's application is running on several Amazon EC2 instances. These instances are in an Auto Scaling group that attaches to an Application Load Balancer. 1. A customer requests data from the application by going to AnyCompany's website. 2. Amazon Route 53 uses DNS resolution to identify AnyCompany.com's corresponding IP address, 192.0.2.0, and sends this information back to the customer. 3. The customer's request is sent to the nearest edge location through Amazon CloudFront. 4. Amazon CloudFront connects to the Application Load Balancer, which sends the incoming packet to an Amazon EC2 instance.

AWS Organizations

Suppose that your company has multiple AWS accounts. You can use AWS Organizations to consolidate and manage multiple AWS accounts within a central location. When you create an organization, AWS Organizations automatically creates a root, which is the parent container for all the accounts in your organization. In AWS Organizations, you can centrally control permissions for the accounts in your organization by using service control policies (SCPs). SCPs enable you to place restrictions on the AWS services, resources, and individual API actions that users and roles in each account can access.
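
A minimal Python (boto3) sketch of creating and attaching an SCP; the policy name, denied action, and target OU ID are hypothetical.

```python
import json
import boto3

org = boto3.client("organizations")

# A minimal SCP denying one API action; the content uses standard IAM policy grammar.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Deny", "Action": "ec2:TerminateInstances", "Resource": "*"}
    ],
}

policy = org.create_policy(
    Name="deny-terminate",               # placeholder policy name
    Description="Block EC2 termination",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)

# Attach the SCP to an organizational unit (placeholder OU ID).
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-abcd-12345678",
)
```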

AWS Artifact Agreements

Suppose that your company needs to sign an agreement with AWS regarding your use of certain types of information throughout AWS services. You can do this through AWS Artifact Agreements. In AWS Artifact Agreements, you can review, accept, and manage agreements for an individual account and for all your accounts in AWS Organizations. Different types of agreements are offered to address the needs of customers who are subject to specific regulations, such as the Health Insurance Portability and Accountability Act (HIPAA).

Cloud Computing

The on-demand delivery of IT resources over the Internet with pay-as-you-go pricing. Instead of buying, owning, and maintaining physical data centers and servers, you can access technology services, such as computing power, storage, and databases, on an as-needed basis from a cloud provider like Amazon Web Services (AWS).

You have AWS Basic support, and you have discovered that some AWS resources are being used maliciously, and those resources could potentially compromise your data. What should you do?

The AWS Abuse team can assist you when AWS resources are being used to engage in the following types of abusive behavior: I. Spam: You are receiving unwanted emails from an AWS-owned IP address, or AWS resources are being used to spam websites or forums. II. Port scanning: Your logs show that one or more AWS-owned IP addresses are sending packets to multiple ports on your server, and you believe this is an attempt to discover unsecured ports. III. Denial of service attacks (DOS): Your logs show that one or more AWS-owned IP addresses are being used to flood ports on your resources with packets, and you believe this is an attempt to overwhelm or crash your server or software running on your server. IV. Intrusion attempts: Your logs show that one or more AWS-owned IP addresses are being used to attempt to log in to your resources. V. Hosting objectionable or copyrighted content: You have evidence that AWS resources are being used to host or distribute illegal content or distribute copyrighted content without the consent of the copyright holder. VI. Distributing malware: You have evidence that AWS resources are being used to distribute software that was knowingly created to compromise or cause harm to computers or machines on which it is installed.

Go global in minutes

The AWS Cloud global footprint enables you to quickly deploy applications to customers around the world, while providing them with low latency.

What does the AWS Health Dashboard provide?

The AWS Health Dashboard (previously AWS Personal Health Dashboard) is the single place to learn about the availability and operations of AWS services. You can view the overall status of all AWS services, and you can sign in to access a personalized view of the health of the specific services that are powering your workloads and applications. AWS Health Dashboard proactively notifies you when AWS experiences any events that may affect you, helping provide quick visibility and guidance to minimize the impact of events in progress, and plan for any scheduled changes, such as AWS hardware maintenance. The benefits of the AWS Health Dashboard include: **A Personalized View of Service Health: Personal Health Dashboard gives you a personalized view of the status of the AWS services that power your applications, enabling you to quickly see when AWS is experiencing issues that may impact you. For example, in the event of a lost EBS volume associated with one of your EC2 instances, you would gain quick visibility into the status of the specific service you are using, helping save precious time troubleshooting to determine root cause. **Proactive Notifications: The dashboard also provides forward-looking notifications, and you can set up alerts across multiple channels, including email and mobile notifications, so you receive timely and relevant information to help plan for scheduled changes that may affect you. In the event of AWS hardware maintenance activities that may impact one of your EC2 instances, for example, you would receive an alert with information to help you plan for, and proactively address any issues associated with the upcoming change. **Detailed Troubleshooting Guidance: When you get an alert, it includes remediation details and specific guidance to enable you to take immediate action to address AWS events impacting your resources. For example, in the event of an AWS hardware failure impacting one of your EBS volumes, your alert would include a list of your affected resources, a recommendation to restore your volume, and links to the steps to help you restore it from a snapshot. This targeted and actionable information reduces the time needed to resolve issues.

AWS Snow Family

The AWS Snow Family is a collection of physical devices that help to physically transport up to exabytes of data into and out of AWS. AWS Snow Family is composed of AWS Snowcone, AWS Snowball, and AWS Snowmobile.

Which security resources are available to any user for free?

The AWS free security resources include the AWS Security Blog, Whitepapers, AWS Developer Forums, Articles and Tutorials, Training, Security Bulletins, Compliance Resources and Testimonials.

Business Perspective

The Business Perspective ensures that IT aligns with business needs and that IT investments link to key business results. Use the Business Perspective to create a strong business case for cloud adoption and prioritize cloud adoption initiatives. Ensure that your business strategies and goals align with your IT strategies and goals. Common roles in the Business Perspective include: business managers, finance managers, budget owners, and strategy stakeholders.

Governance Perspective

The Governance Perspective focuses on the skills and processes to align IT strategy with business strategy. This ensures that you maximize the business value and minimize risks. Use the Governance Perspective to understand how to update the staff skills and processes necessary to ensure business governance in the cloud. Manage and measure cloud investments to evaluate business outcomes. Common roles in the Governance Perspective include: the Chief Information Officer (CIO), program managers, enterprise architects, business analysts, and portfolio managers.

Which of the following AWS services is designed with native Multi-AZ fault tolerance in mind? (Choose TWO)

The Multi-AZ principle involves deploying an AWS resource in multiple Availability Zones to achieve high availability for that resource. DynamoDB automatically spreads the data and traffic for your tables over a sufficient number of servers to handle your throughput and storage requirements, while maintaining consistent and fast performance. All of your data is stored on solid-state disks (SSDs) and is automatically replicated across multiple Availability Zones in an AWS Region, providing built-in fault tolerance in the event of a server failure or Availability Zone outage. Amazon S3 provides durable infrastructure to store important data and is designed for durability of 99.999999999% of objects. Data in all Amazon S3 storage classes is redundantly stored across multiple Availability Zones (except S3 One Zone-IA).

Operations Perspective

The Operations Perspective helps you to enable, run, use, operate, and recover IT workloads to the level agreed upon with your business stakeholders. Define how day-to-day, quarter-to-quarter, and year-to-year business is conducted. Align with and support the operations of the business. The AWS CAF helps these stakeholders define current operating procedures and identify the process changes and training needed to implement successful cloud adoption. Common roles in the Operations Perspective include: IT operations managers and IT support managers.

People Perspective

The People Perspective supports development of an organization-wide change management strategy for successful cloud adoption. Use the People Perspective to evaluate organizational structures and roles, new skill and process requirements, and identify gaps. This helps prioritize training, staffing, and organizational changes. Common roles in the People Perspective include: human resources, staffing, and people managers.

Platform Perspective

The Platform Perspective includes principles and patterns for implementing new solutions on the cloud, and migrating on-premises workloads to the cloud. Use a variety of architectural models to understand and communicate the structure of IT systems and their relationships. Describe the architecture of the target state environment in detail. Common roles in the Platform Perspective include: the Chief Technology Officer (CTO), IT managers, and solutions architects.

Security Perspective

The Security Perspective ensures that the organization meets security objectives for visibility, auditability, control, and agility. Use the AWS CAF to structure the selection and implementation of security controls that meet the organization's needs. Common roles in the Security Perspective include: the Chief Information Security Officer (CISO), IT security managers, and IT security analysts.

Technical Account Manager (TAM)

The TAM is your primary point of contact at AWS. If your company subscribes to Enterprise Support or Enterprise On-Ramp, your TAM educates, empowers, and evolves your cloud journey across the full range of AWS services. TAMs provide expert engineering guidance, help you design solutions that efficiently integrate AWS services, assist with cost-effective and resilient architectures, and provide direct access to AWS programs and a broad community of experts. For example, suppose that you are interested in developing an application that uses several AWS services together. Your TAM could provide insights into how to best use the services together, while aligning with the specific needs that your company is hoping to address through the new application.

Consolidated Billing

The consolidated billing feature of AWS Organizations enables you to receive a single bill for all AWS accounts in your organization. By consolidating, you can easily track the combined costs of all the linked accounts in your organization. The default maximum number of accounts allowed for an organization is 4, but you can contact AWS Support to increase your quota, if needed. On your monthly bill, you can review itemized charges incurred by each account. This enables you to have greater transparency into your organization's accounts while still maintaining the convenience of receiving a single monthly bill. Another benefit of consolidated billing is the ability to share bulk discount pricing, Savings Plans, and Reserved Instances across the accounts in your organization. For instance, one account might not have enough monthly usage to qualify for discount pricing. However, when multiple accounts are combined, their aggregated usage may result in a benefit that applies across all accounts in the organization.

Cost optimization

The cost optimization pillar focuses on avoiding unnecessary costs. Key topics include understanding spending over time and controlling fund allocation, selecting resources of the right type and quantity, and scaling to meet business needs without overspending. - Implement Cloud Financial Management; - Adopt a consumption model; - Measure overall efficiency; - Stop spending money on undifferentiated heavy lifting; - Analyze and attribute expenditure;

Which of the following has the greatest impact on cost?

The factors that have the greatest impact on cost include: Compute, Storage and Data Transfer Out. Their pricing differs according to the service you use.

Which of the following are factors in determining the appropriate database technology to use for a specific workload?

The following questions can help you make decisions on which solutions to include in your architecture: - Is this a read-heavy, write-heavy, or balanced workload? How many reads and writes per second are you going to need? How will those values change if the number of users increases? - How much data will you need to store and for how long? How quickly do you foresee this will grow? Is there an upper limit in the foreseeable future? What is the size of each object (average, min, max)? How are these objects going to be accessed? - What are the requirements in terms of durability of data? Is this data store going to be your "source of truth"? - What are your latency requirements? How many concurrent users do you need to support? - What is your data model and how are you going to query the data? Are your queries relational in nature (e.g., JOINs between multiple tables)? Could you denormalize your schema to create flatter data structures that are easier to scale? - What kind of functionality do you require? Do you need strong integrity controls or are you looking for more flexibility (e.g., schema-less data stores)? Do you require sophisticated reporting or search capabilities? Are your developers more familiar with relational databases than NoSQL?

A Japanese company hosts their applications on Amazon EC2 instances in the Tokyo Region. The company has opened new branches in the United States, and the US users are complaining of high latency. What can the company do to reduce latency for the users in the US while minimizing costs?

The only way to reduce latency for the US users is to provision new Amazon EC2 instances in a Region closer to or in the US, OR by using Amazon CloudFront to cache copies of the content in edge locations close to the US users. In both cases, user requests will travel a shorter distance over the network, and the performance will improve.

Operational Excellence Pillar

The operational excellence pillar focuses on running and monitoring systems, and continually improving processes and procedures. Key topics include automating changes, responding to events, and defining standards to manage daily operations. - Perform operations as code; - Make frequent, small, reversible changes; - Refine operations procedures frequently; - Anticipate failure; - Learn from operational failures;

Performance efficiency

The performance efficiency pillar focuses on structured and streamlined allocation of IT and computing resources. Key topics include selecting resource types and sizes optimized for workload requirements, monitoring performance, and maintaining efficiency as business needs evolve. Best Practices: - Democratize advanced technologies; - Go global in minutes; - Use serverless architectures; - Experiment more often; - Consider mechanical sympathy;

What is cloud computing?

The practice of using a network of remote servers hosted on the Internet to store, manage, and process data, rather than a local server or a personal computer. - On-premises: + You own the server; + You hire the IT people; + You pay or rent the real estate; + You take all the risk; - Cloud Providers: + Someone else owns the servers; + Someone else hires the IT people; + Someone else pays or rents the real estate;

Reliability Pillar

The reliability pillar focuses on workloads performing their intended functions and how to recover quickly from failure to meet demands. Key topics include distributed system design, recovery planning, and adapting to changing requirements. Best practices: - Automatically recover from failure; - Test recovery procedures; - Scale horizontally to increase aggregate workload availability; - Stop guessing capacity; - Manage change in automation;

Security Pillar

The security pillar focuses on protecting information and systems. Key topics include confidentiality and integrity of data, managing user permissions, and establishing controls to detect security events. Best Practices are: - Implement a strong identity foundation; - Enable traceability; - Apply security at all layers; - Automate security best practices; - Protect data in transit and at rest; - Keep people away from data; - Prepare for security events;

shared responsibility model

The shared responsibility model divides into customer responsibilities (commonly referred to as "security in the cloud") and AWS responsibilities (commonly referred to as "security of the cloud"). You can think of this model as being similar to the division of responsibilities between a homeowner and a homebuilder. The builder (AWS) is responsible for constructing your house and ensuring that it is solidly built. As the homeowner (the customer), it is your responsibility to secure everything in the house by ensuring that the doors are closed and locked.

Sustainability

The sustainability pillar focuses on minimizing the environmental impacts of running cloud workloads. Key topics include a shared responsibility model for sustainability, understanding impact, and maximizing utilization to minimize required resources and reduce downstream impacts.

Serverless Computing

The term "serverless" means that your code runs on servers, but you do not need to provision or manage these servers. With serverless computing, you can focus more on innovating new products and features instead of maintaining servers. Another benefit of serverless computing is the flexibility to scale serverless applications automatically. Serverless computing can adjust the applications' capacity by modifying the units of consumptions, such as throughput and memory.

Virtual private gateway

To access private resources in a VPC, you can use a virtual private gateway. The virtual private gateway is the component that allows protected internet traffic to enter the VPC; it allows traffic into the VPC only if it is coming from an approved network. You can think of the internet as the road between your home and the coffee shop. Suppose that you are traveling on this road with a bodyguard to protect you. You are still using the same road as other customers, but with an extra layer of protection. The bodyguard is like a virtual private network (VPN) connection that encrypts (or protects) your internet traffic from all the other requests around it. Even though your connection to the coffee shop now has extra protection, traffic jams are still possible because you're using the same road as other customers.

Which design principles relate to performance efficiency in AWS?

There are five design principles for performance efficiency in the cloud: 1- Democratize advanced technologies: Technologies that are difficult to implement can become easier to consume by pushing that knowledge and complexity into the cloud vendor's domain. Rather than having your IT team learn how to host and run a new technology, they can simply consume it as a service. For example, NoSQL databases, media transcoding, and machine learning are all technologies that require expertise that is not evenly dispersed across the technical community. In the cloud, these technologies become services that your team can consume while focusing on product development rather than resource provisioning and management. 2- Go global in minutes: Easily deploy your system in multiple Regions around the world with just a few clicks. This allows you to provide lower latency and a better experience for your customers at minimal cost. 3- Use serverless architectures: In the cloud, serverless architectures remove the need for you to run and maintain servers to carry out traditional compute activities. For example, storage services can act as static websites, removing the need for web servers, and event services can host your code for you. This not only removes the operational burden of managing these servers, but also can lower transactional costs because these managed services operate at cloud scale. 4- Experiment more often: With virtual and automatable resources, you can quickly carry out comparative testing using different types of instances, storage, or configurations. 5- Consider mechanical sympathy: Use the technology approach that aligns best with what you are trying to achieve. For example, consider data access patterns when selecting database or storage approaches.

What are two advantages of using Cloud Computing over using traditional data centers? (Choose TWO)

These are things that traditional web hosting cannot provide: **High availability (eliminating single points of failure): A system is highly available when it can withstand the failure of an individual component or multiple components, such as hard disks, servers, and network links. The best way to understand and avoid single points of failure is to begin by making a list of all major points of your architecture. You need to break the points down and understand them further. Then, review each of these points and think about what would happen if any of these failed. AWS gives you the opportunity to automate recovery and reduce disruption at every layer of your architecture. Additionally, AWS provides fully managed services that enable customers to offload the administrative burdens of operating and scaling the infrastructure to AWS so that they don't have to worry about high availability or single points of failure. For example, AWS Lambda and DynamoDB are serverless services; there are no servers to provision, patch, or manage and no software to install, maintain, or operate. Availability and fault tolerance are built in, eliminating the need to architect your applications for these capabilities. **Distributed infrastructure: The AWS Cloud operates in over 75 Availability Zones within over 20 geographic Regions around the world, with announced plans for more Availability Zones and Regions, allowing you to reduce latency to users from all around the world. **On-demand infrastructure for scaling applications or tasks: AWS allows you to provision the required resources for your application in minutes and also allows you to stop them when you don't need them. **Cost savings: You don't have to run your own data center for internal or private servers, so your IT department doesn't have to make bulk purchases of servers which may never get used, or may be inadequate. The "pay as you go" model from AWS allows you to pay only for what you use and the ability to scale down to avoid over-spending. With AWS you don't have to pay an entire IT department to maintain that hardware -- you don't even have to pay an accountant to figure out how much hardware you can afford or how much you need to purchase.

12 Months Free

These offers are free for 12 months following your initial sign-up date to AWS. Examples include specific amounts of Amazon S3 Standard Storage, thresholds for monthly hours of Amazon EC2 compute time, and amounts of Amazon CloudFront data transfer out.

Always Free

These offers do not expire and are available to all AWS customers. For example, AWS Lambda allows 1 million free requests and up to 3.2 million seconds of compute time per month. Amazon DynamoDB allows 25 GB of free storage per month.

CloudTrail Insights

This optional feature allows CloudTrail to automatically detect unusual API activities in your AWS account.

Landscape of CSPs

Tier-1 (Top Tier) - Early to market, wide offering, strong synergies between services, well recognized in the industry: AWS, Microsoft Azure, Google Cloud Platform (GCP), Alibaba Cloud. Tier-2 (Mid Tier) - Backed by well-known tech companies; slower to innovate and turned to specialization: IBM Cloud, Oracle Cloud, Rackspace (OpenStack). Tier-3 (Light Tier) - Virtual Private Server (VPS) providers that turned to offering a core IaaS product; simple and cost-effective: Vultr, DigitalOcean, Linode.

Internet gateway

To allow public traffic from the internet to access your VPC, you attach an internet gateway to the VPC. An internet gateway is a connection between a VPC and the internet. You can think of an internet gateway as being similar to a doorway that customers use to enter the coffee shop. Without an internet gateway, no one can access the resources within your VPC.
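
A minimal Python (boto3) sketch with a placeholder VPC ID:

```python
import boto3

ec2 = boto3.client("ec2")

# Create the internet gateway and attach it to a VPC (placeholder VPC ID).
igw = ec2.create_internet_gateway()
ec2.attach_internet_gateway(
    InternetGatewayId=igw["InternetGateway"]["InternetGatewayId"],
    VpcId="vpc-0123456789abcdef0",
)
```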

Advantages of cloud computing

Trade upfront expense for variable expense. Benefit from massive economies of scale. Stop guessing capacity. Increase speed and agility. Stop spending money running and maintaining data centers. Go global in minutes.

Machine Learning (ML)

Traditional machine learning (ML) development is complex, expensive, time consuming, and error prone. AWS offers Amazon SageMaker to remove the difficult work from the process and empower you to build, train, and deploy ML models quickly. You can use ML to analyze data, solve complex problems, and predict outcomes before they happen.

Under the shared responsibility model, which of the following is the responsibility of AWS?

Under the shared responsibility model, AWS is responsible for the hardware and software that run AWS services. This includes patching the infrastructure software and configuring infrastructure devices. As a customer, you are responsible for implementing best practices for data encryption, patching guest operating system and applications, identity and access management, and network & firewall configurations.

Trade upfront expense for variable expense

Upfront expenses include data centers, physical servers, and other resources that you would need to invest in before using computing resources. Instead of investing heavily in data centers and servers before you know how you're going to use them, you can pay only when you consume computing resources.

Accelerated computing instances

Use hardware accelerators, or coprocessors, to perform some functions more efficiently than is possible in software running on CPUs. Examples of these functions include floating-point number calculations, graphics processing, and data pattern matching. In computing, a hardware accelerator is a component that can expedite data processing. Accelerated computing instances are ideal for workloads such as graphics applications, game streaming, and application streaming.

AWS Billing & Cost Management dashboard

Use the AWS Billing & Cost Management dashboard to pay your AWS bill, monitor your usage, and analyze and control your costs. - Compare your current month-to-date balance with the previous month, and get a forecast of the next month based on current usage. - View month-to-date spend by service. - View Free Tier usage by service. - Access Cost Explorer and create budgets. - Purchase and manage Savings Plans. - Publish AWS Cost and Usage Reports.

Packet

When a customer requests data from an application hosted in the AWS Cloud, this request is sent as a packet. A packet is a unit of data sent over the internet or a network. It enters into a VPC through an internet gateway. Before a packet can enter into a subnet or exit from a subnet, it checks for permissions. These permissions indicate who sent the packet and how the packet is trying to communicate with the resources in a subnet. The VPC component that checks packet permissions for subnets is a network access control list (ACL).

Data security is one of the top priorities of AWS. How does AWS deal with old storage devices that have reached the end of their useful life?

When a storage device has reached the end of its useful life, AWS procedures include a decommissioning process that is designed to prevent customer data from being exposed to unauthorized individuals. AWS uses specific techniques to destroy data as part of the decommissioning process. All decommissioned magnetic storage devices are degaussed and physically destroyed in accordance with industry-standard practices.

AWS Management Console

When first getting started with AWS, people often begin with the AWS Management Console, a web-based console that you log in to through a browser. The console comprises a broad collection of service consoles for managing AWS resources. By working in the console, you do not need to worry about scripting or syntax. You can also select the specific Region you want an AWS service to be in.

How to interact with AWS?

When infrastructure becomes virtual, as with cloud computing, the way developers work with infrastructure changes slightly. Instead of physically managing infrastructure, you logically manage it, through the AWS Application Programming Interface (AWS API). When you create, delete, or change any AWS resource, you will use API calls to AWS to do that. You can make these API calls in several ways, but we will focus on these to introduce this topic: 1. The AWS Management Console 2. The AWS Command Line Interface (AWS CLI) 3. IDE and IDE toolkits 4. AWS Software Development Kits (SDKs)

6 strategies for migration

When migrating applications to the cloud, six of the most common migration strategies that you can implement are: Rehosting Replatforming Refactoring/re-architecting Repurchasing Retaining Retiring

AWS Trusted Advisor dashboard

When you access the Trusted Advisor dashboard on the AWS Management Console, you can review completed checks for cost optimization, performance, security, fault tolerance, and service limits. For each category: -The green check indicates the number of items for which it detected no problems. -The orange triangle represents the number of recommended investigations. -The red circle represents the number of recommended actions.

AWS account root user

When you first create an AWS account, you begin with an identity known as the root user. The root user is accessed by signing in with the email address and password that you used to create your AWS account. You can think of the root user as being similar to the owner of the coffee shop. It has complete access to all the AWS services and resources in the account.

AWS CloudFormation

With AWS CloudFormation, you can treat your infrastructure as code. This means that you can build an environment by writing lines of code instead of using the AWS Management Console to individually provision resources. AWS CloudFormation provisions your resources in a safe, repeatable manner, enabling you to frequently build your infrastructure and applications without having to perform manual actions. It determines the right operations to perform when managing your stack and rolls back changes automatically if it detects errors.
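
A minimal sketch, assuming Python and boto3 are available: the inline template below declares a single S3 bucket, and create_stack provisions it. The stack and bucket names are placeholders.

```python
import boto3

cfn = boto3.client("cloudformation")

# A tiny CloudFormation template declaring one S3 bucket, expressed as code.
template = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  DemoBucket:
    Type: AWS::S3::Bucket
"""

# CloudFormation provisions (and, on error, rolls back) the declared resources.
cfn.create_stack(StackName="demo-stack", TemplateBody=template)
```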

AWS Elastic Beanstalk

With AWS Elastic Beanstalk, you provide code and configuration settings, and Elastic Beanstalk deploys the resources necessary to perform the following tasks: - Capacity adjustment; - Load balancing; - Automatic scaling; - Application health monitoring;

Serverless applications

With AWS, serverless refers to applications that don't require you to provision, maintain, or administer servers. You don't need to worry about fault tolerance or availability. AWS handles these capabilities for you. AWS Lambda is an example of a service that you can use to run serverless applications. If you design your architecture to trigger Lambda functions to run your code, you can bypass the need to manage a fleet of servers. Building your architecture with serverless applications enables your developers to focus on their core product instead of managing and operating servers.
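
For illustration, a minimal Python Lambda handler; the "name" event field is hypothetical.

```python
# A minimal Python handler for AWS Lambda. Lambda invokes this function on
# demand; there are no servers for you to provision, patch, or manage.
def lambda_handler(event, context):
    name = event.get("name", "world")  # "name" is a hypothetical input field
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```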

CloudWatch alarms

With CloudWatch, you can create alarms that automatically perform actions if the value of your metric has gone above or below a predefined threshold. When configuring the alarm, you can specify to receive a notification whenever this alarm is triggered.
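
A minimal Python (boto3) sketch of such an alarm; the instance ID, threshold, and SNS topic ARN are placeholders.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when average CPU on one instance exceeds 80% for two
# consecutive 5-minute periods, then notify an SNS topic.
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)
```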

Stop guessing capacity

With cloud computing, you don't have to predict how much infrastructure capacity you will need before deploying an application. For example, you can launch Amazon Elastic Compute Cloud (Amazon EC2) instances when needed and pay only for the compute time you use. Instead of paying for resources that are unused or dealing with limited capacity, you can access only the capacity that you need, and scale in or out in response to demand.

Amazon S3 Glacier Instant Retrieval

Works well for archived data that requires immediate access. Can retrieve objects within a few milliseconds. When you decide between the options for archival storage, consider how quickly you must retrieve the archived objects. You can retrieve objects stored in the Amazon S3 Glacier Instant Retrieval storage class within milliseconds, with the same performance as Amazon S3 Standard.

Which of the following can be used to enable the Virtual Multi-Factor Authentication?

You can use either the AWS IAM console or the AWS CLI to enable a virtual MFA device for an IAM user in your account.

In order to implement best practices when dealing with a "Single Point of Failure," you should attempt to build as much automation as possible in both detecting and reacting to failure. Which of the following AWS services would help? (Choose TWO)

You should attempt to build as much automation as possible in both detecting and reacting to failure. You can use services like ELB and Amazon Route 53 to configure health checks and mask failure by routing traffic only to healthy endpoints. In addition, Auto Scaling can be configured to automatically replace unhealthy nodes. You can also replace unhealthy nodes using the Amazon EC2 auto-recovery feature or services such as AWS OpsWorks and AWS Elastic Beanstalk. It won't be possible to predict every possible failure scenario on day one. Make sure you collect enough logs and metrics to understand normal system behavior. After you understand that, you will be able to set up alarms that trigger an automated response or manual intervention.

Amazon Route 53

a DNS web service. It gives developers and businesses a reliable way to route end users to internet applications hosted in AWS. Amazon Route 53 connects user requests to infrastructure running in AWS (such as Amazon EC2 instances and load balancers). It can route users to infrastructure outside of AWS. Another feature of Route 53 is the ability to manage the DNS records for domain names. You can register new domain names directly in Route 53. You can also transfer DNS records for existing domain names managed by other domain registrars. This enables you to manage all of your domain names within a single location.

Amazon Connect

a cloud-based contact center solution. Amazon Connect makes it easy to set up and manage a customer contact center and provide reliable customer engagement at any scale. You can set up a contact center in just a few steps, add agents from anywhere, and start to engage with your customers right away. Amazon Connect provides rich metrics and real-time reporting that allow you to optimize contact routing. You can also resolve customer issues more efficiently by putting customers in touch with the right agents. Amazon Connect integrates with your existing systems and business applications to provide visibility and insight into all of your customer interactions.

AWS CloudHSM

a cloud-based hardware security module (HSM) that enables you to easily generate and use your own encryption keys on the AWS Cloud.

Which of the below options is true of Amazon Cloud Directory?

a cloud-native, highly scalable, high-performance directory service that provides web-based directories to make it easy for you to organize and manage all your application resources such as users, groups, locations, devices, and policies, and the rich relationships between them.

Slowloris Attack

a denial-of-service attack program which allows an attacker to overwhelm a targeted server by opening and maintaining many simultaneous HTTP connections between the attacker and the target.

Three-tier architecture

a design of user computers and servers that consists of three categories, or tiers

AWS Marketplace

a digital catalog that includes thousands of software listings from independent software vendors. You can use AWS Marketplace to find, test, and buy software that runs on AWS. For each listing in AWS Marketplace, you can access detailed information on pricing options, available support, and reviews from other AWS customers.

Amazon QuickSight

a fast, cloud-powered business analytics service that makes it easy to build visualizations, perform ad-hoc analysis, and quickly get business insights from your data

Amazon Elastic Kubernetes Service (Amazon EKS)

a fully managed service that you can use to run Kubernetes on AWS. Kubernetes is open-source software that enables you to deploy and manage containerized applications at scale.

Amazon Elastic Container Service (Amazon ECS)

a highly scalable, high-performance container management system that enables you to run and scale containerized applications on AWS.

AWS Storage Gateway

a hybrid storage service that enables on-premises applications to seamlessly use AWS cloud storage. Enterprises can use the service for backup and archiving, disaster recovery, cloud data processing, storage tiering, and migration. The storage gateway connects to AWS storage services, such as Amazon S3, Amazon S3 Glacier, Amazon S3 Glacier Deep Archive, Amazon EBS, and AWS Backup, providing storage for files, volumes, snapshots, and virtual tapes in AWS.

Amazon DynamoDB

a key-value database service. It delivers single-digit millisecond performance at any scale

Amazon Simple Queue Service (Amazon SQS)

a message queuing service. Using Amazon SQS, you can send, store, and receive messages between software components, without losing messages or requiring other services to be available. In Amazon SQS, an application sends messages into a queue. A user or service retrieves a message from the queue, processes it, and then deletes it from the queue.
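
A minimal Python (boto3) sketch of the send/receive/delete cycle described above; the queue name and message body are illustrative.

```python
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="orders")["QueueUrl"]  # placeholder name

# A producer sends a message into the queue...
sqs.send_message(QueueUrl=queue_url, MessageBody="order #42: 2 lattes")

# ...and a consumer retrieves it, processes it, then deletes it from the queue.
messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
for msg in messages.get("Messages", []):
    print(msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```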

Amazon Comprehend

a natural language processing (NLP) service that uses machine learning to find insights and relationships in text

ACID (Atomicity, Consistency, Isolation, and Durability)

a set of properties of database transactions intended to guarantee validity even in the event of errors, power failures, etc. If one of the steps in the transaction fails, then all the steps must be rolled back to the state before any change was made to the database.
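
A small, self-contained illustration of atomicity and rollback using Python's built-in sqlite3 module; the table and amounts are invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

# Transfer funds atomically: either both updates succeed, or neither does.
try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance - 30 WHERE name = 'alice'")
        conn.execute("UPDATE accounts SET balance = balance + 30 WHERE name = 'bob'")
        raise RuntimeError("simulated failure")  # forces a rollback
except RuntimeError:
    pass

# Both balances are unchanged: the partial transfer was rolled back.
print(list(conn.execute("SELECT * FROM accounts")))  # [('alice', 100), ('bob', 0)]
```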

Amazon Lightsail

provides a low-cost Virtual Private Server (VPS) in the cloud and includes everything you need to jumpstart your project - virtual machines, containers, databases, CDN, load balancers, SSD-based storage, DNS management, etc. - for a low, predictable monthly price.

AWS Transit Gateway

a network transit hub that simplifies how customers interconnect all of their VPCs, across thousands of AWS accounts and into their on-premises networks. Customers can easily and quickly connect into a single centrally managed gateway and rapidly grow the size of their network. Transit Gateway acts as a hub that controls how traffic is routed among all the connected networks, which act like spokes. This hub-and-spoke model significantly simplifies management and reduces operational costs because each network only has to connect to the Transit Gateway and not to every other network. Any new VPC is simply connected to the Transit Gateway and is then automatically available to every other network that is connected to the Transit Gateway. This ease of connectivity makes it easy to scale networks as businesses grow.

AWS Shield Advanced

a paid service that provides detailed attack diagnostics and the ability to detect and mitigate sophisticated DDoS attacks. It also integrates with other services such as Amazon CloudFront, Amazon Route 53, and Elastic Load Balancing. Additionally, you can integrate AWS Shield with AWS WAF by writing custom rules to mitigate complex DDoS attacks.

What is IPsec?

a protocol suite for securing Internet Protocol (IP) communications by authenticating and encrypting each IP packet of a data stream.

Amazon Simple Notification Service (Amazon SNS)

a publish/subscribe service. Using Amazon SNS topics, a publisher publishes messages to subscribers. This is similar to the coffee shop; the cashier provides coffee orders to the barista who makes the drinks. In Amazon SNS, subscribers can be web servers, email addresses, AWS Lambda functions, or several other options.
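
A minimal Python (boto3) sketch of the publish/subscribe flow; the topic name, message, and email address are placeholders.

```python
import boto3

sns = boto3.client("sns")

# Create a topic (publisher side) and publish a message to all subscribers.
topic_arn = sns.create_topic(Name="coffee-orders")["TopicArn"]  # placeholder name
sns.publish(TopicArn=topic_arn, Subject="New order", Message="1 espresso, table 4")

# Subscribers (an SQS queue, an email address, a Lambda function, ...) each
# receive a copy; here, an email subscription as one example.
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="barista@example.com")
```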

Amazon Elastic File System (Amazon EFS)

a scalable file system used with AWS Cloud services and on-premises resources. As you add and remove files, Amazon EFS grows and shrinks automatically. It can scale on demand to petabytes without disrupting applications. - Shared access to files; - Web serving; - Big data and analytics; - User home directories; - Content management; - Container storage; - Highly available within an AWS Region; - Single or multi-AZ deployment; - Designed for similar durability as S3;

Amazon Macie

a security service that uses machine learning to automatically discover, classify, and protect sensitive data in AWS. Amazon Macie recognizes sensitive data such as personally identifiable information (PII) or intellectual property, and provides you with dashboards and alerts that give visibility into how this data is being accessed or moved.

AWS Fargate

a serverless compute engine for containers. It works with both Amazon ECS and Amazon EKS. When using AWS Fargate, you do not need to provision or manage servers. AWS Fargate manages your server infrastructure for you. You can focus more on innovating and developing your applications, and you pay only for the resources that are required to run your containers.

AWS Direct Connect

a service that enables you to establish a dedicated private connection between your data center and a VPC. Suppose that there is an apartment building with a hallway directly linking the building to the coffee shop. Only the residents of the apartment building can travel through this hallway. This private hallway provides the same type of dedicated connection as AWS Direct Connect. Residents are able to get into the coffee shop without needing to use the public road shared with other customers. The private connection that AWS Direct Connect provides helps you to reduce network costs and increase the amount of bandwidth that can travel through your network.

Amazon Relational Database Service (RDS)

a service that enables you to run relational databases in the AWS Cloud. Amazon RDS is a managed service that automates tasks such as hardware provisioning, database setup, patching, and backups. With these capabilities, you can spend less time completing administrative tasks and more time using data to innovate your applications. You can integrate Amazon RDS with other services to fulfill your business and operational needs, such as using AWS Lambda to query your database from a serverless application. Amazon RDS provides a number of different security options. Many Amazon RDS database engines offer encryption at rest (protecting data while it is stored) and encryption in transit (protecting data while it is being sent and received).

Amazon Elastic Block Store (Amazon EBS)

a service that provides block-level storage volumes that you can use with Amazon EC2 instances. If you stop or terminate an Amazon EC2 instance, all the data on the attached EBS volume remains available. To create an EBS volume, you define the configuration (such as volume size and type) and provision it. After you create an EBS volume, it can attach to an Amazon EC2 instance. Because EBS volumes are for data that needs to persist, it's important to back up the data. You can take incremental backups of EBS volumes by creating Amazon EBS snapshots. - Low-latency access to data; - Boot/data volumes for EC2; - Relational and NoSQL database storage; - Highly available within an AZ (99.999% within the AZ); - Designed for 99.8% - 99.9% durability;
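
A minimal Python (boto3) sketch of the provision/attach/snapshot lifecycle; the Availability Zone, instance ID, and device name are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Provision a 100 GiB gp3 volume in one Availability Zone...
volume = ec2.create_volume(AvailabilityZone="us-east-1a", Size=100, VolumeType="gp3")
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

# ...attach it to an instance in the same AZ (placeholder instance ID)...
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",
    Device="/dev/xvdf",
)

# ...and back it up with an incremental EBS snapshot.
ec2.create_snapshot(VolumeId=volume["VolumeId"], Description="nightly backup")
```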

Amazon GuardDuty

a service that provides intelligent threat detection for your AWS infrastructure and resources. It identifies threats by continuously monitoring the network activity and account behavior within your AWS environment. If GuardDuty detects any threats, you can review detailed findings about them from the AWS Management Console. Findings include recommended steps for remediation. You can also configure AWS Lambda functions to take remediation steps automatically in response to GuardDuty's security findings.

Amazon Simple Storage Service (Amazon S3)

a service that provides object-level storage. Amazon S3 stores data as objects in buckets. You can upload any type of file to Amazon S3, such as images, videos, text files, and so on. For example, you might use Amazon S3 to store backup files, media files for a website, or archived documents. Amazon S3 offers unlimited storage space. The maximum file size for an object in Amazon S3 is 5 TB. When you upload a file to Amazon S3, you can set permissions to control visibility and access to it. You can also use the Amazon S3 versioning feature to track changes to your objects over time. - Highly available within an AWS Region (99.99% for standard storage); - Designed for 99.999999999% (11 nines) durability;
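
As a sketch, storing an object and enabling versioning with boto3 (the bucket name is hypothetical; bucket names are globally unique):

    import boto3

    s3 = boto3.client("s3")

    # Bucket creation (in us-east-1; other Regions need CreateBucketConfiguration).
    s3.create_bucket(Bucket="example-backup-bucket-1234")

    # Track changes to objects over time.
    s3.put_bucket_versioning(
        Bucket="example-backup-bucket-1234",
        VersioningConfiguration={"Status": "Enabled"},
    )

    # Upload an object of any type, up to 5 TB each.
    s3.put_object(
        Bucket="example-backup-bucket-1234",
        Key="archives/report-2024.pdf",
        Body=b"<file bytes>",
    )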

AWS Artifact

a service that provides on-demand access to AWS security and compliance reports and select online agreements. AWS Artifact consists of two main sections: AWS Artifact Agreements and AWS Artifact Reports.

Availability Zone

a single data center or a group of data centers within a Region. Availability Zones are located tens of miles apart from each other. This is close enough to have low latency (the time between when content is requested and when it is received) between Availability Zones. However, if a disaster occurs in one part of the Region, they are distant enough to reduce the chance that multiple Availability Zones are affected.

Edge locations

a site that Amazon CloudFront uses to store cached copies of your content closer to your customers for faster delivery.

AWS Snowcone

a small, rugged, and secure edge computing and data transfer device. It features 2 CPUs, 4 GB of memory, and 8 TB of usable storage.

Amazon Machine Image (AMI)

a template that contains a software configuration (for example, an operating system, an application server, and applications). This preconfigured template saves time and avoids errors when configuring settings to create new instances. You specify an AMI when you launch an instance, and you can launch as many instances from the AMI as you need. You can also launch instances from as many different AMIs as you need.
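
For example, launching several identically configured instances from one AMI with boto3 (the AMI ID is hypothetical):

    import boto3

    ec2 = boto3.client("ec2")

    # One AMI, three instances with the same software configuration.
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # hypothetical AMI ID
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=3,
    )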

AWS Cost Explorer

a tool that enables you to visualize, understand, and manage your AWS costs and usage over time. AWS Cost Explorer includes a default report of the costs and usage for your top five cost-accruing AWS services. You can apply custom filters and groups to analyze your data. For example, you can view resource usage at the hourly level.

HTTP Level Attack

a type of distributed denial-of-service (DDoS) attack designed to overwhelm a targeted server with a flood of HTTP requests.

network access control list (ACL)

a virtual firewall that controls inbound and outbound traffic at the subnet level. Each AWS account includes a default network ACL. When configuring your VPC, you can use your account's default network ACL or create custom network ACLs. By default, your account's default network ACL allows all inbound and outbound traffic, but you can modify it by adding your own rules. For custom network ACLs, all inbound and outbound traffic is denied until you add rules to specify which traffic to allow. Additionally, all network ACLs have an explicit deny rule. This rule ensures that if a packet doesn't match any of the other rules on the list, the packet is denied.
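
A sketch of adding a numbered allow rule to a custom network ACL with boto3 (the ACL ID is hypothetical); lower rule numbers are evaluated first, and any unmatched packet hits the explicit deny:

    import boto3

    ec2 = boto3.client("ec2")

    # Allow inbound HTTPS; rule 100 is evaluated before higher-numbered rules.
    ec2.create_network_acl_entry(
        NetworkAclId="acl-0123456789abcdef0",  # hypothetical network ACL ID
        RuleNumber=100,
        Protocol="6",      # TCP
        RuleAction="allow",
        Egress=False,      # inbound rule
        CidrBlock="0.0.0.0/0",
        PortRange={"From": 443, "To": 443},
    )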

Security Group

a virtual firewall that controls inbound and outbound traffic for an Amazon EC2 instance. By default, a security group denies all inbound traffic and allows all outbound traffic. You can add custom rules to configure which traffic to allow or deny. For this example, suppose that you are in an apartment building with a door attendant who greets guests in the lobby. You can think of the guests as packets and the door attendant as a security group. As guests arrive, the door attendant checks a list to ensure they can enter the building. However, the door attendant does not check the list again when guests are exiting the building, because security groups are stateful: they remember previous decisions, so responses to allowed traffic pass automatically. If you have multiple Amazon EC2 instances within a subnet, you can associate them with the same security group or use different security groups for each instance.
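
And a comparable sketch for a security group with boto3 (the VPC ID is hypothetical); only an inbound rule is needed here because outbound traffic is allowed by default:

    import boto3

    ec2 = boto3.client("ec2")

    sg = ec2.create_security_group(
        GroupName="web-sg",
        Description="Allow inbound HTTPS",
        VpcId="vpc-0123456789abcdef0",  # hypothetical VPC ID
    )

    # Add an inbound rule permitting HTTPS from anywhere.
    ec2.authorize_security_group_ingress(
        GroupId=sg["GroupId"],
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        }],
    )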

Amazon Elastic MapReduce (Amazon EMR)

a web service that enables businesses, researchers, data analysts, and developers to easily and cost-effectively process vast amounts of data. It utilizes a hosted Hadoop framework running on the web-scale infrastructure of Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Simple Storage Service (Amazon S3).

Amazon CloudWatch

a web service that enables you to monitor and manage various metrics and configure alarm actions based on data from those metrics. CloudWatch uses metrics to represent the data points for your resources. AWS services send metrics to CloudWatch. CloudWatch then uses these metrics to create graphs automatically that show how performance has changed over time.
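
For example, an alarm on EC2 CPU utilization might be configured like this with boto3 (the instance ID and SNS topic ARN are hypothetical):

    import boto3

    cw = boto3.client("cloudwatch")

    # Alarm when average CPU stays above 80% for two 5-minute periods.
    cw.put_metric_alarm(
        AlarmName="high-cpu",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=2,
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
    )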

AWS Trusted Advisor

a web service that inspects your AWS environment and provides real-time recommendations in accordance with AWS best practices. Trusted Advisor compares its findings to AWS best practices in five categories: cost optimization, performance, security, fault tolerance, and service limits. For the checks in each category, Trusted Advisor offers a list of recommended actions and additional resources to learn more about AWS best practices.

AWS Management Console

a web-based interface for accessing and managing AWS services. You can quickly access recently used services and search for other services by name, keyword, or acronym. The console includes wizards and automated workflows that can simplify the process of completing tasks. You can also use the AWS Console mobile application to perform tasks such as monitoring resources, viewing alarms, and accessing billing information. Multiple identities can stay logged into the AWS Console mobile app at the same time.

AWS Service Catalog

allows organizations to create and manage catalogs of IT services that are approved for use on AWS. These IT services can include everything from virtual machine images, servers, software, and databases to complete multi-tier application architectures. AWS Service Catalog allows you to centrally manage commonly deployed IT services, and helps you achieve consistent governance and meet your compliance requirements, while enabling users to quickly deploy only the approved IT services they need.

AWS CloudEndure Disaster Recovery

an agent-based solution that lets you recover your environment from unexpected infrastructure or application outages, data corruption, ransomware, or other malicious attacks. AWS Elastic Disaster Recovery, the next generation of CloudEndure Disaster Recovery, is now the recommended service for disaster recovery to AWS.

Amazon Aurora

an enterprise-class relational database. It is compatible with MySQL and PostgreSQL relational databases. It is up to five times faster than standard MySQL databases and up to three times faster than standard PostgreSQL databases. Amazon Aurora helps to reduce your database costs by reducing unnecessary input/output (I/O) operations, while ensuring that your database resources remain reliable and available. Consider Amazon Aurora if your workloads require high availability. It replicates six copies of your data across three Availability Zones and continuously backs up your data to Amazon S3.

MongoDB

an open-source, document-oriented, non-relational database management system (DBMS).

Elastic Load Balancing (ELB)

automatically distributes incoming application traffic across multiple resources, such as Amazon EC2 instances. A load balancer acts as a single point of contact for all incoming web traffic to your Auto Scaling group. This means that as you add or remove Amazon EC2 instances in response to the amount of incoming traffic, these requests route to the load balancer first. Then, the requests spread across multiple resources that will handle them. For example, if you have multiple Amazon EC2 instances, Elastic Load Balancing distributes the workload across the multiple instances so that no single instance has to carry the bulk of it.

AWS Shield Standard

automatically protects all AWS customers at no cost. It protects your AWS resources from the most common, frequently occurring types of DDoS attacks. As network traffic comes into your applications, AWS Shield Standard uses a variety of analysis techniques to detect malicious traffic in real time and automatically mitigates it.

Amazon WorkSpaces

a cloud-based virtual desktop that can act as a replacement for a traditional desktop; your employees get a fast, responsive desktop of their choice that they can access anywhere, anytime, from any supported device.

Public subnet

contains resources that need to be accessible by the public, such as an online store's website.

Private Subnet

contains resources that should be accessible only through your private network, such as a database that contains customers' personal information and order histories. In a VPC, subnets can communicate with each other. For example, you might have an application that involves Amazon EC2 instances in a public subnet communicating with databases that are located in a private subnet.

Storage optimized instances

designed for workloads that require high, sequential read and write access to large datasets on local storage. Examples of workloads suitable for storage optimized instances include distributed file systems, data warehousing applications, and high-frequency online transaction processing (OLTP) systems. In computing, the term input/output operations per second (IOPS) is a metric that measures the performance of a storage device. It indicates how many different input or output operations a device can perform in one second. Storage optimized instances are designed to deliver tens of thousands of low-latency, random IOPS to applications.

Amazon EC2 Savings Plans

enable you to reduce your compute costs by committing to a consistent amount of compute usage for a 1-year or 3-year term. This term commitment results in savings of up to 72% over On-Demand costs. Any usage up to the commitment is charged at the discounted Savings Plan rate (for example, $10 an hour). Any usage beyond the commitment is charged at regular On-Demand rates.
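
A simplified sketch of the billing arithmetic described above; all rates here are hypothetical:

    # Hypothetical rates: a $10/hour commitment, with On-Demand costing
    # 1.6x the discounted Savings Plan rate.
    commitment = 10.0        # USD/hour billed at the Savings Plan rate
    usage_at_sp_rate = 13.0  # USD/hour the workload would cost at the SP rate
    od_multiplier = 1.6

    covered = min(usage_at_sp_rate, commitment)        # discounted portion
    overage = max(usage_at_sp_rate - commitment, 0.0)  # billed at On-Demand
    hourly_bill = covered + overage * od_multiplier
    print(f"${hourly_bill:.2f}/hour")                  # $14.80/hour in this example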

Amazon S3 Transfer Acceleration

enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket. Transfer Acceleration takes advantage of Amazon CloudFront's globally distributed edge locations. As the data arrives at an edge location, data is routed to Amazon S3 over an optimized network path.

AWS Free Tier

enables you to begin using certain services without having to worry about incurring costs for the specified period. Three types of offers are available: Always Free, 12 Months Free, and Trials.

AWS Command Line Interface (AWS CLI)

enables you to control multiple AWS services directly from the command line within one tool. AWS CLI is available for users on Windows, macOS, and Linux. By using AWS CLI, you can automate the actions that your services and applications perform through scripts. For example, you can use commands to launch an Amazon EC2 instance, connect an Amazon EC2 instance to a specific Auto Scaling group, and more.

AWS Identity and Access Management (IAM)

enables you to manage access to AWS services and resources securely. IAM gives you the flexibility to configure access based on your company's specific operational and security needs. You do this by using a combination of IAM features: IAM users, groups, and roles; IAM policies; and multi-factor authentication (MFA).
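
A minimal sketch of two of those features with boto3: creating a user and attaching a managed policy (the user name is hypothetical; ReadOnlyAccess is an AWS managed policy):

    import boto3

    iam = boto3.client("iam")

    iam.create_user(UserName="analyst-1")  # hypothetical user name

    # Policies grant (or deny) permissions; this one grants read-only access.
    iam.attach_user_policy(
        UserName="analyst-1",
        PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",
    )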

AWS Database Migration Service (AWS DMS)

enables you to migrate relational databases, nonrelational databases, and other types of data stores. With AWS DMS, you move data between a source database and a target database. The source and target databases can be of the same type or different types. During the migration, your source database remains operational, reducing downtime for any applications that rely on the database.

AWS Key Management Service (AWS KMS)

enables you to perform encryption operations through the use of cryptographic keys. A cryptographic key is a random string of digits used for locking (encrypting) and unlocking (decrypting) data. You can use AWS KMS to create, manage, and use cryptographic keys. You can also control the use of keys across a wide range of services and in your applications.
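
For example, encrypting and decrypting a small payload with boto3 (the key alias is hypothetical; KMS resolves it to the underlying key):

    import boto3

    kms = boto3.client("kms")

    # Encrypt under a customer managed key referenced by alias.
    ciphertext = kms.encrypt(
        KeyId="alias/app-data-key",  # hypothetical key alias
        Plaintext=b"sensitive value",
    )["CiphertextBlob"]

    # Decrypt; KMS identifies the key from metadata in the ciphertext.
    plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]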

Basic Support

free for all AWS customers. It includes access to whitepapers, documentation, and support communities. With Basic Support, you can also contact AWS for billing questions and service limit increases. With Basic Support, you have access to a limited selection of AWS Trusted Advisor checks. Additionally, you can use the AWS Personal Health Dashboard, a tool that provides alerts and remediation guidance when AWS is experiencing events that may affect you.

Amazon Inspector

helps to improve the security and compliance of applications by running automated security assessments. It checks applications for security vulnerabilities and deviations from security best practices, such as open access to Amazon EC2 instances and installations of vulnerable software versions. After Amazon Inspector has performed an assessment, it provides you with a list of security findings. The list is prioritized by severity level and includes a detailed description of each security issue and a recommendation for how to fix it. However, AWS does not guarantee that following the provided recommendations resolves every potential security issue. Under the shared responsibility model, customers are responsible for the security of their applications, processes, and tools that run on AWS services.

Amazon EMR

helps you analyze and process vast amounts of data by distributing the computational work across a cluster of virtual servers running in the AWS Cloud. The cluster is managed using an open-source framework called Hadoop. Amazon EMR lets you focus on crunching or analyzing your data without having to worry about time-consuming setup, management, and tuning of Hadoop clusters or the compute capacity they rely on.

AWS Secrets Manager

helps you protect secrets needed to access your applications, services, and IT resources. The service enables you to easily store, rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle.
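
Retrieving a stored secret at runtime might look like this with boto3 (the secret name is hypothetical):

    import boto3

    sm = boto3.client("secretsmanager")

    # Fetch credentials at runtime instead of hardcoding them.
    secret = sm.get_secret_value(SecretId="prod/app/db-credentials")
    db_credentials = secret["SecretString"]  # often a JSON string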

AWS Well-Architected Framework

helps you understand how to design and operate reliable, secure, efficient, and cost-effective systems in the AWS Cloud. It provides a way for you to consistently measure your architecture against best practices and design principles and identify areas for improvement.

Compute optimized instances

ideal for compute-bound applications that benefit from high-performance processors, such as high-performance web servers, compute-intensive application servers, and dedicated gaming servers. You can also use compute optimized instances for batch processing workloads that require processing many transactions in a single group.

Amazon EC2 pricing - On-demand

ideal for short-term, irregular workloads that cannot be interrupted. No upfront costs or minimum contracts apply. The instances run continuously until you stop them, and you pay for only the compute time you use. Sample use cases for On-Demand Instances include developing and testing applications and running applications that have unpredictable usage patterns. On-Demand Instances are not recommended for workloads that last a year or longer because these workloads can experience greater cost savings using Reserved Instances.

Spot Instances

ideal for workloads with flexible start and end times, or that can withstand interruptions. Spot Instances use unused Amazon EC2 computing capacity and offer cost savings of up to 90% off On-Demand prices.
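
A sketch of requesting Spot capacity through the standard launch API with boto3 (the AMI ID is hypothetical):

    import boto3

    ec2 = boto3.client("ec2")

    # The market options flag requests spare capacity at the Spot price.
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # hypothetical AMI ID
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        InstanceMarketOptions={"MarketType": "spot"},
    )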

Scalability

involves beginning with only the resources you need and designing your architecture to automatically respond to changing demand by scaling out or in. As a result, you pay for only the resources you use. You don't have to worry about a lack of computing capacity to meet your customers' needs.

AWS Lambda

is a compute service that lets you run code without provisioning or managing servers. It executes your code only when needed and scales automatically, from a few requests per day to thousands per second.
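
A minimal handler illustrates the model: you supply only the function body, and Lambda invokes it once per event (the file and function names follow the common Python convention):

    # lambda_function.py
    import json

    def lambda_handler(event, context):
        # Lambda calls this for each invocation; no servers to provision.
        name = event.get("name", "world")
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"Hello, {name}!"}),
        }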

AWS Snowball

is a service that accelerates transferring large amounts of data into and out of AWS using physical storage appliances, bypassing the Internet. AWS Snowball offers two types of devices:
- Snowball Edge Storage Optimized devices are well suited for large-scale data migrations and recurring transfer workflows, in addition to local computing with higher capacity needs. Storage: 80 TB of hard disk drive (HDD) capacity for block volumes and Amazon S3-compatible object storage, and 1 TB of SATA solid state drive (SSD) for block volumes. Compute: 40 vCPUs and 80 GiB of memory to support Amazon EC2 sbe1 instances (equivalent to C5).
- Snowball Edge Compute Optimized devices provide powerful computing resources for use cases such as machine learning, full motion video analysis, analytics, and local computing stacks. Storage: 42 TB of usable HDD capacity for Amazon S3-compatible object storage or Amazon EBS-compatible block volumes, and 7.68 TB of usable NVMe SSD capacity for Amazon EBS-compatible block volumes. Compute: 52 vCPUs, 208 GiB of memory, and an optional NVIDIA Tesla V100 GPU. Devices run Amazon EC2 sbe-c and sbe-g instances, which are equivalent to C5, M5a, G3, and P3 instances.

AWS Snowmobile

is an exabyte-scale data transfer service used to move extremely large amounts of data to AWS. You can transfer up to 100 PB per Snowmobile, a 45-foot-long ruggedized shipping container pulled by a semi-trailer truck.

AWS Federation

With federation, you can use single sign-on (SSO) to access your AWS accounts using credentials from your corporate directory.

AWS Pricing Calculator

lets you explore AWS services and create an estimate for the cost of your use cases on AWS. In the AWS Pricing Calculator, you can enter details such as the kind of operating system you need, memory requirements, and input/output (I/O) requirements. By using the AWS Pricing Calculator, you can review an estimated comparison of different EC2 instance types across AWS Regions.

Amazon Virtual Private Cloud (Amazon VPC)

lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define. Within a virtual private cloud (VPC), you can organize your resources into subnets. A subnet is a section of a VPC that can contain resources such as Amazon EC2 instances.
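
For example, carving a VPC into subnets with boto3 (the CIDR ranges and Availability Zone are hypothetical):

    import boto3

    ec2 = boto3.client("ec2")

    # A /16 VPC provides roughly 65,000 private addresses to divide among subnets.
    vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

    # A subnet is a slice of the VPC's address range in one AZ.
    ec2.create_subnet(
        VpcId=vpc_id,
        CidrBlock="10.0.1.0/24",
        AvailabilityZone="us-east-1a",
    )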

software development kits (SDKs)

make it easier for you to use AWS services through an API designed for your programming language or platform. SDKs enable you to use AWS services with your existing applications or create entirely new applications that will run on AWS. To help you get started with using SDKs, AWS provides documentation and sample code for each supported programming language. Supported programming languages include C++, Java, .NET, and more.
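
With the Python SDK (boto3), for instance, listing your S3 buckets takes only a few lines:

    import boto3

    s3 = boto3.client("s3")

    # Each service exposes a client whose methods map to API actions.
    for bucket in s3.list_buckets()["Buckets"]:
        print(bucket["Name"])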

Dedicated Hosts

physical servers with Amazon EC2 instance capacity that is fully dedicated to your use. Of all the Amazon EC2 options that were covered, Dedicated Hosts are the most expensive.

General Purpose instances

provide a balance of compute, memory, and networking resources. You can use them for a variety of workloads, such as: - application servers; - gaming servers; - backend servers for enterprise applications; - small and medium databases; Suppose that you have an application in which the resource needs for compute, memory, and networking are roughly equivalent. You might consider running it on a general purpose instance because the application does not require optimization in any single resource area.

Containers

provide you with a standard way to package your application's code and dependencies into a single object. You can also use containers for processes and workflows in which there are essential requirements for security, reliability, and scalability.

AWS CloudTrail

records API calls for your account. The recorded information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, and more. You can think of CloudTrail as a "trail" of breadcrumbs (or a log of actions) that someone has left behind them.
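
A sketch of querying that trail of API activity with boto3:

    import boto3

    ct = boto3.client("cloudtrail")

    # Find recent RunInstances calls: who launched EC2 instances, and when.
    resp = ct.lookup_events(
        LookupAttributes=[
            {"AttributeKey": "EventName", "AttributeValue": "RunInstances"}
        ],
        MaxResults=10,
    )
    for event in resp["Events"]:
        print(event["EventTime"], event.get("Username"), event["EventName"])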

Penetration testing

the practice of testing a network or web application to find security vulnerabilities that an attacker could exploit.

AWS Migration Hub

used to track the progress of application migrations to AWS.

AWS Site-to-Site VPN

utilizes Internet Protocol Security (IPSec) to establish encrypted connectivity between your on-premises network and AWS over the Internet. With AWS Client VPN, your users can access AWS or on-premises resources from any location using a secure TLS connection.

