AWS Cloud Practitioner Certification

Workload

A set of components that together deliver business value. The workload is usually the level of detail that business and technology leaders communicate about. Component - the code, configuration, and AWS resources that together deliver against a requirement. A component is often the unit of technical ownership, and is decoupled from other components.

High availability

A system is highly available when it can withstand the failure of an individual component or multiple components, such as hard disks, servers, and network links. Introduce redundancy (standby or active). Detect failure with Elastic Load Balancing and Amazon Route 53 by configuring health checks, and mask failure by routing traffic to healthy endpoints. Replace unhealthy nodes automatically using Auto Scaling, the Amazon EC2 auto-recovery feature, or services such as AWS Elastic Beanstalk. Design good health checks with ELB.

Design Principles for Performance Efficiency

1) Democratize advanced technologies (consume services while focusing on product development rather than resource provisioning and management) 2) Go global in minutes (easily deploy your system in multiple Regions) 3) Use serverless architectures 4) Experiment more often 5) Mechanical sympathy

Best Practice Areas for Cost Optimization in the cloud

1) Expenditure Awareness 2) Cost-Effective Resources 3) Matching supply and demand 4) Optimizing Over Time

Fundamental drivers of cost with AWS

1) Compute 2) Storage 3) Outbound data transfer. In most cases there is no charge for inbound data transfer or for data transfer between AWS services within the same Region. Outbound data transfer is aggregated across services and then charged at the outbound data transfer rate. The more data you transfer, the less you pay per GB.

Reliability

One of the 5 Pillars of the Well-Architected Framework. The ability of a system to recover from infrastructure or service disruptions, dynamically acquire computing resources to meet demand, and mitigate disruptions such as misconfigurations or transient network issues.

Security

One of the 5 Pillars of the Well-Architected Framework. The ability to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies.

Operational Excellence

One of the 5 Pillars of the Well-Architected Framework. The ability to run and monitor systems to deliver business value and to continually improve supporting processes and procedures.

Cost Optimization

One of the 5 Pillars of the Well-Architected Framework. The ability to run systems to deliver business value at the lowest price point.

Performance Efficiency

One of the 5 Pillars of the Well-Architected Framework. The ability to use computing resources efficiently to meet system requirements, and to maintain that efficiency as demand changes and technologies evolve.

Design Principles of Cost Optimization

1) Adopt a consumption model (pay for computing resources and increase/decrease usage depending on business requirements) 2) Measure overall efficiency (measure the business output of the workload and the cost associated with delivering it) 3) Stop spending money on data center operations 4) Analyze and attribute expenditure 5) Use managed and application-level services to reduce cost of ownership

Considerations for estimating Amazon EC2 costs

1) Clock hours of server time (resources incur charges when they are running) 2) Instance type 3) Pricing model 4) Number of instances 5) Load balancing (the number of hours the elastic load balancer runs and the amount of data it processes contribute to the monthly cost) 6) Detailed monitoring (CloudWatch) (fixed monthly rate) 7) Auto Scaling (adjusts the number of EC2 instances in your deployment according to conditions) (free service) 8) Elastic IP addresses (one EIP with a running instance at no charge) 9) Operating systems and software packages (operating system prices are included in instance prices, unless you choose to bring your own licenses)
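
A rough sketch of how these factors combine into a monthly estimate, using purely illustrative rates (not real AWS prices; check the AWS Pricing Calculator for actual figures):

```python
# Rough monthly EC2 cost estimate; every rate below is a made-up placeholder, not an AWS price.
HOURS_PER_MONTH = 730

instance_hourly_rate = 0.10              # hypothetical On-Demand rate per instance-hour
instance_count = 3
elb_hourly_rate = 0.025                  # hypothetical load balancer hourly rate
elb_rate_per_gb = 0.008                  # hypothetical per-GB data processing rate
data_processed_gb = 500
detailed_monitoring_per_instance = 2.10  # hypothetical fixed monthly CloudWatch rate

compute = instance_hourly_rate * instance_count * HOURS_PER_MONTH
load_balancing = elb_hourly_rate * HOURS_PER_MONTH + elb_rate_per_gb * data_processed_gb
monitoring = detailed_monitoring_per_instance * instance_count

print(f"Estimated monthly total: ${compute + load_balancing + monitoring:,.2f}")
```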

Cloud Computing Deployment Models

1) Cloud 2) Hybrid 3) On-premises (private cloud)

Key AWS Services for Cost Optimization

1) Expenditure Awareness: AWS Cost Explorer 2) Cost-Effective Resources: Cost Explorer for Reserved Instance recommendations; Amazon CloudWatch and Trusted Advisor to help right-size your resources; Amazon Aurora on RDS to remove database licensing costs; AWS Direct Connect and Amazon CloudFront to optimize data transfer 3) Matching supply and demand: Auto Scaling 4) Optimizing Over Time: AWS News Blog and the What's New section; AWS Trusted Advisor finds opportunities to save money

Best practice areas for reliability in the cloud

1) Foundations 2) Change Management 3) Failure Management

Best practice areas for security in the cloud

1) Identity and Access Management 2) Detective Controls 3) Infrastructure Protection 4) Data Protection 5) Incident Response

Key AWS Services for Security Best Practice Areas

1) Identity and Access Management: AWS Identity and Access Management (IAM) service (with Multi-Factor Authentication) 2) Detective Controls: AWS CloudTrail (records API calls), AWS Config (provides a detailed inventory of AWS resources and configuration), Amazon GuardDuty (threat detection service), Amazon CloudWatch (monitoring service for AWS resources; can trigger events to automate security responses) 3) Infrastructure Protection: Amazon Virtual Private Cloud (VPC) (enables you to launch resources into a virtual network), Amazon CloudFront (global content delivery network that securely delivers data), AWS Shield (integrates with CloudFront for DDoS mitigation), AWS Web Application Firewall (WAF) (deployed on either Amazon CloudFront or an Application Load Balancer) 4) Data Protection: Elastic Load Balancing (ELB), Amazon Elastic Block Store (Amazon EBS), Amazon S3, and Amazon Relational Database Service (Amazon RDS) include encryption capabilities to protect your data in transit and at rest. Amazon Macie automatically discovers, classifies, and protects sensitive data. AWS Key Management Service (AWS KMS) makes it easy for you to create and control keys used for encryption 5) Incident Response: AWS Identity and Access Management (IAM) can be used to grant appropriate authorization to incident response teams and response tools. AWS CloudFormation can be used to create a trusted environment or clean room for conducting investigations. Amazon CloudWatch Events allows you to create rules that trigger automated responses, including invoking AWS Lambda functions.

Design Principles of Security

1) Implement a strong identity foundation. Implement the principle of least privilege and enforce separation of duties with appropriate authorization for each interaction with your AWS resources. Centralize privilege management and reduce or eliminate long-term credentials 2) Enable traceability. Integrate logs and metrics with systems to automatically respond and take action 3) Apply security at all layers 4) Automate security best practices 5) Protect data in transit and at rest 6) Keep people away from data 7) Prepare for security events

Cloud Computing Models

1) Infrastructure as a Service (IaaS) - basic building blocks for cloud IT. Provides you with the highest level of flexibility and management control over your IT resources. Most similar to existing IT departments. 2) Platform as a Service (PaaS) - removes the need for your organization to manage the underlying infrastructure (hardware and operating systems) and allows you to focus on the deployment and management of your applications. More efficient because you do not worry about resource procurement, capacity planning, software maintenance, patching, or any of the other undifferentiated heavy lifting involved in running your application. 3) Software as a Service (SaaS) - a complete product that is run and managed by the service provider. End-user applications.

Ways to pay for Amazon EC2

1) On Demand Instances 2) Spot Instances 3) Reserved Instances 4) Dedicated Hosts

5 Pillars of Well Architected Framework

1) Operational Excellence 2) Security 3) Reliability 4) Performance Efficiency 5) Cost Optimization

Design Principles of Operational Excellence

1) Perform operations as code. You can define your entire workload (applications, infrastructure) as code and update it with code. You can implement your operations procedures as code and automate their execution by triggering them in response to events. This limits human error and enables consistent responses to events. 2) Annotate documentation. You can automate the creation of annotated documentation after every build. Annotated documentation can be used by people and systems. Use annotations as an input to your operations code. 3) Make frequent, small, reversible changes. Allow components to be updated regularly 4) Refine operations procedures frequently. As you evolve your workload, evolve your procedures appropriately. Set up game days to review and validate procedures 5) Anticipate failure. Perform "pre-mortem" exercises to identify potential sources of failure 6) Learn from all operational failures. Share what is learned across teams and through the entire organization

Best practice areas for operational excellence in the cloud

1) Prepare - common standards, validation, practice failure 2) Operate - health of workload and operations 3) Evolve - dedicate work cycles to making continuous incremental improvements

AWS Design Principles

1) Scalability 2) Disposable resources 3) Automation 4) Loose coupling 5) Services, not servers (managed services instead of servers) 6) Flexible data storage options

Best Practice Areas for Performance Efficiency in the cloud

1) Selection (compute, storage, database, network options) 2) Review (evaluate new releases) 3) Monitoring 4) Tradeoffs

General Design Principles of the Well-Architected Framework

1) Stop guessing your capacity needs 2) Test systems at production scale 3) Automate to make architectural experimentation easier 4) Allow for evolutionary architectures 5) Drive architectures using data 6) Improve through game days (game days - simulate events in production)

Design Principles for Reliability in the Cloud

1) Test recovery procedures 2) Automatically recover from failure 3) Scale horizontally to increase aggregate system availability 4) Stop guessing capacity 5) Manage change using automation

Data Lake

A data lake is an architectural approach that allows you to store massive amounts of data in a central location so that it's readily available to be categorized, processed, analyzed, and consumed by diverse groups within your organization. Since data can be stored as-is, you do not have to convert it to a predefined schema, and you no longer need to know what questions to ask about your data beforehand. This enables you to select the correct technology to meet your specific analytical requirements.

Data Warehouse

A data warehouse is a specialized type of relational database, which is optimized for analysis and reporting of large amounts of data. It can be used to combine transactional data from disparate sources (such as user behavior in a web application, data from your finance and billing system, or your customer relationship management (CRM) system) to make it available for analysis and decision-making. Amazon Redshift is a managed data warehouse service that is designed to operate at less than a tenth the cost of traditional solutions.

Graph Databases

A graph database uses graph structures for queries. A graph is defined as consisting of edges (relationships), which directly relate to nodes (data entities) in the store. The relationships enable data in the store to be linked together directly, which allows for the fast retrieval of complex hierarchical structures that are difficult to model in relational systems. For this reason, graph databases are purposely built to store and navigate relationships and are typically used in use cases like social networking, recommendation engines, and fraud detection, in which you need to be able to create relationships between data and quickly query these relationships. Amazon Neptune is a fully managed graph database service.

Search Service

A search service can be used to index and search both structured data and free text, and can support functionality that is not available in other databases, such as customizable result ranking, faceting for filtering, synonyms, and stemming. On AWS, you can choose between Amazon CloudSearch and Amazon Elasticsearch Service (Amazon ES). Amazon CloudSearch is a managed service that requires little configuration and will scale automatically. Amazon ES offers an open-source API and gives you more control over the configuration details. Amazon ES has also evolved to become more than just a search solution. It is often used as an analytics engine for use cases such as log analytics, real-time application monitoring, and clickstream analytics.

AWS Elastic Beanstalk

AWS Elastic Beanstalk follows the hybrid model (golden images plus bootstrapping). It provides preconfigured runtime environments—each initiated from its own AMI—but allows you to run bootstrap actions through .ebextensions configuration files and to configure environment variables to parameterize the environment differences.

AWS Management Console

Access and manage AWS services through this user interface

Key AWS Service for Performance Efficiency

Amazon CloudWatch - monitors your resources and systems, providing visibility into your overall performance and operational health 1) Selection: -Compute: Auto Scaling -Storage: Amazon Elastic Block Store (EBS) provides a wide range of storage options (SSD and provisioned input/output operations per second (PIOPS)) that allow you to optimize for your use case. Amazon S3 provides serverless content delivery, and S3 Transfer Acceleration enables fast, easy, and secure transfer of files over long distances. -Database: Amazon Relational Database Service (RDS) provides a wide range of database features (such as PIOPS and read replicas) that allow you to optimize for your use case. Amazon DynamoDB provides single-digit millisecond latency -Network: Amazon Route 53 provides latency-based routing. Amazon VPC endpoints and AWS Direct Connect can reduce network distance or jitter 2) Review: AWS Blog and the What's New section on the AWS website 3) Monitoring: Amazon CloudWatch provides metrics, alarms, and notifications that you can use with AWS Lambda to trigger actions 4) Tradeoffs: Amazon ElastiCache, Amazon CloudFront, and AWS Snowball are services that can improve performance. Read replicas in Amazon Relational Database Service can allow you to scale read-heavy workloads

Stateless Components

Amazon DynamoDB - store session information in a database to avoid having to keep it on an individual application server. When you need to store large files, place them in a shared storage layer such as Amazon Simple Storage Service (Amazon S3) or Amazon Elastic File System (Amazon EFS) to avoid introducing stateful components. A complex multi-step workflow is another example where you must track the current state of each execution; you can use AWS Step Functions to centrally store execution history and make these workloads stateless. Databases are stateful.
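
A minimal sketch of the session-store idea using boto3, assuming a DynamoDB table named sessions with a session_id partition key already exists (both names are hypothetical):

```python
import boto3

# Assumes an existing DynamoDB table "sessions" with partition key "session_id" (hypothetical names).
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("sessions")

def save_session(session_id, data):
    # Keep session state in DynamoDB rather than in a specific server's memory or disk.
    table.put_item(Item={"session_id": session_id, **data})

def load_session(session_id):
    # Any instance can load the session, so any instance can serve the next request.
    return table.get_item(Key={"session_id": session_id}).get("Item")

save_session("abc123", {"user": "alice", "cart_items": 2})
print(load_session("abc123"))
```

Because no instance holds the session locally, instances can be added or terminated freely and any of them can handle any request.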

Amazon DynamoDB Accelerator

Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for DynamoDB that delivers performance improvements from milliseconds to microseconds, even at high throughput. DAX adds in-memory acceleration to your DynamoDB tables without requiring you to manage cache invalidation, data population, or cluster management.

Amazon Elastic Block Store (EBS)

Amazon Elastic Block Store (EBS) provides block-level storage volumes for use with Amazon EC2 instances. Amazon EBS volumes are off-instance storage that persists independently from the life of an instance. They are analogous to virtual disks in the cloud. Amazon EBS provides two volume types: • SSD-backed volumes are optimized for transactional workloads involving frequent read/write operations with small I/O size, where the dominant performance attribute is IOPS. • HDD-backed volumes are optimized for large streaming workloads where the dominant performance attribute is throughput. Price Factors: 1) Volumes 2) Snapshots 3) Data transfer

Amazon Elastic Compute Cloud (Amazon EC2)

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers. The simple web service interface of Amazon EC2 allows you to obtain and configure capacity with minimal friction, and gives you complete control of your computing resources. Amazon EC2 reduces the time required to obtain and boot new server instances to minutes, allowing you to quickly scale capacity, both up and down, as your computing requirements change.

Amazon S3 Glacier

Amazon Glacier is a secure, durable, and extremely low-cost cloud storage service for data archiving and long-term backup. It is designed to deliver 99.999999999 percent durability, with comprehensive security and compliance capabilities that can help meet even the most stringent regulatory requirements. Amazon Glacier provides query-in-place functionality, allowing you to run powerful analytics directly on your archived data at rest. Starting at $0.004 per GB per month, Amazon Glacier allows you to archive large amounts of data at a very low cost. You pay only for what you need, with no minimum commitments or upfront fees. Other factors determining pricing include requests and data transfers out of Amazon Glacier (incoming transfers are free).
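
One common way to land archive data in Glacier is through the S3 GLACIER storage class (Glacier also exposes its own vault/archive API); a minimal boto3 sketch with hypothetical bucket and object names:

```python
import boto3

s3 = boto3.client("s3")

# Upload the object directly into the Glacier storage class (bucket/key names are placeholders).
with open("db-dump.tar.gz", "rb") as archive:
    s3.put_object(
        Bucket="my-archive-bucket",
        Key="backups/2024/db-dump.tar.gz",
        Body=archive,
        StorageClass="GLACIER",
    )
```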

Amazon Redshift Scalability

Amazon Redshift achieves efficient storage and optimum query performance through a combination of massively parallel processing (MPP), columnar data storage, and targeted data compression encoding schemes. It is particularly suited to analytic and reporting workloads against very large data sets. The Amazon Redshift MPP architecture enables you to increase performance by increasing the number of nodes in your data warehouse cluster. Amazon Redshift Spectrum enables Amazon Redshift SQL queries against exabytes of data in Amazon S3, which extends the analytic capabilities of Amazon Redshift beyond data stored on local disks in the data warehouse to unstructured data, without the need to load or transform data.

Amazon Simple Storage Service

Amazon S3 is object storage built to store and retrieve any amount of data from anywhere: websites, mobile apps, corporate applications, and data from IoT sensors or devices. It is designed to deliver 99.999999999 percent durability, and stores data for millions of applications used by market leaders in every industry. As with other AWS services, Amazon S3 provides the simplicity and cost-effectiveness of pay-as-you-go pricing. Price Factors: 1) Storage Class (standard versus standard infrequent access) 2) Storage 3) Requests 4) Data transfer

Amazon Athena

Analytics Tool. An interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run.
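
A minimal boto3 sketch of running an Athena query; the database, table, and results bucket names are hypothetical:

```python
import boto3

athena = boto3.client("athena")

# Start a standard SQL query against data in S3; results are written to the output location.
response = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) AS hits FROM access_logs GROUP BY status",
    QueryExecutionContext={"Database": "web_analytics"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
print("Query execution ID:", response["QueryExecutionId"])
```

There is no infrastructure to provision; once the query finishes, you can poll get_query_execution or fetch rows with get_query_results.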

AWS Glue

Analytics Tool. Data Catalog. Athena is integrated out of the box with the AWS Glue Data Catalog, allowing you to create a unified metadata repository across various services, crawl data sources to discover schemas and populate your catalog with new and modified table and partition definitions, and maintain schema versioning. You can also use Glue's fully managed ETL capabilities to transform data or convert it into columnar formats to optimize cost and improve performance.

Loose Coupling

As application complexity increases, a desirable attribute of an IT system is that it can be broken into smaller, loosely coupled components. This means that IT systems should be designed in a way that reduces interdependencies—a change or a failure in one component should not cascade to other components. Loose coupling is a crucial element if you want to take advantage of the elasticity of cloud computing, where new resources can be launched or terminated at any point in time (which requires implementing service discovery). Ex: Asynchronous integration. This model is suitable for any interaction that does not need an immediate response and where an acknowledgement that a request has been registered will suffice. It involves one component that generates events and another that consumes them. The two components do not integrate through direct point-to-point interaction but usually through an intermediate durable storage layer, such as an SQS queue or a streaming data platform such as Amazon Kinesis, cascading Lambda events, AWS Step Functions, or Amazon Simple Workflow Service.
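
A minimal sketch of the producing side of asynchronous integration through SQS, assuming a hypothetical queue named order-events; the producer only needs the acknowledgement that the message was registered (the consuming side appears under the pull-model card below):

```python
import json
import boto3

sqs = boto3.client("sqs")

# Look up the queue and publish an event; the producer does not wait for processing.
queue_url = sqs.get_queue_url(QueueName="order-events")["QueueUrl"]
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps({"order_id": "1234", "action": "ship"}),
)
```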

Asynchronous replication

Asynchronous replication decouples the primary node from its replicas at the expense of introducing replication lag. This means that changes on the primary node are not immediately reflected on its replicas. Asynchronous replicas are used to horizontally scale the system's read capacity for queries that can tolerate that replication lag. It can also be used to increase data durability when some loss of recent transactions can be tolerated during a failover. For example, you can maintain an asynchronous replica of a database in a separate AWS Region as a disaster recovery solution.

Service Discovery

Service discovery can be done through Elastic Load Balancing (ELB). Because each load balancer gets its own hostname, you can consume a service through a stable endpoint. This can be combined with DNS and private Amazon Route 53 zones, so that the particular load balancer's endpoint can be abstracted and modified at any time.

Golden Images

Certain AWS resource types, such as EC2 instances, Amazon RDS DB instances, and Amazon Elastic Block Store (Amazon EBS) volumes, can be launched from a golden image, which is a snapshot of a particular state of that resource. When compared to the bootstrapping approach, a golden image results in faster start times and removes dependencies to configuration services or third-party repositories. This is important in auto-scaled environments where you want to be able to quickly and reliably launch additional resources as a response to demand changes. Items that do not change often or that introduce external dependencies will typically be part of your golden image.
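
A minimal boto3 sketch of capturing and reusing a golden image; the instance ID and AMI name are hypothetical placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Capture a golden image from an already-configured instance.
image = ec2.create_image(InstanceId="i-0123456789abcdef0", Name="web-golden-2024-01")
ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])

# Later launches start from the AMI directly, with no bootstrapping at boot time.
ec2.run_instances(
    ImageId=image["ImageId"],
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
```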

Configuration drift

Changes and software patches applied through time can result in untested and heterogeneous configurations across different environments. You can solve this problem with an immutable infrastructure pattern. With this approach, a server—once launched—is never updated. Instead, when there is a problem or need for an update, the problem server is replaced with a new server that has the latest configuration. This enables resources to always be in a consistent (and tested) state, and makes rollbacks easier to perform. This is more easily supported with stateless architectures.

Edge Caching

Copies of static content (images, CSS files, or streaming pre-recorded video) and dynamic content (responsive HTML, live video) can be cached at an Amazon CloudFront edge location. CloudFront is a CDN with multiple points of presence around the world. Edge caching allows content to be served by infrastructure that is closer to viewers, which lowers latency and gives you the high, sustained data transfer rates necessary to deliver large popular objects to end users at scale.

AWS Command Line Interface (CLI)

Unified tool to manage your AWS services. Can control multiple AWS services from the command line and automate them through scripts

Software Development Kits

Simplify using AWS services in your applications with an Application Program Interface (API) tailored to your programming language or platform

Distributed Processing

Use cases that involve the processing of very large amounts of data—anything that can't be handled by a single compute resource in a timely manner—require a distributed processing approach. By dividing a task and its data into many small fragments of work, you can execute them in parallel across a set of compute resources.

Containers

Docker—an open-source technology that allows you to build and deploy distributed applications inside software containers. AWS Elastic Beanstalk, Amazon Elastic Container Service (Amazon ECS), and AWS Fargate let you deploy and manage multiple containers across a cluster of EC2 instances. You can build golden Docker images and use Amazon Elastic Container Registry (Amazon ECR) to manage them.

Availability Zones

Each AWS Region contains multiple distinct locations, or Availability Zones. Each Availability Zone is engineered to be independent from failures in other Availability Zones. An Availability Zone is a data center, and in some cases, an Availability Zone consists of multiple data centers. Availability Zones within a Region provide inexpensive, low-latency network connectivity to other zones in the same Region. This allows you to replicate your data across data centers in a synchronous manner so that failover can be automated and be transparent for your users.

Implement Session Affinity

For HTTP and HTTPS traffic, you can use the sticky sessions feature of an Application Load Balancer to bind a user's session to a specific instance. With this feature, an Application Load Balancer will try to use the same server for that user for the duration of the session.
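
A minimal boto3 sketch of enabling sticky sessions on an existing Application Load Balancer target group; the target group ARN and cookie duration are hypothetical:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Turn on load-balancer-generated cookie stickiness for one hour per session.
elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/abc123",
    Attributes=[
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "lb_cookie"},
        {"Key": "stickiness.lb_cookie.duration_seconds", "Value": "3600"},
    ],
)
```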

Pull model to distribute the workload to multiple nodes in your environment

Instead of a load balancing solution, you can implement a pull model for asynchronous, event-driven workloads. In a pull model, tasks that need to be performed or data that needs to be processed can be stored as messages in a queue using Amazon Simple Queue Service (Amazon SQS) or as a streaming data solution such as Amazon Kinesis. Multiple compute resources can then pull and consume those messages, processing them in a distributed fashion.
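
A minimal sketch of a worker in the pull model, long-polling the same hypothetical order-events queue used in the loose-coupling example; each worker instance runs this loop independently:

```python
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.get_queue_url(QueueName="order-events")["QueueUrl"]

def process(body):
    print("processing:", body)  # placeholder for real work

# Long-poll for up to 20 seconds, handle each message, then delete it from the queue.
while True:
    resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        process(msg["Body"])
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```

Scaling out is then just a matter of running more copies of this worker; the queue distributes the messages among them.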

NoSQL databases

NoSQL databases trade some of the query and transaction capabilities of relational databases for a more flexible data model that seamlessly scales horizontally. NoSQL databases use a variety of data models, including graphs, key-value pairs, and JSON documents, and are widely recognized for ease of development, scalable performance, high availability, and resilience. Amazon DynamoDB is a fast and flexible NoSQL database service for applications that need consistent, single-digit millisecond latency at any scale. It is a fully managed cloud database and supports both document and key-value store models.
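
To make the flexible data model concrete, a minimal boto3 sketch (table and attribute names are hypothetical) that creates an on-demand key-value table and stores documents with differing attributes:

```python
import boto3

dynamodb = boto3.resource("dynamodb")

# Create a key-value/document table billed per request.
table = dynamodb.create_table(
    TableName="products",
    KeySchema=[{"AttributeName": "sku", "KeyType": "HASH"}],
    AttributeDefinitions=[{"AttributeName": "sku", "AttributeType": "S"}],
    BillingMode="PAY_PER_REQUEST",
)
table.wait_until_exists()

# Items do not need to share a rigid schema; each can carry different attributes.
table.put_item(Item={"sku": "A-100", "name": "Widget", "tags": ["sale", "new"]})
table.put_item(Item={"sku": "B-200", "name": "Gadget", "dimensions": {"w": 5, "h": 3}})
```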

Implement Distributed Processing

Offline batch jobs can be horizontally scaled by using distributed data processing engines such as AWS Batch, AWS Glue, and Apache Hadoop. On AWS, you can use Amazon EMR to run Hadoop workloads on top of a fleet of EC2 instances without the operational complexity. For real-time processing of streaming data, Amazon Kinesis partitions data in multiple shards that can then be consumed by multiple Amazon EC2 or AWS Lambda resources to achieve scalability.
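
On the producing side of a Kinesis-based pipeline, records are spread across shards by partition key so that multiple consumers can process them in parallel; a minimal boto3 sketch with a hypothetical stream name:

```python
import json
import boto3

kinesis = boto3.client("kinesis")

# Each record is routed to a shard based on its partition key.
for event in [{"user": "u1", "page": "/home"}, {"user": "u2", "page": "/cart"}]:
    kinesis.put_record(
        StreamName="clickstream",
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=event["user"],
    )
```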

Key AWS Services for Operational Excellence Best Practice Areas

1) Prepare: AWS Config and AWS Config rules can be used to create standards for workloads and to determine if environments are compliant with those standards before being put into production 2) Operate: Amazon CloudWatch allows you to monitor the operational health of a workload 3) Evolve: Amazon Elasticsearch Service (Amazon ES) allows you to analyze your log data to gain actionable insights quickly and securely

Versioning

Preserves, retrieves, and restores any version of an object stored in Amazon S3
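
A minimal boto3 sketch of turning versioning on and retrieving an older version of an object; the bucket and key names are hypothetical:

```python
import boto3

s3 = boto3.client("s3")
bucket = "my-versioned-bucket"  # hypothetical bucket

# Enable versioning; overwrites and deletes now preserve prior versions.
s3.put_bucket_versioning(Bucket=bucket, VersioningConfiguration={"Status": "Enabled"})

# List stored versions of an object and fetch a specific (older) one.
versions = s3.list_object_versions(Bucket=bucket, Prefix="report.csv").get("Versions", [])
for v in versions:
    print(v["VersionId"], v["LastModified"], v["IsLatest"])

if versions:
    oldest = s3.get_object(Bucket=bucket, Key="report.csv", VersionId=versions[-1]["VersionId"])
```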

Quorum-based replication

Quorum-based replication combines synchronous and asynchronous replication to overcome the challenges of large-scale distributed database systems. You must ascertain how much data you expect to lose and how quickly you need to resume operations. For example, the Redis engine for Amazon ElastiCache supports replication with automatic failover, but the Redis engine's replication is asynchronous. During a failover, it is highly likely that some recent transactions will be lost. However, Amazon RDS, with its Multi-AZ feature, is designed to provide synchronous replication to keep data on the standby node up-to-date with the primary.

Relational Databases

Relational databases (also known as RDBMS or SQL databases) normalize data into well-defined tabular structures known as tables, which consist of rows and columns. They provide a powerful query language, flexible indexing capabilities, strong integrity controls, and the ability to combine data from multiple tables in a fast and efficient manner. Amazon RDS makes it easy to set up, operate, and scale a relational database in the cloud with support for many familiar database engines. For any production relational database, we recommend using the Amazon RDS Multi-AZ deployment feature, which creates a synchronously replicated standby instance in a different Availability Zone.

Reservations

Reservations provide you with the ability to receive a greater discount, up to 75 percent, by paying for capacity ahead of time

Scaling Horizontally

Scaling horizontally takes place through an increase in the number of resources, such as adding more hard drives to a storage array or adding more servers to support an application. This is a great way to build internet-scale applications that leverage the elasticity of cloud computing. Not all architectures are designed to distribute their workload to multiple resources.

Scaling Vertically

Scaling vertically takes place through an increase in the specifications of an individual resource, such as upgrading a server with a larger hard drive or a faster CPU. With Amazon EC2, you can stop an instance and resize it to an instance type that has more RAM, CPU, I/O, or networking capabilities. This way of scaling can eventually reach a limit, and it is not always a cost-efficient or highly available approach. However, it is very easy to implement and can be sufficient for many use cases especially in the short term.

Spot Instances

Spot Instances are an Amazon EC2 pricing mechanism that lets you purchase spare computing capacity with no upfront commitment at discounted hourly rates.

Synchronous replication

Synchronous replication only acknowledges a transaction after it has been durably stored in both the primary location and its replicas. It is ideal for protecting the integrity of data from the event of a failure of the primary node. Synchronous replication can also scale read capacity for queries that require the most up-to-date data (strong consistency). The drawback of synchronous replication is that the primary node is coupled to the replicas. A transaction can't be acknowledged before all replicas have performed the write. This can compromise performance and availability, especially in topologies that run across unreliable or high-latency network connections. For the same reason, it is not recommended to maintain many synchronous replicas.

Key AWS Services for Reliability best practice areas

The AWS service that is essential to Reliability is Amazon CloudWatch, which monitors runtime metrics. 1) Foundations: AWS IAM, Amazon VPC (lets you provision a private, isolated section of the AWS Cloud where you can launch AWS resources in a virtual network), AWS Trusted Advisor (provides visibility into service limits), AWS Shield (a managed Distributed Denial of Service (DDoS) protection service that safeguards web applications on AWS) 2) Change Management: AWS CloudTrail, AWS Config, AWS Auto Scaling, Amazon CloudWatch 3) Failure Management: AWS CloudFormation (provides templates for the creation of AWS resources and provisions them in an orderly and predictable fashion), Amazon S3 (highly durable service to keep backups), Amazon Glacier (highly durable archives), AWS KMS (provides a reliable key management system)

Stateless Applications

When users or services interact with an application they will often perform a series of interactions that form a session. A session is unique data for users that persists between requests while they use the application. A stateless application is an application that does not need knowledge of previous interactions and does not store session information. For example, an application that, given the same input, provides the same response to any end user, is a stateless application. Stateless applications can scale horizontally because any of the available compute resources (such as EC2 instances and AWS Lambda functions) can service any request. Without stored session data, you can simply add more compute resources as needed. When that capacity is no longer required, you can safely terminate those individual resources, after running tasks have been drained. Those resources do not need to be aware of the presence of their peers—all that is required is a way to distribute the workload to them.

Bootstrapping

When you launch an AWS resource such as an EC2 instance or Amazon Relational Database Service (Amazon RDS) DB instance, you start with a default configuration. You can then execute automated bootstrapping actions, which are scripts that install software or copy data to bring that resource to a particular state. You can parameterize configuration details that vary between different environments (such as production or test) so that you can reuse the same scripts without modifications.
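
A minimal boto3 sketch of bootstrapping through EC2 user data; the AMI ID and install steps are hypothetical placeholders for whatever brings the instance to its desired state:

```python
import boto3

ec2 = boto3.client("ec2")

# This script runs on first boot and configures the instance from its default state.
user_data = """#!/bin/bash
yum install -y httpd
systemctl enable --now httpd
echo "env=production" > /etc/myapp.conf
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical base AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,
)
```

The same script can be parameterized (for example, swapping "production" for "test") so one bootstrap procedure serves every environment.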

Push model to distribute the workload to multiple nodes in your environment

With a push model, you can use Elastic Load Balancing (ELB) to distribute a workload. ELB routes incoming application requests across multiple EC2 instances. When routing traffic, a Network Load Balancer operates at Layer 4 of the Open Systems Interconnection (OSI) model to handle millions of requests per second. With the adoption of container-based services, you can also use an Application Load Balancer. An Application Load Balancer operates at Layer 7 of the OSI model and supports content-based routing of requests based on application traffic. Alternatively, you can use Amazon Route 53 to implement a DNS round robin. In this case, DNS responses return an IP address from a list of valid hosts in a round-robin fashion. While easy to implement, this approach does not always work well with the elasticity of cloud computing. This is because even if you can set low time to live (TTL) values for your DNS records, caching DNS resolvers are outside the control of Amazon Route 53 and might not always respect your settings.

Dedicated instances

You can provision your Amazon EC2 resources as Dedicated Instances. Dedicated Instances are Amazon EC2 instances that run in a virtual private cloud (VPC) on hardware dedicated to a single customer, providing additional isolation.

