Original ASA


Dedicated Instances (Types of EC2 Instances)

Pay by the hour for instances that run on single-tenant hardware. Dedicated Instances that belong to different AWS accounts are physically isolated at the hardware level. Only your compute nodes run on single-tenant hardware; EBS volumes do not.

Binpack (ECS Task placement strategy types)

Place tasks based on the least available amount of CPU or memory. This minimizes the number of instances in use and allows you to be cost-efficient. For example, suppose you have running tasks in c5.2xlarge instances that are known to be CPU intensive but consume little memory. By binpacking on memory, you can fill your instances' unused memory by launching additional tasks in them instead of spawning a new instance.
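The binpack decision above can be sketched in a few lines of Python (a simplified model, not the actual ECS scheduler; the instance IDs and memory figures are made up):

```python
def binpack_memory(instances, task_mem):
    # Keep only instances with enough free memory to host the task
    candidates = [i for i in instances if i["free_mem"] >= task_mem]
    if not candidates:
        return None  # nothing fits: ECS would need a new instance here
    # Binpack: pick the instance with the LEAST remaining memory that still
    # fits, so existing instances fill up before new ones are touched
    chosen = min(candidates, key=lambda i: i["free_mem"])
    chosen["free_mem"] -= task_mem
    return chosen["id"]

fleet = [{"id": "i-a", "free_mem": 4096}, {"id": "i-b", "free_mem": 1024}]
print(binpack_memory(fleet, 512))  # i-b: the fuller instance is filled first
```

Note how the fuller instance (i-b) is chosen even though i-a has far more headroom; that is exactly what keeps the instance count down.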

Spread (ECS Task placement strategy types)

Place tasks evenly based on the specified value. Accepted values are attribute key-value pairs, instanceId, or host. Spread is typically used to achieve high availability by making sure that multiple copies of a task are scheduled across multiple instances. Spread across Availability Zones is the default placement strategy used for services.
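A minimal sketch of spreading by Availability Zone (an illustration only, not the real ECS scheduler; the AZ names are placeholders):

```python
from collections import Counter

def spread_az(running_tasks, azs):
    # Choose the AZ currently hosting the fewest tasks (ties broken by order)
    counts = Counter({az: 0 for az in azs})
    counts.update(t["az"] for t in running_tasks)
    return min(azs, key=lambda az: counts[az])

tasks = [{"az": "us-east-1a"}, {"az": "us-east-1a"}, {"az": "us-east-1b"}]
print(spread_az(tasks, ["us-east-1a", "us-east-1b", "us-east-1c"]))  # us-east-1c
```

The same counting idea applies when spreading on an attribute or instanceId instead of an AZ.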

Random (ECS Task placement strategy types)

Place tasks randomly. You use this strategy when task placement or termination does not matter.

Convertible RIs (Reserved Instance) (Types of EC2 Instances)

Provide a discount and the capability to change the attributes of the RI as long as the resulting RI is of equal or greater value.

Steps to Creating a Step Scaling Policy for an Auto Scaling Group

1. First, create your Launch Configuration for your EC2 instances.
2. Go to EC2 > Auto Scaling Groups > Create Auto Scaling group.
3. Select your Launch Configuration and click Next Step.
4. Configure details for your Auto Scaling group, such as the subnets in the VPC where the EC2 instances will be placed. It's recommended to select subnets in multiple Availability Zones to improve the fault tolerance of your ASG. Under Advanced Details, you can check the Load Balancing option to select which load balancer to use for your ASG.
5. Select the "Use scaling policies to adjust the capacity of this group" option; this will show an additional section for defining the scaling policy. For this example, let's set 5 and 15 as the minimum and maximum size for this Auto Scaling group.
6. In the Scale Group Size section, you will be able to set the scaling policy for the group. But this is only for simple scaling, so you have to click the "Scale the Auto Scaling group using step or simple scaling policies" link to show the more advanced options for step scaling.
7. You should see the Increase Group Size and Decrease Group Size sections after clicking it.
8. Now, we can set the step scaling policy for scaling out. Set a name for your "Increase Group Size" policy. Click "Add a new alarm" to add a CloudWatch rule on when to execute the policy.
9. Next, we can set the step scaling policy for scaling in. Set a name for your "Decrease Group Size" policy. Click "Add a new alarm" to add a CloudWatch rule on when to execute the policy. On the Create Alarm box, you can set an SNS notification. Create a rule for whenever the Average CPU Utilization is less than or equal to 40 percent for at least 1 consecutive period of 5 minutes. Set a name for your alarm and click Create Alarm. For the "Take the action" setting, we'll remove 10 percent of the group when CPU Utilization is less than or equal to 40 and greater than 30. Click "Add Step" to add another action: we'll remove 30 percent of the group when CPU Utilization is less than or equal to 30 percent.
10. Click Next: Configure Notifications to proceed. On this part, you can click "Add notification" so that you can receive an email whenever a specific event occurs.
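The "Decrease Group Size" step adjustments described above can be modeled as a small function (a sketch of the policy's arithmetic only, not an AWS API):

```python
def scale_in_adjustment(cpu_percent, group_size):
    # Step 1: 30 < CPU <= 40  -> remove 10% of the group
    # Step 2:      CPU <= 30  -> remove 30% of the group
    if cpu_percent <= 30:
        pct = 30
    elif cpu_percent <= 40:
        pct = 10
    else:
        return 0  # alarm not breached, no scale-in
    # Remove at least one instance once a step is breached
    return -max(1, group_size * pct // 100)

print(scale_in_adjustment(35, 10))  # -1 (remove 10% of 10 instances)
print(scale_in_adjustment(25, 10))  # -3 (remove 30% of 10 instances)
```

The deeper the alarm breach, the larger the step taken, which is the whole point of step scaling over simple scaling.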

AWS Well-Architected Framework

1. Operational Excellence
2. Security
3. Reliability
4. Performance Efficiency
5. Cost Optimization

ECS task placement constraints

A rule that is considered during task placement. You can use constraints to place tasks based on Availability Zone or instance type.
○ You can also associate attributes, which are name/value pairs, with your container instances and then use a constraint to place tasks based on attribute.

ECS Task Placement Strategies

A task placement strategy is an algorithm for selecting instances for task placement or tasks for termination. When a task that uses the EC2 launch type is launched, Amazon ECS must determine where to place the task based on the requirements specified in the task definition, such as CPU and memory. Similarly, when you scale down the task count, Amazon ECS must determine which tasks to terminate. You can combine different strategy types to suit your application needs.
● Task placement strategies are a best effort.
● By default, Fargate tasks are spread across Availability Zones.

Infrastructure as Code (Best Practices when Architecting in the Cloud- Disposable Resources Instead of Fixed Servers)

AWS assets are programmable. You can apply techniques, practices, and tools from software development to make your whole infrastructure reusable, maintainable, extensible, and testable.

Infrastructure Management and Deployment (Best Practices when Architecting in the Cloud- Use Automation)

AWS automatically handles details, such as resource provisioning, load balancing, auto scaling, and monitoring, so you can focus on resource deployment.

Alarms and Events (Best Practices when Architecting in the Cloud- Use Automation)

AWS services will continuously monitor your resources and initiate events when certain metrics or conditions are met.

Amazon EC2 Auto Scaling has two primary process types. It will either Launch or Terminate an EC2 instance. Other process types are related to specific scaling features:

● AddToLoadBalancer — Adds instances to the attached load balancer or target group when they are launched.
● AlarmNotification — Notifications from CloudWatch alarms that are associated with the group's scaling policies.
● AZRebalance — Balances the number of EC2 instances in the group evenly across all of the specified Availability Zones when the group becomes unbalanced.
● HealthCheck — Monitors the health of the instances and marks an instance as unhealthy if Amazon EC2 or Elastic Load Balancing tells Amazon EC2 Auto Scaling that the instance is unhealthy.
● ReplaceUnhealthy — Terminates instances that are marked as unhealthy and then launches new instances to replace them.
● ScheduledActions — Performs scheduled scaling actions that you create or that are created by predictive scaling.

Capacity Reservations (Types of EC2 Instances)

Allows you to reserve capacity for your EC2 instances in a specific Availability Zone for any duration. No commitment required.

If a service involves servers or EC2 instances, then it should also support security groups. Examples of these services are:

Amazon EC2, AWS Elastic Beanstalk, Elastic Load Balancing, Amazon RDS, Amazon EFS, Amazon EMR, Amazon Redshift, and Amazon ElastiCache

Amazon Elastic Kubernetes Service (EKS)

Amazon EKS lets you easily run and scale Kubernetes applications in the AWS cloud or on-premises. Kubernetes is not an AWS native service. Kubernetes is an open-source container-orchestration tool used for deployment and management of containerized applications. Amazon EKS just builds additional features on top of this platform so you can run Kubernetes in AWS much easier. If you have containerized applications running on-premises that you would like to move into AWS, but you wish to keep your applications as cloud agnostic as possible then EKS is a great choice for your workload. All the Kubernetes-supported tools and plugins you use on-premises will also work in EKS. You do not need to make any code changes when replatforming your applications.

AMI or Amazon Machine Image.

An AMI contains the OS, settings, and other applications that you will use in your server. AWS has many pre-built AMIs for you to choose from, and there are also custom AMIs created by other users which are sold on the AWS Marketplace for you to use. If you have created your own AMI before, it will also be available for you to select. AMIs cannot be modified after launch.

EC2 Auto Scaling Lifecycle Hooks

As your Auto Scaling group scales out or scales in your EC2 instances, you may want to perform custom actions before they start accepting traffic or before they get terminated. Auto Scaling Lifecycle Hooks allow you to perform custom actions during these stages.

Amazon Elastic Container Service supports four networking modes:

Bridge, Host, awsvpc, and None

Storage Optimized (Types of EC2 Instances)

Designed for workloads that require high, sequential read and write access to very large data sets on local storage. They are optimized to deliver tens of thousands of low-latency, random I/O operations per second (IOPS) to applications.

The differences between a Standard and a Convertible Reserved Instance are that:

● A Convertible RI can change its instance family, operating system, tenancy, and payment option (a Standard RI cannot).
● A Standard RI can be bought/sold in the Reserved Instance Marketplace (a Convertible RI cannot).

First things to do with Amazon Elastic Container Service (ECS)

Create an ECS cluster, then create a task definition. A task definition is like a spec sheet for the Docker containers that will be running in your ECS instances or tasks. The following are the parameters that are defined in a task definition:
● The Docker image to use with each container in your task
● CPU and memory allocation for each task or each container within a task
● The launch type to use (EC2 or Fargate)
● The Docker networking mode to use for the containers in your task (bridge, host, awsvpc, or none)
● The logging configuration to use for your tasks
● Whether the task should continue to run if the container finishes or fails
● The command the container executes when it is started
● Volumes that should be mounted on the containers in a task
● The Task Execution IAM role that provides your tasks permissions to pull Docker images and publish container logs
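For illustration, here is what a minimal EC2-launch-type task definition covering these parameters might look like, shaped like the JSON document ECS expects but written as a Python dict. The family name, image, account ID, and log group are placeholder values:

```python
# Hypothetical task definition for a single nginx container.
# hostPort 0 requests dynamic host port mapping in bridge mode.
task_definition = {
    "family": "web-app",
    "requiresCompatibilities": ["EC2"],        # launch type
    "networkMode": "bridge",                   # bridge | host | awsvpc | none
    "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    "containerDefinitions": [
        {
            "name": "web",
            "image": "nginx:latest",           # Docker image
            "cpu": 256,                        # CPU units for this container
            "memory": 512,                     # memory (MiB) for this container
            "essential": True,                 # task stops if this container fails
            "portMappings": [{"containerPort": 80, "hostPort": 0}],
            "command": ["nginx", "-g", "daemon off;"],
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "/ecs/web-app",
                    "awslogs-region": "us-east-1",
                    "awslogs-stream-prefix": "web",
                },
            },
        }
    ],
}
print(task_definition["networkMode"])
```

In practice you would register this document with ECS (for example via the console's JSON editor) rather than build it in code.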

Memory Optimized (Types of EC2 Instances)

Designed to deliver fast performance for workloads that process large data sets in memory.

What is supported on Dedicated Hosts that is NOT supported on Dedicated Instances:

Dedicated Hosts:
● Provide visibility of the number of sockets and physical cores
● Allow you to consistently deploy your instances to the same physical server over time
● Provide additional visibility and control over how instances are placed on a physical server
● Support Bring Your Own License (BYOL)
(none of which Dedicated Instances have)

Amazon Elastic Container Service (ECS) has two launch types for operation:

EC2 and Fargate. The EC2 launch type provides EC2 instances as hosts for your Docker containers. For the Fargate launch type, AWS manages the underlying hosts so you can focus on managing your containers instead. The details and configuration on how you want to run your containers are defined in the ECS Task Definition, which includes options such as the networking mode.

What are the Different Types of EC2 Health Checks

EC2 instance health check, Elastic Load Balancer (ELB) health check, and Auto Scaling and Custom health checks

steps for creating an auto scaling group

First, select the launch configuration/template you'd like to use. Next, define the VPC and subnets in which the Auto Scaling group will launch your instances. You can use multiple Availability Zones and let EC2 Auto Scaling balance your instances across the zones.

You can optionally associate a load balancer with the Auto Scaling group, and the service will handle attaching and detaching instances from the load balancer as it scales. Note that when you do associate a load balancer, you should use the load balancer's health check for instance health monitoring, so that when an instance fails the load balancer's health check, the Auto Scaling group treats it as unhealthy and replaces it with a fresh instance.

Next, define the size of the Auto Scaling group — the minimum, desired, and maximum number of instances that your Auto Scaling group should manage. Specifying a minimum size ensures that the number of running instances does not fall below this count at any time, and the maximum size prevents your Auto Scaling group from exploding in number. The desired size simply tells the Auto Scaling group to launch this number of instances after you create it.

Since the purpose of an Auto Scaling group is to auto scale, you can add CloudWatch monitoring rules that will trigger scaling events once a scaling metric passes a certain threshold. Lastly, you can optionally configure Amazon SNS notifications whenever a scaling event occurs, and add tags to your Auto Scaling group.
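The way the minimum and maximum size bound every scaling decision can be shown with a tiny sketch (illustrative arithmetic only, not an AWS API):

```python
def apply_scaling(current, adjustment, minimum, maximum):
    # Whatever a scaling policy asks for, the resulting desired capacity
    # is always clamped to the group's min/max bounds
    return max(minimum, min(current + adjustment, maximum))

print(apply_scaling(14, +4, 5, 15))  # 15: capped at the maximum size
print(apply_scaling(6, -3, 5, 15))   # 5: floored at the minimum size
```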

Host network mode - (One of the four ECS Network Modes)

Host network mode bypasses Docker's built-in virtual network and maps container ports directly to your EC2 instance's network interface. This mode shares the network namespace of the host EC2 instance, so your containers share the same IP as your host IP address. This also means that you can't have multiple containers on the host using the same port; a port used by one container on the host cannot be used by another container, as this will cause a conflict. This mode offers faster performance than the bridge network mode since it uses the EC2 network stack instead of the virtual Docker network.

Compute Optimized (Types of EC2 Instances)

Ideal for compute bound applications that benefit from high performance processors. Instances belonging to this family are well suited for batch processing workloads, media transcoding, high performance web servers, high performance computing, scientific modeling, dedicated gaming servers and ad server engines, machine learning inference and other compute intensive applications.

Simple Scaling (Types of EC2 Auto Scaling Policies)

Simple scaling relies on a metric as a basis for scaling. For example, you can set a CloudWatch alarm to have a CPU Utilization threshold of 80%, and then set the scaling policy to add 20% more capacity to your Auto Scaling group by launching new instances. Accordingly, you can also set a CloudWatch alarm to have a CPU utilization threshold of 30%. When the threshold is met, the Auto Scaling group will remove 20% of its capacity by terminating EC2 instances. When EC2 Auto Scaling was first introduced, this was the only scaling policy supported. It does not provide any fine-grained control to scaling in and scaling out.
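A rough model of this simple scaling example in Python (the thresholds and the 20% step mirror the example above; this is not an AWS API, and real policies also apply cooldowns):

```python
def simple_scaling(cpu_percent, capacity):
    # Add 20% capacity above 80% CPU, remove 20% below 30% CPU, else hold
    step = max(1, round(capacity * 0.20))
    if cpu_percent >= 80:
        return capacity + step
    if cpu_percent <= 30:
        return capacity - step
    return capacity

print(simple_scaling(85, 10))  # 12: scale out by 20%
print(simple_scaling(25, 10))  # 8: scale in by 20%
```

Note the single fixed step regardless of how far past the threshold the metric is — the lack of fine-grained control mentioned above.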

Step Scaling (Types of EC2 Auto Scaling Policies)

Step Scaling further improves the features of simple scaling. Step scaling applies "step adjustments" which means you can set multiple actions to vary the scaling depending on the size of the alarm breach. When a scaling event happens on simple scaling, the policy must wait for the health checks to complete and the cooldown to expire before responding to an additional alarm. This causes a delay in increasing capacity especially when there is a sudden surge of traffic on your application. With step scaling, the policy can continue to respond to additional alarms even in the middle of the scaling event.

Target Tracking (Types of EC2 Auto Scaling Policies)

Target tracking policy lets you specify a scaling metric and metric value that your auto scaling group should maintain at all times. A limitation though - this type of policy assumes that it should scale out your Auto Scaling group when the specified metric is above the target value. You cannot use a target tracking scaling policy to scale out your Auto Scaling group when the specified metric is below the target value. Furthermore, the Auto Scaling group scales out proportionally to the metric as fast as it can, but scales in more gradually. Lastly, you can use AWS predefined metrics for your target tracking policy, or you can use other available CloudWatch metrics (native and custom).
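The proportional scale-out behavior can be approximated like this (a simplification of what target tracking computes; real policies also apply instance warm-up and scale in far more gradually):

```python
import math

def target_tracking_scale_out(current_capacity, metric_value, target_value):
    # Scale out proportionally when the metric is above target.
    # This sketch leaves scale-in alone, since AWS handles it conservatively.
    if metric_value <= target_value:
        return current_capacity
    return math.ceil(current_capacity * metric_value / target_value)

print(target_tracking_scale_out(10, 75, 50))  # 15: capacity grows with metric/target
```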

An EKS cluster consists of two components:

● The Amazon EKS control plane
● The Amazon EKS nodes that are registered with the control plane

The Amazon EKS control plane

The Amazon EKS control plane consists of control plane nodes that run the Kubernetes software, such as etcd and the Kubernetes API server. The control plane runs in an account managed by AWS, and the Kubernetes API is exposed via the cluster's EKS endpoint.

Nitro-based (Types of EC2 Instances)

The Nitro System provides bare metal capabilities that eliminate virtualization overhead and support workloads that require full access to host hardware. When you mount EBS Provisioned IOPS volumes on Nitro-based instances, you can provision from 100 IOPS up to 64,000 IOPS per volume compared to just up to 32,000 on other instances.

AWS Well-Architected Framework - Reliability

The ability of a workload to perform its intended function correctly and consistently when it's expected to. This includes the ability to operate and test the workload through its total lifecycle.
○ Design Principles
■ Automatically recover from failure
■ Test recovery procedures
■ Scale horizontally to increase aggregate workload availability
■ Stop guessing capacity
■ Manage change in automation

AWS Well-Architected Framework - Security

The ability to protect data, systems, and assets to take advantage of cloud technologies to improve your security.
○ Design Principles
■ Implement a strong identity foundation
■ Enable traceability
■ Apply security at all layers
■ Automate security best practices
■ Protect data in transit and at rest
■ Keep people away from data
■ Prepare for security events

AWS Well-Architected Framework - Cost Optimization

The ability to run systems to deliver business value at the lowest price point.
○ Design Principles
■ Implement Cloud Financial Management
■ Adopt a consumption model
■ Measure overall efficiency
■ Stop spending money on undifferentiated heavy lifting
■ Analyze and attribute expenditure

awsvpc mode - (One of the four ECS Network Modes)

The awsvpc mode provides an elastic network interface for each task definition. If you have one container per task definition, each container will have its own elastic network interface and will get its own IP address from your VPC subnet IP address pool. This offers faster performance than the bridge network since it uses the EC2 network stack, too. This essentially makes each task act like its own EC2 instance within the VPC with its own ENI, even though the tasks actually reside on an EC2 host. The awsvpc mode is recommended if your cluster will contain several tasks and containers, as each can communicate over its own network interface. This is the only mode supported by the ECS Fargate service. Since you don't manage any EC2 hosts on ECS Fargate, you can only use the awsvpc network mode, so that each task gets its own network interface and IP address.

Scheduled RIs (Reserved Instance) (Types of EC2 Instances)

These are available to launch within the time windows you reserve. This option allows you to match your capacity reservation to a predictable recurring schedule that only requires a fraction of a day, a week, or a month.

IaaS - "infrastructure-as-a-service"

These cloud computing services are the counterpart of purchasing your own hardware on-premises, minus the purchasing part. You rent them from the cloud provider and use them as if they were your own compute and storage devices.

PaaS- "platform-as-a-service"

These services are a bit similar with IaaS, but offer more utility and convenience for the customer. One example is a web hosting service, where you won't need to worry about the underlying hardware your website is running on, so you can focus on your website deployment and management instead.

SaaS - "software-as-a-service"

These services totally remove the infrastructure part from the equation. You use these services according to the features and utility they offer to you. A good example is email.

None network mode (One of the four ECS Network Modes)

This mode completely disables the networking stack inside the ECS task. The loopback network interface is the only one present inside each container since the loopback interface is essential for Linux operations. You can't specify port mappings on this mode as the containers do not have external connectivity. You can use this mode if you don't want your containers to access the host network, or if you want to use a custom network driver other than the built-in driver from Docker. You can only access the container from inside the EC2 host with the Docker command.

Receive Notification using Amazon SNS

To receive lifecycle hook notifications with Amazon SNS, you can use the AWS CLI to add a lifecycle hook. The key point here is that you need an SNS topic and an IAM role to allow publishing to that topic.

Aurora vs RDS

Type of database: Both are relational database services.
Compatibility: Aurora is MySQL and PostgreSQL compatible. RDS supports five database engines: MySQL, PostgreSQL, MariaDB, Oracle, and SQL Server.
Maximum storage: Aurora: 128 TB. RDS: 64 TB for the MySQL, MariaDB, Oracle, and PostgreSQL engines; 16 TB for the SQL Server engine.
Aurora use cases:
• Enterprise applications - a great option for any enterprise application that uses a relational database, since it handles provisioning, patching, backup, recovery, failure detection, and repair.
• SaaS applications - you can concentrate on building high-quality applications without worrying about the underlying database that powers the application.
• Web and mobile gaming - games need a database with high throughput and storage scalability, and must be highly available. Aurora suits the variable usage pattern of these apps perfectly.

Accelerated Computing (Types of EC2 Instances)

Uses hardware accelerators or co-processors to perform functions such as floating point number calculations, graphics processing, or data pattern matching more efficiently than on CPUs.

Configuring Notifications for Lifecycle Hooks

When a lifecycle hook occurs on an Auto Scaling group, it sends event logs to Amazon CloudWatch Events (Amazon EventBridge), which in turn can be used to set up a rule and target to invoke a Lambda function.

Quorum-based replication (for Durable Data Storage)

combines synchronous and asynchronous replication by defining a minimum number of nodes that must participate in a successful write operation.
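A minimal sketch of a quorum-based write decision (illustrative only; real systems also handle read quorums and conflict resolution):

```python
def quorum_write(ack_results, write_quorum):
    # The write succeeds only if at least `write_quorum` nodes
    # acknowledged durably storing it
    acks = sum(1 for ok in ack_results if ok)
    return acks >= write_quorum

# 3-node cluster with a write quorum of 2: one slow or failed
# replica is tolerated without blocking the write
print(quorum_write([True, True, False], 2))   # True
print(quorum_write([True, False, False], 2))  # False
```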

Asynchronous replication (for Durable Data Storage)

decouples the primary node from its replicas at the expense of introducing replication lag. This means that changes on the primary node are not immediately reflected on its replicas.

By default, ECS uses the following placement strategies:

○ When you run tasks with the RunTask API action, tasks are placed randomly in a cluster.
○ When you launch and terminate tasks with the CreateService API action, the service scheduler spreads the tasks across the Availability Zones (and the instances within the zones) in a cluster.

Bridge network mode - Default (One of the four ECS Network Modes)

When you select the <default> network mode, you are selecting the Bridge network mode. This is the default mode for Linux containers. For Windows Docker containers, the <default> network mode is NAT. Bridge network mode utilizes Docker's built-in virtual network which runs inside each container. A bridge network is an internal network namespace in the host that allows all containers connected on the same bridge network to communicate. It provides isolation from other containers not connected to that bridge network. The Docker driver handles this isolation on the host machine so that containers on different bridge networks cannot communicate with each other. This mode can take advantage of dynamic host port mappings as it allows you to run the same port (ex: port 80) on each container, and then map each container port to a different port on the host. However, this mode does not provide the best networking performance because the bridge network is virtualized and Docker software handles the traffic translations on traffic going in and out of the host.

Dedicated Hosts (Types of EC2 Instances)

You pay for a physical host that is fully dedicated to running your instances, and bring your existing per-socket, per-core, or per-VM software licenses to reduce costs. Support for multiple instance sizes on the same Dedicated Host is available for the following instance families: c5, m5, r5, c5n, r5n, and m5n. Dedicated Hosts also offers options for upfront payment for higher discounts.

Example of when to use a lifecycle hook on your auto scaling groups

during the scale-out event of your ASG, you want to make sure that new EC2 instances download the latest code base from the repository and that your EC2 user data has completed before it starts accepting traffic. This way, the new instances will be fully ready and will quickly pass the load balancer health check when they are added as targets. Another example is this - during the scale-in event of your ASG, suppose your instances upload data logs to S3 every minute. You may want to pause the instance termination for a certain amount of time to allow the EC2 to upload all data logs before it gets completely terminated.

Every VPC comes with a default what?

a default network ACL, which allows all inbound and outbound traffic. You can create your own custom network ACL and associate it with a subnet. By default, each custom network ACL denies all inbound and outbound traffic until you add rules. Note that every subnet must be associated with a network ACL. If you don't explicitly associate a subnet with a network ACL, the subnet is automatically associated with the default network ACL. A network ACL can be associated with multiple subnets. However, a subnet can be associated with only one network ACL at a time.

What is a launch configuration?

a launch configuration defines how a single EC2 instance is configured, so that the configuration can be replicated whenever the Auto Scaling group launches new instances.

2 parts to an EC2 Auto Scaling Group

a launch configuration or template that will define your auto scaling instances, and the auto scaling service that performs scaling and monitoring actions.

Elastic Load Balancer (ELB) health check

a load balancer periodically tests your EC2 instances: if an instance is healthy (the HTTP health check succeeds with a 200 response code), it is marked InService; if unhealthy, it is marked OutOfService. When configuring a health check, you need to provide a specific port, the protocol to use, and a ping path.
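The InService/OutOfService classification can be sketched as follows (a simplified model assuming an HTTP health check; the threshold parameters are hypothetical defaults for illustration):

```python
def health_state(recent_codes, healthy_threshold=2, unhealthy_threshold=2):
    # `healthy_threshold` consecutive 200s  -> InService
    # `unhealthy_threshold` consecutive non-200s -> OutOfService
    tail = recent_codes[-healthy_threshold:]
    if len(tail) == healthy_threshold and all(c == 200 for c in tail):
        return "InService"
    tail = recent_codes[-unhealthy_threshold:]
    if len(tail) == unhealthy_threshold and all(c != 200 for c in tail):
        return "OutOfService"
    return "Unknown"  # not enough consecutive results yet

print(health_state([500, 200, 200]))  # InService
print(health_state([200, 503, 503]))  # OutOfService
```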

Warm Standby

a scaled-down version of a fully functional environment that is always running. For example, you have a subset of undersized servers and databases that have the same exact configuration as your primary, and are constantly updated also. Once disaster strikes, you only have to make minimal reconfigurations to re-establish the environment back to its primary state. Warm standby is costlier than Pilot Light, but you have better RTO and RPO.

A Reserved Instance has four instance attributes that determine its price:

a) Instance type
b) Region
c) Tenancy - shared (default) or single-tenant (dedicated) hardware
d) Platform or OS

how to scale horizontally (EC2)

adding more servers to the system. More servers mean that workload is distributed to a greater number of workers, which thereby reduces the burden on each server. When you scale horizontally, you need a service such as EC2 auto scaling to manage the number of servers running at a time. You also need an Elastic Load Balancer to intercept and distribute the total incoming requests to your fleet of auto scaling servers. Horizontal scaling is a great way for stateless servers, such as public web servers, to meet varying levels of workloads.

Auto Scaling and Custom health checks

all instances are assumed to be healthy unless EC2 Auto Scaling receives notification that they're unhealthy.

Data Lake

an architectural approach that allows you to store massive amounts of data in a central location so that it's readily available to be categorized, processed, analyzed, and consumed by diverse groups within your organization.

Scaling Horizontally

an increase in the number of resources. When scaling horizontally, you want your resources to be stateless and receive a well-distributed load of work.

Scaling Vertically

an increase in the specifications of an individual resource, such as to a higher instance type for EC2 instances.

Service Discovery (Best Practices when Architecting in the Cloud- Implement Loose Coupling)

applications that are deployed as micro-services should be discoverable and usable without prior knowledge of their network topology details. Apart from hiding complexity, this also allows infrastructure details to change at any time.

Data Warehouses

are a specialized type of relational database, which is optimized for analysis and reporting of large amounts of data.

Limitations to Remember for Amazon EC2 Auto Scaling Group

auto scaling groups are regional services and do not span multiple AWS Regions. You can configure them to span multiple Availability Zones, however, if you need to use multiple Regions for scaling horizontally, you will need to implement a different solution to achieve this result. The same goes for launch configurations and launch templates you create. They only exist within the Region you created them in. If you need to copy over your launch configurations and templates to another Region, simply recreate them in the desired target Region. Another thing to remember is when you've configured your EC2 Auto Scaling Group to spread your instances across multiple Availability Zones, you cannot use cluster placement groups in conjunction with this setup, since cluster placement groups cannot span multiple Availability Zones.

Instantiating Compute Resources (Best Practices when Architecting in the Cloud- Disposable Resources Instead of Fixed Servers)

automate setting up of new resources along with their configuration and code through methods such as bootstrapping, Docker images or golden AMIs.

Serverless Management and Deployment (Best Practices when Architecting in the Cloud- Use Automation)

being serverless shifts your focus to automation of your code deployment. AWS handles the management tasks for you.

Distributed Systems Best Practices (Best Practices when Architecting in the Cloud- Implement Loose Coupling)

build applications that handle component failure in a graceful manner.

First step in creating an EC2 instance?

choosing a base AMI or Amazon Machine Image.

how to scale vertically

increasing or decreasing the resources of a single server, instead of adding new servers to the system. Vertical scaling is suited for resources that are stateful or have operations difficult to manage in a distributed manner, such as write queries to databases and IOPS sizing in storage volumes. For example, if your EC2 instance is performing slowly, then you can scale up its instance size to obtain more compute and memory capacity. Or when your EBS volumes are not hitting the required IOPS, you can increase their size or IOPS capacity by modifying the EBS volume. Note that for some services such as EC2 and RDS, the instance needs to be stopped before modifying the instance size.

Asynchronous Integration (Best Practices when Architecting in the Cloud- Implement Loose Coupling)

interacting components that do not need an immediate response and where an acknowledgement that a request has been registered will suffice, should integrate through an intermediate durable storage layer.

Synchronous replication (for Durable Data Storage)

only acknowledges a transaction after it has been durably stored in both the primary storage and its replicas. It is ideal for protecting the integrity of data in the event of a failure of the primary node.

Security groups

operate on the instance layer. They serve as virtual firewalls that control inbound and outbound traffic to your VPC resources.

Relational Databases

provide a powerful query language, flexible indexing capabilities, strong integrity controls, and the ability to combine data from multiple tables in a fast and efficient manner.

Pilot Light (Disaster Recovery Methods)

Quicker recovery time than backup and restore because core pieces of the system are already running and continually kept up to date. Examples are secondary production databases that are kept in sync with the primary through data mirroring or replication. Data loss is very minimal in this scenario for the critical parts, but for the rest, you have the same RTO and RPO as backup and restore.

Well-Defined Interfaces (Best Practices when Architecting in the Cloud- Implement Loose Coupling)

reduce interdependencies in a system by allowing various components to interact with each other only through specific, technology agnostic interfaces, such as RESTful APIs.

Active redundancy

requests are distributed to multiple redundant compute resources. When one of them fails, the rest can simply absorb a larger share of the workload.

Network ACL and rules

Rules are evaluated starting with the lowest numbered rule. As soon as a rule matches traffic, it is applied, regardless of any higher-numbered rule that might contradict it. Unlike security groups, network ACLs let you create both allow and deny rules, for both inbound and outbound traffic.
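The first-match evaluation order can be sketched as follows (the rule shape here is illustrative, not the AWS API format):

```python
def evaluate_nacl(rules, port):
    """Evaluate rules in ascending rule-number order; the first match wins."""
    for rule in sorted(rules, key=lambda r: r["number"]):
        if rule["port"] == port or rule["port"] == "*":
            return rule["action"]            # first matching rule decides
    return "DENY"                            # implicit deny if nothing matched

rules = [
    {"number": 100, "port": 443, "action": "ALLOW"},
    {"number": 200, "port": "*", "action": "DENY"},
    {"number": 300, "port": 443, "action": "DENY"},  # never reached for port 443
]
print(evaluate_nacl(rules, 443))  # ALLOW - rule 100 matches before rule 300 is considered
print(evaluate_nacl(rules, 22))   # DENY  - falls through to the catch-all rule 200
```

Note how rule 300 contradicts rule 100 but has no effect: once rule 100 matches, evaluation stops.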

Partition (EC2 Placement Groups)

spreads your instances across logical partitions such that groups of instances in one partition do not share the underlying hardware with groups of instances in different partitions. A partition placement group can have partitions in multiple Availability Zones in the same Region, with a maximum of seven partitions per AZ. This strategy reduces the likelihood of correlated hardware failures for your application.

EC2 instance health check

Status checks on EC2 instances are performed every minute and are built into EC2. You can create alarms that are triggered based on status checks. There are two types of status checks: system status checks and instance status checks.

Spread (EC2 Placement Groups)

strictly places each of your instances on distinct underlying hardware racks to reduce correlated failures. Each rack has its own network and power source. A spread placement group can span multiple Availability Zones in the same Region, with a maximum of seven running EC2 instances per AZ per group.

system status checks and instance status checks

System status checks monitor the AWS systems on which your instance runs; AWS is responsible for fixing the underlying issue. Instance status checks monitor the software and network configuration of your individual instance and typically require your involvement to repair.

Backup and Restore (Disaster Recovery Methods)

Take frequent backups of your most critical systems and data and store them in a secure, durable, and highly available location. Once disaster strikes, you simply restore these backups to recover data quickly and reliably. Backup and restore is usually considered the cheapest option, but it also has the longest RTO. Your RPO will depend on how frequently you take your backups.

RPO or Recovery Point Objective

the acceptable amount of data loss measured in time

Network ACLs operate on what level?

the subnet layer, which means they protect your whole subnet rather than individual instances. Similar to security groups, traffic is managed through the use of rules. A network ACL rule consists of a rule number, traffic type, protocol, port range, source of the traffic for inbound rules or destination of the traffic for outbound rules, and an allow or deny setting.

RTO or Recovery Time Objective

the time it takes after a disruption to restore a business process to its service level.
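A rough worked example tying the two definitions together (all numbers hypothetical): with backups every 4 hours, data written just after the last backup can be lost, so the worst-case RPO is 4 hours; if a restore takes 1.5 hours, that is your RTO.

```python
# Hypothetical figures for a backup-and-restore strategy.
backup_interval_hours = 4      # a backup is taken every 4 hours
restore_duration_hours = 1.5   # measured time to restore service from a backup

# Worst case: disaster strikes just before the next backup would have run.
worst_case_rpo = backup_interval_hours   # hours of data that can be lost
rto = restore_duration_hours             # hours until the business process is back

print(f"Worst-case RPO: {worst_case_rpo} h, RTO: {rto} h")
```

Shrinking RPO means backing up (or replicating) more often; shrinking RTO means moving toward warmer strategies such as pilot light or multi-site.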

NoSQL Databases

trade some of the query and transaction capabilities of relational databases for a more flexible data model that seamlessly scales horizontally. NoSQL databases use a variety of data models, including graphs, key-value pairs, and JSON documents, and are widely recognized for ease of development, scalable performance, high availability, and resilience.

A security group rule is composed of:

traffic type (SSH, RDP, etc.), internet protocol (TCP or UDP), port range, origin of the traffic for inbound rules or destination of the traffic for outbound rules, and an optional description for the rule. Origins and destinations can be defined as specific IP addresses, IP address ranges, or a security group ID. If you reference a security group ID in your rule, then all resources associated with that security group are covered by the rule. This saves you the trouble of entering their IP addresses one by one.
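The parts of a rule can be modelled as a simple record (this is an illustrative sketch, not the AWS API shape; the `sg-0123abcd` ID is made up):

```python
from dataclasses import dataclass

@dataclass
class SecurityGroupRule:
    traffic_type: str      # e.g. "SSH", "RDP", "HTTPS"
    protocol: str          # "tcp" or "udp"
    port_range: tuple      # (from_port, to_port)
    source: str            # CIDR range or a security group ID
    description: str = ""  # optional

# Referencing a security group ID covers every resource associated with that
# group, instead of listing each member's IP address individually.
ssh_from_bastion = SecurityGroupRule("SSH", "tcp", (22, 22), "sg-0123abcd", "bastion access")
https_from_anywhere = SecurityGroupRule("HTTPS", "tcp", (443, 443), "0.0.0.0/0")
print(ssh_from_bastion.source.startswith("sg-"))  # True - a group reference, not an IP
```

The group-reference form is what keeps rules stable as instances join or leave the referenced group.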

Multi-Site

Run exact replicas of your infrastructure in an active-active configuration. In this scenario, all you need to do in case of a disaster is reroute traffic onto another environment. Multi-site is the most expensive option of all, since you are essentially multiplying your expenses by the number of environment replicas. It does give you the best RTO and RPO, however.

Graph Databases

uses graph structures for queries. ■ Search Functionalities ■ Search is often confused with query. A query is a formal database query, addressed in formal terms to a specific data set, while search enables datasets to be queried that are not precisely structured. ■ A search service can be used to index and search both structured and free-text data, and can support functionality that is not available in other databases, such as customizable result ranking, faceting for filtering, synonyms, and stemming.

access to the EC2 instance will need to be secured using what?

using one of your key pairs. Make sure that you keep a copy of the private key so that you'll be able to connect to your instance when it is launched. There is no way to associate another key pair once you've launched the instance. You can also proceed without selecting a key pair, but then you would have no way of directly accessing your instance unless you have enabled some other login method in the AMI or via Systems Manager.

Standby redundancy

when a resource fails, functionality is recovered on a secondary resource with the failover process. The failover typically requires some time before it completes, and during this period the resource remains unavailable. This is often used for stateful components such as relational databases.

When creating an EC2 instance--after you have chosen your AMI, what is the next step?

you select the instance type and size of your EC2 instance. The type and size determine the physical properties of your instance, such as CPU, RAM, network speed, and more. There are many instance types and sizes to choose from, and the selection will depend on your workload for the instance. You can still modify your instance type after launch (for EBS-backed instances, by stopping the instance first), a practice commonly known as "right sizing".

Cluster (EC2 Placement Groups)

your instances are placed close together inside an Availability Zone. A cluster placement group can span peered VPCs that belong in the same AWS Region. This strategy enables workloads to achieve low-latency, high-throughput network performance.

AWS Well-Architected Framework - Operational Excellence

○ The ability to support development and run workloads effectively, gain insight into their operations, and continuously improve supporting processes and procedures to deliver business value. ○ Design Principles ■ Perform operations as code ■ Make frequent, small, reversible changes ■ Refine operations procedures frequently ■ Anticipate failure ■ Learn from all operational failures

AWS Well-Architected Framework - Performance Efficiency

○ The ability to use computing resources efficiently to meet system requirements, and to maintain that efficiency as demand changes and technologies evolve. ○ Design Principles ■ Democratize advanced technologies ■ Go global in minutes ■ Use serverless architectures ■ Experiment more often ■ Consider mechanical sympathy

Some limitations of EC2 Placement Groups

● You can't merge placement groups. ● An instance cannot span multiple placement groups. ● You cannot launch Dedicated Hosts in placement groups. ● A cluster placement group can't span multiple Availability Zones.

